Five Paradigm Shifts in Programming Language Design and their Realization in Viron, a Dataflow Programming Environment

An old POPL paper from 1983:

We describe five paradigm shifts in programming language design, some old and some relatively new, namely Effect to Entity, Serial to Parallel, Partition Types to Predicate Types, Computable to Definable, and Syntactic Consistency to Semantic Consistency. We argue for the adoption of each. We exhibit a programming language, Viron, that capitalizes on these shifts.

Summaries of the shifts (from paper):

  • From Effect to Entity. Large objects are made as mobile as small, so that they can be easily created, destroyed, and moved around as entities instead of being operated on piecemeal as a static collection of small objects.
  • From Serial to Parallel. Constructs intended for parallel computation are also used to subsume both serial and data constructs, achieving a simplification of the control structures of the language as well as providing a framework for organizing the data structures.
  • From Partition Types to Predicate Types. The partition view of types partitions the universe into blocks constituting the atomic types, so that every object has a definite type, namely the block containing it. The predicate view considers a type to be just another predicate, with no a priori notion of the type of an object.
  • From Computable to Definable. Effectiveness is a traditional prerequisite for the admission of constructs to programming languages. Weakening this prerequisite to mere definability expands the language’s expressiveness thereby improving communication between programmer and computer.
  • From Syntactic Consistency to Semantic Consistency. Consistency is traditionally enforced by syntactic restriction, e.g. via type structure. A recently developed alternative popularized by Dana Scott takes the dual approach of resolving inconsistencies via semantic expansion as done for models of the untyped lambda calculus. We argue that the latter approach simplifies language structure.

Nothing like that could get into POPL anymore :) What are the paradigm shifts going on today?

Yes, definable!

I love the computable to definable transition, which is basically the opposite of the philosophical push towards constructive logic. Nearly every application of theorem proving to programming incorporates definitions which are used only for proofs, and thus do not need to be run. If these are easier to express in a definable but not computable manner, why not make it easy on ourselves?
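
As a rough Coq sketch of that point (purely illustrative names, nothing from the paper or the comment): a relational specification is definable whether or not anything can compute it, and it can appear in lemma statements and proofs without ever being run.

    (* Assumes only the standard library's PeanoNat. *)
    Require Import PeanoNat.

    (* A definable-but-not-necessarily-computable specification:
       "n is the least natural satisfying P", stated relationally.
       P may be undecidable; nothing here needs to execute. *)
    Definition is_min (P : nat -> Prop) (n : nat) : Prop :=
      P n /\ forall m : nat, P m -> n <= m.

    (* The spec is still precise enough to prove facts about it,
       e.g. that any two minima coincide. *)
    Lemma is_min_unique :
      forall (P : nat -> Prop) (n m : nat),
        is_min P n -> is_min P m -> n = m.
    Proof.
      intros P n m [Hn Hnle] [Hm Hmle].
      apply Nat.le_antisymm.
      - apply Hnle. exact Hm.
      - apply Hmle. exact Hn.
    Qed.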

Not Computable = Imprecise

The problem is, if something is not computable then it has no precise definition.

Oops, misthreaded

I meant to respond but posted below instead.

Huh?

I mean, what?

Guessing here, but I think

Guessing here, but I think he means that a definition is incomplete if it can't produce an instance of the construction. I'm not sure "precise" is the right word, but I can sort of see it in that proofs of existence via LEM and other non-constructive means might give you some sense of a construction's shape, but they don't give you all the details. Hence, "imprecise".

Definitions

Mathematical proofs are imprecise because they rely on unspecified domain knowledge and appeals to intuition. All proofs eventually reduce to a "because it's obvious" appeal to intuition. The benefit of computable proofs is that there is at least a formal definition of what that minimal system is.

That doesn't seem quite

That doesn't seem quite right. Let's not confuse computability with formal proofs. You can have mechanized proofs (that are certainly precise) that use non-computable ideas — using classical logic! See e.g. Isabelle, or even Coq/Agda with extra axioms.

The imprecision you describe can be fixed by more rigorous proofs and more complete axiom systems. Hilbert among others showed that you can write geometric proofs without appeal to intuition, and gave a sufficient list of axioms for geometry (including many that Euclid did not make explicit, because he used intuition instead).

Soon after, he and others made it possible (at least in principle) to write formal proofs.

But computability is used in the metatheory of logic — specifically, in proof theory.
Since classical logic gives definitions to things that you can't touch (even infinities), one might fear problems. That's why proof theory was born: to show that logic "works" using less objectionable techniques, that is, finitistic methods. And as far as I understand (which is not enough), finitistic methods restrict to syntactic objects (close enough to ML datatypes) and computable definitions. (Unfortunately, you can't restrict to clearly finitistic methods to prove that, e.g., Peano arithmetic is consistent — you need to go a bit beyond).

(Sorry for the lack of references).

You can have mechanized

You can have mechanized proofs (that are certainly precise) that use non-computable ideas — using classical logic! See e.g. Isabelle, or even Coq/Agda with extra axioms.

Well, but the extra axioms are computable. They might be right, just as the more traditional, uncontroversial axioms might be right; establishing whether or not they're right isn't a computability issue. To get axioms that are actually non-computable, you'd need an oracle — a handy-dandy peripheral device that you plug into your Turing machine that, when asked a question, gives an answer that it gets by some unknown means that might not be achievable using a mechanical device (perhaps it consults a djinn or deity). By plugging a suitable oracle O into a Turing machine, you can solve the halting problem — for ordinary Turing machines. An O-oracle Turing machine cannot solve the halting problem for O-oracle Turing machines. Likewise, an O-oracle Turing machine, acting as a formal system of logic capable of addressing claims about the behavior of O-oracle Turing machines, cannot escape Gödel's Theorems; it can't be simultaneously consistent and complete; and if it's sufficiently powerful, and can prove itself consistent, then it can also prove itself inconsistent.

Hm

There probably is a disagreement about what "computable" means, but in the sense used in Blaisorblade's remark those extra axioms really are not computable.

One way to think of the treatment of axioms in Coq is as free variables: saying at the beginning of a proof of P that you import the excluded-middle axiom is a sort of syntactic sugar to abstract over a variable of type (forall A, A \/ ~A). You can also directly write a proof of the function type ((forall A, A \/ ~A) -> P), of the form fun lem -> ..., and that will have (roughly) the same status.

On this basis, when Blaisorblade says that the axiom is not computable, he means that uses of the free variable corresponding to the axiom's abstraction block cut-elimination, that is, computation. There is a canonicity property that says that a closed term of type Nat is either O, or of the form (S n), where n is a closed term of type Nat. No such thing is true, of course, of closed terms of type ((forall A, A \/ ~A) -> Nat), which may be of the form fun lem -> case lem X of ... for example. By "computable" in this context, people typically mean either:

1. has a corresponding reduction/computation rule
2. has a computation rule that preserves the canonicity property that allows us to know that normal forms are really the expected values of the type

Now there are some axioms for which (1) or (2) may be true, which means that you *can* give them a computational character. For example, proof-irrelevance as defined in Coq (with the separate Prop universe) is computable in the strong sense (2), and excluded-middle can be given a computational meaning in the weaker sense (1). (I'm personally of the opinion that classical logic is constructive in the sense that it has an interesting computational interpretation (1), but some people are only content with sense (2).)

In any case, this is not what Blaisorblade was talking about. The axiom-as-abstraction mechanism blocks computation and is "not computable" in this sense, and this (simple, almost trivial) idea can be applied to any assumption (even an inconsistent one) regardless of whether we know how to make it "computable" in any of those senses.

In other words, you can write a perfectly precise proof of P under the axiom A by just writing a (perfectly precise) intuitionistic (or whatever you prefer) proof of (A -> P), and this works well in practice.

(The other question is whether other people will find the proof of (A -> P) *interesting*, and for this showing that A is computable in sense (1) or (2) definitely helps.)
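
A minimal Coq sketch of the axiom-as-abstraction idea (the names here are illustrative, not gasche's code): abstract over excluded middle and prove the result under that hypothesis, i.e. write the proof of (A -> P) directly.

    (* Excluded middle as an ordinary hypothesis, not a global Axiom. *)
    Definition LEM : Prop := forall A : Prop, A \/ ~ A.

    (* A perfectly precise intuitionistic proof of (LEM -> P),
       here with P = double-negation elimination. The variable lem
       is what blocks reduction: the proof checks, but the term does
       not compute to a canonical form on its own. *)
    Lemma dne_under_lem : LEM -> forall P : Prop, ~ ~ P -> P.
    Proof.
      intros lem P nnp.
      destruct (lem P) as [p | np].
      - exact p.
      - exfalso. apply nnp. exact np.
    Qed.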

Base Axioms

But in what formal system do you define the axioms? In any logic, how are implication, conjunction and disjunction defined? If proofs are not computable (that is, if the axiom system is not encoded on real physical computational hardware, like a Prolog interpreter running on a PC), then the definitions of these terms are in human language, normally English, and subject to all the philosophical problems of meaning (see Wittgenstein's language games). I can't even define what I mean by meaning.

Are you happy with computable/decidable proof-checking?

As gasche explained in detail, we are probably using "computable" in different senses. However, I also talked about proof theory. Proof-checking is usually expected to be computable (technically, decidable), even for classical proofs — I guess that's one required result of the proof theory of a logic.

Do we agree that, if for a logic we have decidable proof checking (and we can write a proof checker), we have defined clearly what's a formal proof for that logic and what isn't? I hope so. For practical purposes, I think that can be enough.

We can now debate whether we have defined the meaning of, say, implication in this logic — is it enough to know what's a formal proof of implication? However, now we're firmly in the realm of philosophy. If you answer yes, you are buying what is called "proof-theoretic semantics".
But this question has a parallel in programming languages.

Some people are happy to know the operational semantics of a language (say, a small-step semantics). But some argue that knowing how a construct behaves (say, reduces) does not tell you whether it has a real meaning, in terms of something other than syntax — according to them, you need a denotational semantics for that!
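
For concreteness, here is a toy small-step relation in Coq (an illustrative sketch, not from the comment): it pins down exactly how terms reduce, while saying nothing denotational about what they mean.

    (* A toy arithmetic language. *)
    Inductive tm : Type :=
      | Const : nat -> tm
      | Plus  : tm -> tm -> tm.

    (* One step of reduction: reduce the left operand, then the right,
       then replace a sum of constants by its result. *)
    Inductive step : tm -> tm -> Prop :=
      | SPlusConst : forall n m,
          step (Plus (Const n) (Const m)) (Const (n + m))
      | SPlusL : forall t1 t1' t2,
          step t1 t1' ->
          step (Plus t1 t2) (Plus t1' t2)
      | SPlusR : forall n t2 t2',
          step t2 t2' ->
          step (Plus (Const n) t2) (Plus (Const n) t2').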

Historically, the semantics of logics used to be (analogous to) denotational semantics — that's (I guess) why the latter is also called mathematical semantics. However, there's also an analogue of operational semantics — and that's proof-theoretic semantics! See also http://plato.stanford.edu/entries/proof-theoretic-semantics/; the author of that article sits one floor down, though I haven't had enough chance to take advantage of this.

Mechanical Proof Checking and Meaning by Experimentation

I think we do agree. Even with formal proof checking I would want it done by a machine which is independent of human language.

I would now be happy to define implication in this logic: we define a term and give it a clear and unambiguous interpretation.

I can check your proof if I have an identical machine, and I can understand its meaning by experiment.

I'm not sure I understand what is meant by definable

...
You often build different mathematical systems by choosing different axioms, but since a system can be consistent with either an axiom or its negation, it's clear that the axiom isn't constructible from the other axioms.

So all axioms are definable but not constructible.

So this means we can use axioms?

The vast majority of mathematicians disagree

That is the constructive logic view, but all but a vanishingly small fraction of mathematicians would disagree. The same goes for nearly all theoretical computer scientists outside of PL. Classically, a precise definition is one for which any two solutions are equal, irrespective of whether there is a finite algorithm which produces the answer.

Even constructively, it is true that no program proven safe using classical logic will ever segfault: constructive logic proves that classical proofs are valid for any negative statement.
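
A small Coq illustration of that last claim (a sketch of the standard fact, not the poster's code): double-negation elimination is intuitionistically provable when the goal is itself a negation, which is why classical reasoning is harmless for negative statements like "this program never segfaults".

    (* Triple negation collapses to single negation with no classical axiom,
       so a classical proof of a negated statement yields a constructive one. *)
    Lemma dn_elim_neg : forall P : Prop, ~ ~ (~ P) -> ~ P.
    Proof.
      intros P H p.
      apply H.
      intro np.
      exact (np p).
    Qed.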

From Computable to Definable

None of the comments so far on this topic seem to have taken into account the concluding paragraph of my treatment of this particular "paradigm shift" in the Viron paper, so let me repeat it here.

"There is a feeling in some programming circles that the burden of performance should be placed on the compiler. This is possible up to a point, although no compiler can assume the full burden, since there are always new algorithms to be discovered. Our position is that exactly this situation holds for effectiveness as well as for performance. A compiler can deal with some of the issues of finding an effective way to execute a program, but no one compiler can discover every such effective way on its own, it must sometimes depend on the programmer. Just as the impossibility of the perfect optimizer does not imply the uselessness of optimizers, so does the impossibility of the perfect automatic programmer not imply the uselessness of compilers that can find effective methods in many cases."

Perfect Optimiser

I agree that the perfect optimiser is an impossibility. In some ways predictability of the translation of a language is more important than optimisation. I can reason about access patterns and refactor code to perform better if I am aware of the cost of each operation. It is much harder to finesse performance when you are trying to persuade the optimiser to output particular code, without the transparency to know what is going on.