Relevance of Curry-Howard

Hi there,

I have read a lot about the Curry-Howard isomorphism, but its relevance is not yet clear to me. I know there are experts on this topic here on this board, so please enlighten me :-)

Technically, I understand the relation between simply-typed lambda calculus and constructive logic and how the isomorphism between proofs and programs works.

However, what does Curry-Howard mean in general? Why is it significant? Does it say anything about a setting different from simply-typed lambda calculus/constructive logic?

I understand that one interpretation of CH is this: If I have a function of type t -> t', then the function body is a constructive proof that it is possible to construct a t' value from a t value.

However, this "type inhabitation" question seems to have a trivial answer in almost any real programming language: You can construct a value/function for whatever type you want (e.g., in Haskell you can just use "undefined")
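A throwaway Haskell illustration of what I mean (any type at all can be "proved"):

```haskell
-- Both definitions typecheck at the absurd-looking type a -> b,
-- so inhabitation in Haskell proves nothing by itself.
bogus1 :: a -> b
bogus1 _ = undefined

bogus2 :: a -> b
bogus2 x = bogus2 x   -- non-termination inhabits any type, too
```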

I have never seen any type in a program that expresses an interesting, non-trivial proposition, but maybe I have not looked closely enough?

Would it be possible to take an arbitrary mathematical theorem (let's say: "string concatenation is associative", or Fermat's little theorem) and express it as a type?

I am also still a bit confused about conceptually viewing a type as a proposition. I think of types (of functions) as partial specifications, whereby the type checker checks conformance of the implementation to this partial specification. How does this fit together?

Curry-Howard and Typical Programming Languages

You are correct in your observations that given most programming languages (even those such as Haskell), it is difficult to see exactly how Curry-Howard is useful.

I have never seen any type in a program that expresses an interesting, non-trivial proposition, but maybe I have not looked closely enough?

The reason for this is because most of the typical languages used for programming, which are statically, strongly typed, still do not have a powerful enough type system to make interesting use of Curry-Howard. The languages are simply not expressive enough. What you really need are languages with dependent types (or something approximating them). Some examples of these might be Cayenne, Omega, Epigram, etc. However, none of these languages are far enough along to be used for any production level stuff.

Would it be possible to take an arbitrary mathematical theorem (let's say: "string concatenation is associative", or Fermat's little theorem) and express it as a type?

Yes, and in fact this is exactly what happens in automated theorem provers based around constructive type theory, or the calculus of (co-)inductive constructions. A good example of this is Coq. I would suggest reading through some of the tutorials/introductory documentation if you wish to learn more. Also, there is a good book on the topic (on Coq, Curry-Howard, and related issues): here.
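As a small taste (a sketch, not the exact Coq development), here is the "string concatenation is associative" example stated for lists in Lean 4: the theorem statement is the proposition-as-type, and the proof term inhabits it.

```lean
-- "Concatenation is associative", expressed as a type and proved by induction.
theorem app_assoc {α : Type} (xs ys zs : List α) :
    (xs ++ ys) ++ zs = xs ++ (ys ++ zs) := by
  induction xs with
  | nil => rfl
  | cons x xs ih => simp [ih]
```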

blah blah blah dependency injection blah blah blah

(This is a bit off-topic, but it came to mind, so what the hell.)

You are correct in your observations that given most programming languages (even those such as Haskell), it is difficult to see exactly how Curry-Howard is useful.

I recently stumbled across someone mentioning something called "dependency injection". I didn't know what it was, so I googled (I guess this is lowercase nowadays!) it and read Martin Fowler's article on it. It is a bit on the long side, and I kept waiting for the punch-line; you know, the point at which the author hits you with the insight which justifies the preceding verbosity and the hi-tech-sounding name ("dependency injection" — I can't help but think of "fuel injection", and gleaming motor engine showcases), but it seemed indefinitely postponed. And in the end, it turned out that "dependency injection" just means "abstraction" specifically by parametrization, by update and by what I think amounts to type abstraction plus update. (Apparently these are called — I kid you not — type 3 IoC, type 2 IoC and type 1 IoC...!)
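To make the point concrete, here is roughly what "type 3 IoC" amounts to in Haskell (all names made up for illustration): the "injected" dependency is just a function argument.

```haskell
import Data.Char (toUpper)

-- The formatting policy is a dependency; "injecting" it is just
-- passing it as an argument (abstraction by parametrization).
type Formatter = String -> String

greet :: Formatter -> String -> String
greet format name = format ("Hello, " ++ name)

-- Two interchangeable "injected" implementations:
shouty, plain :: Formatter
shouty s = map toUpper s ++ "!"
plain s  = s ++ "."
```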

To me this all seemed rather obvious and it got me thinking about why it isn't obvious to the author or his readership.

In Haskell, if I am given some type B which I need to produce somehow, and I realize that the B-values I need depend on some other values of type A, the first thing I do is write down "f :: A -> B". Then I write down "f a =", and then I start writing stuff after the equals sign until I have what I need. I do that because I know, once I have the type, that if there is an inhabitant of the type "A -> B", it can be expressed as "\a -> b" for some b; so the "f a =" part is always part of my solution, and I will never have to change it unless I want to. So once I've written that down I feel one step closer to my solution.

I know that for three reasons. First, because of my experience as a functional programmer. Second, because it is part of the universal property of exponentials ("factors uniquely"), that is, of function types. And third, because by the Curry-Howard correspondence with natural deduction, I can start any proof of B which depends on A by assuming A, that is, adding it as a hypothesis.

So, why is it so obscure in Java? I think part of the reason is that in Java you have update, so there are complications and additional solutions. But part of the reason is also that it largely lacks structural typing, and that makes it hard to see that a class('s interface) is a product of exponentials. (With nominal typing, you tend to think of a class by its name, rather than its structure.)
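In Haskell, where records give you something close to structural typing, the "product of exponentials" reading is almost literal (a made-up interface for illustration):

```haskell
-- A class's interface as a product (record) of exponentials (functions):
data Greeter = Greeter
  { greet   :: String -> String   -- one exponential
  , goodbye :: String -> String   -- another
  }

-- An "instance" of the class is just a value of the product type:
english :: Greeter
english = Greeter
  { greet   = \n -> "Hello, " ++ n
  , goodbye = \n -> "Goodbye, " ++ n
  }
```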

You could also blame the syntax of method signatures, which obscures the relationship with exponentials and implication. But is the syntax the cause or just a symptom? (You know what I think about syntax...) If CH could be readily applied to Java, perhaps Java's designers would have chosen a more suggestive syntax. But even if they had decided to stick with C-style syntax anyway, the idea of using abstraction to handle dependencies would have been more obvious.

Inheritance?

a class('s interface) is a product of exponentials

I suspect a "recursive record" is more appropriate to model both mutability and inheritance.

But I share your reaction towards IoC - that's (just) Java's way of abstraction by parametrization, and the manner in which it's announced as the newest and best idea makes me feel angry (sometimes) and sad (more often).

Coalgebraic models

I suspect a "recursive record" is more appropriate to model both mutability and inheritance.

That is, in fact, not far from what I had in mind. In the coalgebraic model of OO, a class is a fixpoint of a product of exponentials.

B. Jacobs, Objects and classes, co-algebraically. In: B. Freitag, C.B. Jones, C. Lengauer, and H.-J. Schek (eds), Object-Orientation with Parallelism and Persistence, Kluwer Acad. Publ., 1996, pp. 83--103.
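A tiny Haskell rendering of that idea (a hypothetical counter "class"): the record is recursive, i.e. a fixpoint, and each method returns the successor object.

```haskell
-- A class as the fixpoint of a product of exponentials: methods return
-- the next state of the object, so the record type mentions itself.
data Counter = Counter
  { value :: Int             -- observer
  , incr  :: Counter         -- method: step the state
  , add   :: Int -> Counter  -- method with an argument (an exponential)
  }

-- One implementation, hiding its state n in a closure:
mkCounter :: Int -> Counter
mkCounter n = Counter
  { value = n
  , incr  = mkCounter (n + 1)
  , add   = \k -> mkCounter (n + k)
  }
```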

Objects & Coalgebra

Hamlet, I was recently looking into some papers by Jacobs re: coalgebra. I got interested because I was looking at an implementation of logic formulas using objects, which seemed quite awkward, knowing that they are quite elegantly expressed using algebraic datatypes. Am I on the right track here? Can there be an impedance mismatch due to algebraic vs coalgebraic nature of things? Is this described somewhere?

Bialgebras

Flawed one, there is another paper of Jacobs' which might be worth looking at in this connection, although it is rather heavy on the math.

B. Jacobs, A Bialgebraic Review of Regular Expressions, Deterministic Automata and Languages. To appear in LNCS 4060. Technical Report ICIS-R05003, Institute for Computing and Information Sciences, Radboud University of Nijmegen.

I have only gotten through the first part of it, but the gist seems to be that bialgebras connect the denotational and operational descriptions which algebras and coalgebras respectively give of an entity. Regular expressions form an algebra in an evident manner. Likewise, regular automata form a coalgebra in an evident manner. (For the latter, I highly recommend:)

J.J.M.M. Rutten. Universal coalgebra: a theory of systems. Technical Report CS-R9652, CWI, Amsterdam, 1996.

Bialgebras express relationships between the two.

If you need to program in an OO language, you might be better off concentrating on an operational model of the logic you are interested in, since the relation between OO and coalgebra suggests that OO languages are better suited for expressing operational concepts. Combining Jacobs' view of classes as coalgebras and Rutten's view of automata as coalgebras, this would mean modeling an automaton as a (purely functional) class. But, first, of course, you need to understand operational models of your logic as automata. In this respect, maybe the bialgebras paper could be useful... although, I doubt it: it's rather difficult.

Beyond that, there is not much I can offer. If I needed to model formulae (= syntactic, denotational) in a standard OO language, I guess I would fall back on the visitor pattern.

To me this all seemed rather

To me this all seemed rather obvious and it got me thinking about why it isn't obvious to the author or his readership.

Quite. I had the same experience recently, and asked the guys here for help...

I think the problem in this case isn't that Martin Fowler and his troops don't know about CH, though... ;-)

Knowing his audience

To me this all seemed rather obvious and it got me thinking about why it isn't obvious to the author or his readership.

I suspect that Fowler is more aware of the source of these ideas than it may seem (he's an old Smalltalker); he has simply found a way to explain the ideas to his audience in language they understand (buzz words ;-))

I agree that anyone who has done FP, and who understands how to apply the insights into program design that FP can give you, will find the whole idea obvious, but the culture and practices of most of the programming community are horribly out of touch with those ideas.

On the positive side, this is an instance of what many of us here have wanted for a long time: FP ideas starting to influence industrial software development. Admittedly, it is still early days yet...

No FP needed

I expect people who passed functions around as arguments even in languages without first class functions (i.e., closures) to understand these things intuitively. Even in C you can do this using pointers to functions. You can do similar things with generics in other languages (e.g., Ada) etc. The issue that comes up after you realize how easy and productive this is, is, of course, dealing with state. But then, if you are an OO programmer you should know about OO application frameworks, where exactly the same issues come up.

No FP experience needed in order to understand these techniques. A competent OO programmer should find all of them quite natural.

FP is just a reliable road

No FP experience needed in order to understand these techniques. A competent OO programmer should find all of them quite natural.

We agree they SHOULD, but that doesn't mean they (people who consider themselves competent) DO.

I'm not saying that understanding FP is a necessary condition to understand abstraction, just a sufficient one; FP forces you to wrestle with the fundamental nature and uses of abstraction, and once you understand these, you can apply them in other contexts, such as OO.

Remember that most "OO" is still done by people who understand it in the C++ way, as old-fashioned procedural code somewhat organized into classes: still a step in the right direction, but not leading to a deep understanding of abstraction.

Fowler is doing a pretty good job of introducing these ideas to this very large constituency that haven't really grasped them before. (Though I'm not as impressed with some of what is being done with the ideas, but hey, you can't have everything. ;-} )

I just meant, that they

I just meant that they should have learned their OOP from me, of course...

Enrollment is up...

Do you really want to have to mark that many students? ;-)

Good point...

However, what does

However, what does Curry-Howard mean in general? Why is it significant? Does it say anything about a setting different from simply-typed lambda calculus/constructive logic?

The basic idea of the Curry-Howard isomorphism is that every significant feature of a programming language has a corresponding logical feature, and vice versa:

types       == propositions
programs    == proofs
evaluation  == cut-elimination
recursion   == induction
termination == consistency


Expanding out "types == propositions", we can further refine the idea with:

unit type         == truth
void type         == falsity
pair types        == conjunction
sum types         == disjunction
function types    == implication
dependent types   == first-order quantification
polymorphic types == second-order quantification
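A few lines of Haskell make the "types == propositions" rows concrete; each term below is total, so it really is a proof of the proposition its type expresses:

```haskell
-- A ∧ B -> A  (conjunction elimination: pair types = conjunction)
proj1 :: (a, b) -> a
proj1 (x, _) = x

-- A -> A ∨ B  (disjunction introduction: sum types = disjunction)
inl :: a -> Either a b
inl = Left

-- (A -> B) ∧ (B -> C) -> (A -> C)  (implication is transitive)
compose :: (a -> b, b -> c) -> (a -> c)
compose (f, g) = g . f

-- (A ∧ B -> C) -> (A -> (B -> C))  (currying, i.e. exportation)
curryP :: ((a, b) -> c) -> a -> b -> c
curryP f x y = f (x, y)
```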


I understand that one interpretation of CH is this: If I have a function of type t -> t', then the function body is a constructive proof that it is possible to construct a t' value from a t value. However, this "type inhabitation" question seems to have a trivial answer in almost any real programming language: You can construct a value/function for whatever type you want (e.g., in Haskell you can just use "undefined")

This is a good observation! If you look above, you'll see that logical consistency is identified with termination. Since you can write nonterminating functions in real programming languages, this means that considered as a logic, Haskell and ML are inconsistent. If you restrict yourself to terminating expressions, then you can read Haskell programs as valid logical proofs.

I have never seen any type in a program that expresses an interesting, non-trivial proposition, but maybe I have not looked closely enough? Would it be possible to take an arbitrary mathematical theorem (let's say: "string concatenation is associative", or Fermat's little theorem) and express it as a type?

The logic of Haskell or ML is not all that strong, so you can't really state any very interesting theorems with them. If you look at theorem proving systems like Coq or Twelf, then you'll see very sophisticated theorems being stated and proven. (For example, last year Georges Gonthier formalized the Four-Color Theorem in Coq, and at CMU a machine-checked soundness proof for Standard ML was built.)

I am also still a bit confused about conceptually viewing a type as a proposition. I think of types (of functions) as partial specifications, whereby the type checker checks conformance of the implementation to this partial specification. How does this fit together?

The more powerful the type system, the richer the set of specifications you can write. In a system like Coq, you can express any spec expressible in constructive set theory as a type constraint. As the types of your functions get more precise, the trickier it is to come up with a function that satisfies them, of course.

When the type system is

When the type system is expressive enough to allow useful proofs, how do you know that the type declarations are correctly proving the behaviour that you want the program to have?

Do type theory researchers expect there to be a point beyond which static type systems will become so complex that the type declarations themselves will probably contain errors?

If so, how will programmers "debug" their type declarations?

Already, in Haskell, I find that I spend most of my time getting my type declarations correct so that the type checker allows my program to compile and very little time debugging the program at runtime. However, the tools for "debugging" the type inference algorithm are so primitive compared to modern debuggers and test tools that I find development to be slower than in a dynamically typed language. I end up adding explicit type declarations into the program in a binary chop style until I find the type error. That's not really any different from adding print statements into a program for debugging, something nobody does these days.

Debugging Specifications

A couple of points:

1) My preferred debugging mechanism is print statements. :-) This technique is far from dead.
2) Writing down "wrong" specifications is a real problem. However, it usually becomes apparent that you are trying to prove the wrong thing while constructing the proof. Either the proof is a lot easier than you expected, or you can't make the proof go through. Occasionally, you just prove the wrong thing and don't realize it until later, when some other proof doesn't go as expected.
3) The ultimate goal is to have an easily understandable set of definitions which flow to an easily understandable set of interesting theorems, and a belief that the theorems are true given the definitions. This paper has some interesting thoughts about this.

There's been some talk of

There's been some talk of this in #haskell - code making use of all of GHC's extensions can easily get to the point where more feedback's useful. One proposal is for the compiler to dump out the typing derivation (that is, the sequence of steps the typechecker took) so another application can let the programmer browse through it and see what was inferred from where and what was never inferred at all.

When the type system is

When the type system is expressive enough to allow useful proofs, how do you know that the type declarations are correctly proving the behaviour that you want the program to have?

By proving it! Let me give an example of how this works.

Suppose I'm trying to show that a compiler optimization preserves some invariant of the IR (e.g., you never jump to the middle of a block). So I define a type for the terms of the IR, define another type for well-formed IR expressions, and write down an optimize function that transforms an IR expression.

As you rightly observe, there's no guarantee -- yet! -- that this will have the desired property. So what we do is define another function that takes an IR term t and a term showing that it's well-formed, and returns a term showing that the result of the optimize function is also well-formed. So the safety proof will itself be a function, with a type like:

safety : (t : IR) -> (well_formed t) -> (well_formed (optimize t))


Do type theory researchers expect there to be a point beyond which static type systems will become so complex that the type declarations themselves will probably contain errors?

Types are a convenient way of saying what the invariants of a program are. If you're new to a domain, then obviously you're going to be unclear on what the relationships between domain concepts are, and you're inevitably going to make errors. A type checker will find inconsistencies in your understanding, and this will help you zoom in on the right things faster.

Happily, complex type declarations can be a pay-as-you-go kind of mechanism. You only need a complex declaration if you need to show a complex property. In the compiler example, there's no need to define the safety function until you actually need it.

If so, how will programmers "debug" their type declarations?

Good question. Nobody really knows yet, because we don't have a lot of experience programming in languages with complex type systems yet. I think of it as a research opportunity. :)

Umm, isn't there a gotcha in that one...

By proving it! Let me give an example of how this works.

I don't think you have proved it yet: don't you _also_ have to prove that the functions terminate? ...a non-trivial problem.

Normalizing Languages

Many type systems only admit terminating programs, unless you provide a general recursion operator. If you use one of these languages (Epigram, to name one I've used) you don't have to worry about separate termination proofs.

I think this is where Godel steps in...

...and says a type system that only admits terminating programs is insufficiently expressive to state all mathematical truths.

But let me be the first to say that there may well be sufficiently many truths of practical importance to make such type systems valuable.

When the type system is

When the type system is expressive enough to allow useful proofs, how do you know that the type declarations are correctly proving the behaviour that you want the program to have?

I parodied this situation here. There is always a point where some informal specification gets formalized. "Mistakes" can always be made at this point. There is obviously no formal solution to this.

Do type theory researchers expect there to be a point beyond which static type systems will become so complex that the type declarations themselves will probably contain errors?

They can only be wrong with respect to some specification. For powerful type systems most likely the types themselves will be the (formal) specification and thus cannot be wrong; they define what is right. However, with respect to the informal specification (what the user wants) see above.

If so, how will programmers "debug" their type declarations?

Again, you can add a "higher" layer in which to formalize the necessary things, but this will just move the problem somewhere else. Without that higher level, as someone else said, you'd pretty much just have to assume it's right until it keeps you from creating code that you know is correct, or allows you to write code that you know is wrong.

Isn't that the wrong way round?

For powerful type systems most likely the types themselves will be the (formal) specification and thus cannot be wrong; they define what is right. However, with respect to the informal specification (what the user wants) see above.

Surely what the user (strictly customer) wants cannot be wrong. They're paying the bills after all.

The type declarations should allow us to prove that the program does what the customer has asked for.

Types that can only prove that the program does what the type system says it does are surely tautological.

Surely what the user

Surely what the user (strictly customer) wants cannot be wrong.

That was my intent, but you need a formal specification to talk about verification. The formal specification defines what is correct with respect to the verifier.

The type declarations should allow us to prove that the program does what the customer has asked for.

And what is it the customer asked for? To check anything (automatically/mechanically) you need to first formalize it. There is no formal system that can check that the program does what the customer wants, at least not until we know a heck of a lot more about the human mind.

I think we're talking past each other.

Yes, we have to use a formal notation to be able to use automatic proofs.

The question was, to be useful must the formal notation itself be so complex that specifications will probably contain errors?

And if so, how will programmers debug those specifications?

And, that makes me think, how is that going to be any different from debugging programs themselves?

As long as specifications

As long as specifications are less complex than implementations, you are trading up to something better. And it shouldn't be that surprising that a specification will usually be simpler than an implementation - the specification only has to state what should occur, while an implementation must spell out the details of how that is to be done. Take a look at algebraic specification, which does a fine job of abstracting details of the specification from details of implementation: there may be many models that satisfy the specification, each the result of the particular details of an implementation.

Or consider it this way: which would you rather debug: Machine code or assembler? Assembler or C? C or some higher level language? A specification can be thought of as a higher level description again (one that doesn't have to worry about finicky details like actually performing any calculation). Why would you expect a specification to be as hard or harder to debug than an implementation? Given that it doesn't have to say as much (it only has to say what, not how) I would expect a specification to almost always be simpler.
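A Haskell sketch of the what/how split (names made up): the specification of sorting is two short properties, while even a naive implementation has to commit to the finicky details.

```haskell
-- The *what*: output is ordered and has the same elements as the input.
sorted :: Ord a => [a] -> Bool
sorted xs = and (zipWith (<=) xs (drop 1 xs))

specSort :: Ord a => [a] -> [a] -> Bool
specSort inp out =
  sorted out && all (\x -> occurs x inp == occurs x out) (inp ++ out)
  where occurs x = length . filter (== x)

-- The *how*: one of the many models of the spec (insertion sort).
isort :: Ord a => [a] -> [a]
isort = foldr insert []
  where
    insert x [] = [x]
    insert x (y:ys)
      | x <= y    = x : y : ys
      | otherwise = y : insert x ys
```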

Perhaps we're talking past each other because, given the point I was emphasizing, the answer is pretty much obvious. A complex informal specification will certainly produce a (complete) formal specification just as complex, and probably more so, possibly much more so. It is unlikely that even an objective but ambiguous form of the informal specification will be correct, so we can be certain that the formal specifications will contain errors, much as no one seriously believes any large program is bug-free (incidentally, programs are complete formal specifications). We can pretty much exclude the case of simple informal specifications, as such are rare and not particularly interesting, except that the formal specification of them should not be excessively complex. But to address your latter questions: as my linked-to parody suggests, the answer to your last question is "It isn't."

I guess we're in violent agreement

I realise that programs are formal specifications. There are a great many tools that help programmers test and debug programs at runtime. There are very few tools to help programmers test and debug complex static type declarations or specifications. My experience is that the current state of the art in statically typed languages makes understanding and fixing errors harder than in dynamically typed languages. Many more errors get caught at compile time but there are few tools to help you diagnose the problem.

Is there any research to address this problem of diagnosis?

Type checking is a tool

I must be missing something, but isn't static type checking itself "a tool for understanding errors" which "helps you to diagnose the problem"? (This is in contrast to dynamic checking, which just detects errors.) So it seems you are rather arguing about the user interface of that tool...

Exactly

Programming is nothing but a user interface problem.

If we could easily understand our programs at the machine level we wouldn't need high level languages, type systems, debuggers and IDEs.

Type debugging

I realise that programs are formal specifications. There are a great many tools that help programmers test and debug programs at runtime. There are very few tools to help programmers test and debug complex static type declarations or specifications.

One issue is that most current type systems are nowhere near as complex or expressive as programming languages, so the types themselves typically only admit rather trivial "errors".

My experience is that the current state of the art in statically typed languages makes understanding and fixing errors harder than in dynamically typed languages. Many more errors get caught at compile time but there are few tools to help you diagnose the problem.

Is there any research to address this problem of diagnosis?

I didn't mention what follows earlier, because I thought you were trying to say that getting the type declarations/definitions right was the issue, and I was responding that it is difficult (or usually unnecessary) to have a notion of "right" for them. But it seems that you are really saying that the connection between the type error and the programming error is often unclear. This, however, is a well-recognized problem which has had research thrown at it, especially fairly recently.

Two examples are Chameleon's Type Debugger (which also works for Haskell) and the technique called type error slicing.

And of course there is research on making type errors clearer.

This is by no means an exhaustive listing.

You are going to enjoy this

You are going to enjoy this paper. I sure did.

Why soitenly!

Oh, it's Curry-Howard, not Curly Howard. Sorry. Never mind.

Now see ...

that's funny!

Hmm, what about "multi-stooged languages?" (Three being sufficient for most purposes ... or is that "poiposes"?)

Nyuk, nyuk, nyuk!

It might be the only language with a "Y you..." combinator.

Wow

Now this is some seriously inside humor!

However, what does

However, what does Curry-Howard mean in general? Why is it significant? Does it say anything about a setting different from simply-typed lambda calculus/constructive logic?

In order: That there is an embed-project pair between the simply-typed lambda calculus and intuitionistic type theory. The idea, generalized, provides a bridge connecting logic and computer science; this is very significant. Technically no, but there are Curry-Howard-like correspondences for various logics; CH is used as a catch-all for the general idea.

However, this "type inhabitation" question seems to have a trivial answer in almost any real programming language: You can construct a value/function for whatever type you want (e.g., in Haskell you can just use "undefined")

This is why CS people have been getting more out of CH than logic people; that, and logic is at least twice as old as CS. Or maybe this is just my skewed perspective. That said, plenty of computer science concerns itself with strongly normalizing languages that do fit CH exactly.

Would it be possible to take an arbitrary mathematical theorem (let's say: "string concatenation is associative", or Fermat's little theorem) and express it as a type?

It wouldn't even be hard given dependent types. As others have mentioned Coq and other proof assistants do exactly this.

I am also still a bit confused about conceptually viewing a type as a proposition. I think of types (of functions) as partial specifications, whereby the type checker checks conformance of the implementation to this partial specification. How does this fit together?

What is a "specification"? What is "checking conformance"? It would not be untoward to consider a specification exactly a proposition (or more likely a predicate).

However, addressing the general theme of your questions: "Why is CH interesting to computer scientists?" The main thing is that it immediately lets us steal tons and tons of work logicians have done, apply logician techniques and logician proofs to computer programs. For example, Kolmogorov's embedding of classical propositional logic/proofs into intuitionistic propositional logic/proofs is exactly the typed CPS transform. Various logical constructs lead to new constructs in computer languages or much cleaner ways of reasoning about them. Methods of finding proofs become ways of deriving programs from an FP view; from an LP view, logics themselves can be turned into programming languages and the methods of finding proofs in them become methods of evaluating programs. It goes the other way too, but I'm less versed in and less interested in what the logicians are getting out of it. Results that provide bridges like this in mathematics are often the most celebrated ones.
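To illustrate, here is a small Haskell rendering of the logic-to-CPS direction (using a type variable r as the answer type rather than falsity): excluded middle, A ∨ ¬A, has no intuitionistic proof, but its double negation does, and that proof is exactly the call/cc-style CPS program below.

```haskell
-- Under the double-negation translation, "not A" becomes a continuation:
type Not r a = a -> r

-- A proof of ¬¬(A ∨ ¬A): the "evidence" for ¬A is a continuation that,
-- if ever handed a real A, jumps back and answers with Left a instead.
nnLEM :: Not r (Not r (Either a (Not r a)))
nnLEM k = k (Right (\a -> k (Left a)))
```

Running nnLEM against a concrete continuation shows the backtracking in action: the right branch is tried first, and the moment it supplies an actual A, the whole computation restarts in the left branch.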

Kolmogorov operation

Could you elaborate / point to resources on this? Google seems to be silent on this point. I can see how it might be done, but reading about it is so much easier than thinking...

Logic side

The Kolmogorov translation is a mapping K from logical formulas to logical formulas, such that a formula f is provable in the classical setting iff K(f) is provable in the intuitionistic setting. Other such translations were given by Goedel and Gentzen (and all are so-called double-negation translations, because they introduce double negations for every atom or subformula). I think Google should turn up more info if you use other keywords.

It is in itself remarkable that such translations exist, even though intuitionistic logic is a subsystem of classical logic. I'm afraid I can't say anything about the types side of this.

BHK interpretation

The Kolmogorov translation is a mapping K on logical formulas to logical formulas, such that a formula f is provable in the classical setting iff K(f) is provable in the intuitionistic setting.

Not quite.

The Kolmogorov interpretation is *not* about the relationship between classical and intuitionistic provability. The interpretation is an informal definition of the semantics of IL:

To prove A or B you must either prove A or prove B.
To prove A and B, you must prove A and prove B.
To prove A -> B, you must prove B assuming A has been proved.
To prove ^A is to prove A -> absurdity.
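Under Curry-Howard these clauses read off directly as types. A sketch (Haskell rendering mine, with Void playing absurdity):

```haskell
import Data.Void (Void)

type And a b  = (a, b)       -- a proof of each conjunct
type Or a b   = Either a b   -- a proof of one disjunct, tagged
type Impl a b = a -> b       -- a method turning proofs of a into proofs of b
type Not a    = a -> Void    -- ^A as A -> absurdity

-- A proof term for (A and B) -> (A or C):
example :: Impl (And a b) (Or a c)
example (a, _) = Left a

main :: IO ()
main = print (case (example (1 :: Int, 'b') :: Either Int ()) of
                Left n  -> n
                Right _ -> 0)
```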

The point of view that IL is a 'subset' of CL can be justified by the fact that any derivation in the IL is automatically a derivation in the CL just because the IL drops the law of excluded middle. No translation is required.

Thanks to the "double negation translation", one can go in the opposite direction, namely that if a formula is provable in the CL, then the double negated formula is a proof in the IL (which makes the CL a 'subset' of the IL).

Ok, but that's not what I said

The Kolmogorov interpretation is *not* about the relationship between classical and intuitionistic provability.

That's not what I said. The question was about the Kolmogorov translation. You are talking about the BHK interpretation, the standard proof-interpretation semantics for intuitionistic logic due to Heyting and Kolmogorov. They are different things.

(I also don't know if I entirely agree with the informal interpretation as you give it; the interpretation concerns proof objects that should be constructed, not just the possibility of proving something as you say here, especially for the case of implication.)

 The confusion is probably because of the differences in terminology, sorry about that. The embedding (which I called 'Kolmogorov translation') seems to be usually called the 'Kolmogorov operation', or simply 'double-negation translation'.

BHK

(I also don't know if I entirely agree with the informal interpretation as you give it; the interpretation concerns proof objects that should be constructed, not just the possibility of proving something as you say here, especially for the case of implication.)

You may of course disagree, but 'proof' rather than 'construction' is what is used in the BHK formulation, computer science professors' popularization attempts notwithstanding.

CPS and Glivenko theorem

Double-negation translation (in the spirit of Glivenko theorem)
of classical into intuitionistic logic is very closely related
to CPS. Perhaps you might find relevant

http://pobox.com/~oleg/ftp/Scheme/logic.scm

It uses the Curry-Howard correspondence (in the inverse type-checking
direction) to obtain proofs of some theorems expressed in types,
e.g., that the Law of Excluded Middle indeed holds given the
above translation.

Dual Calculus

There is a nice paper by Wadler: Call-by-value is dual to call-by-name. He presents a calculus whose type system is classical, together with two mappings to the lambda calculus (call-by-value and call-by-name).

From my point of view, CH is

From my point of view, CH is important for at least two reasons.

First, the way many calculi are developed is that you start with a sound logic, prove some properties about it to show that it is nice, then extend it with recursion to get completeness, and then show that most of it is still nice. Doing things this way you tend to get better behaved calculi in the end. Also, it is possible to design a language in such a way that a sublanguage retains soundness; so for programs in that fragment, you get termination.

Second, type inhabitation is important, but what is even more important is cut elimination/proof normalization. Adding recursion to a typed calculus makes it trivial as a logic but not (necessarily) as a proof theory! Rewriting adds a new dimension to a logic.

I have never seen any type in a program that expresses an interesting, non-trivial proposition, but maybe I have not looked closely enough?

In Coq and Epigram, you can express complex things. But even in Haskell, you can do interesting things. For example, you can define a datatype of square matrices (invariant: it is square) or untyped lambda-terms in some context (invariant: the type says how many free variables there are); using so-called generalized datatypes, you can write simple type-safe interpreters (invariant: evaluation cannot go wrong). In Generic Haskell, I wrote a program which takes two types and infers canonical isomorphisms between them (invariant: invertible, canonical).
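A minimal sketch of the type-safe interpreter idea mentioned above, using GHC's generalized datatypes (the particular constructors are mine): the index on Expr records the type an expression denotes, so evaluation cannot go wrong.

```haskell
{-# LANGUAGE GADTs #-}

-- The type index on Expr is the type of the value it denotes.
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool
  Add     :: Expr Int  -> Expr Int -> Expr Int
  If      :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- Total over well-typed syntax: no run-time tag checks, no failure case.
eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b
eval (Add x y)   = eval x + eval y
eval (If c t e)  = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))
```

Note that an ill-typed term such as `Add (BoolLit True) (IntLit 1)` is rejected by the type checker; the invariant lives in the datatype, not in eval.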

BTW, nat said:

Do type theory researchers expect there to be a point beyond which static type systems will become so complex that the type declarations themselves will probably contain errors? If so, how will programmers "debug" their type declarations?

To which I respond: first, types are themselves typed by kinds, which mitigates this, and you can imagine an infinite hierarchy of sorts; and second, in my view (oh, I am so careful these days...!), the point of a type system is not to catch errors but to increase expressiveness (and modularity); that is, I want to be able to write down what is in my head, whether it is correct or not. Once I've written it down, it is much easier to analyze it and think about its (in)correctness.

the point of a type system

the point of a type system is not to catch errors but to increase expressiveness (and modularity)

Well said!

Type systems and expressiveness

Doesn't that depend on your definition of "expressiveness"? It is certainly the case that there are meaningful programs which are not typable.
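The classic witness is self-application, which has a perfectly good untyped meaning but no simple type; a sketch of how Haskell handles it (the newtype and names are mine):

```haskell
-- selfApp = \x -> x x   -- rejected: x :: a and x :: a -> b at once
--
-- A recursive newtype recovers it by marking exactly where the
-- type recursion happens:
newtype Wrap b = Wrap (Wrap b -> b)

selfApp :: Wrap b -> b
selfApp w@(Wrap f) = f w

main :: IO ()
main = print (selfApp (Wrap (\_ -> 42 :: Int)))
```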

I wouldn't express the goal of a type system to be expressiveness itself, but rather to provide a way to structure programs and to support (though not guarantee) their correctness.

Working with types

All meaningful programs work with types. Now, there may be some programs which are not expressible (at least conveniently) with the current slate of type checkers, but that is different from saying that no types are being used.

Personally, I find the ability of static type checking to catch errors as a nice side effect, but not the primary goal. In my mind, the nice thing about static guarantees coincides with the ability to locally reason about abstractions - one can be quickly overwhelmed if required to dig through code and/or comments. That is, in any large set of software, you must deal with the layers of abstraction in smaller pieces. Without knowing the types being manipulated at a local level, I have to hold too much in my mind to accomplish a specific task.

The best analogy I can give is the relational model of databases. There appears to be a reason why the structure of tables is explicitly laid out, as well as the relations between tables (foreign keys). Sure, the enforcement of referential integrity is a nice bonus, in that I can be sure that values are guaranteed to be within a certain domain. But more generally, explicitly laying out the types and relations helps me reason about the structure. Without these restrictions, I constantly have to worry about what kinds of values go where - Is this column an integer, a string, a foreign key to another table, etc...

Well, types provide the same expressiveness - they allow one to quickly discern the types being manipulated. In that sense, they should be viewed as increasing the expressiveness of the language.

Type systems and expressiveness

I'm not sure what exactly you mean by "all meaningful programs work with types." If you mean "all meaningful programs are typable" then the sentence is false: there are programs with well-defined semantics which are not typable, and this is not simply a statement about the current state of the art in type-checking technology. It is a theorem (in the lambda calculus), to put it bluntly.

Now, I am not disagreeing with the statement that types do help to reason about a program, as well as to express, i.e. describe a rich set of data-values and programs. Furthermore, they provide a way to structure a program and to model the application domain. In that sense I do agree that a type system does contribute to "expressiveness." But if the definition of expressiveness is given in terms of which programs are "legal", as in "language A is more expressive than language B iff there are some terms/programs which can be expressed in A and have well-defined meaning in A but not so in B" then type systems do indeed restrict expressiveness, rather than enhance it.

My point is that when we talk about "expressiveness" it is easy to confuse different kinds of expressiveness.

Furthermore, I do disagree with the statement that "catching errors" is just a "side-effect." Certainly types help reasoning about the abstractions of the program, but their defining characteristic is that they restrict the set of programs that are allowed. This is not a side-effect. It is the defining property.

Expressiveness as used by us

Expressiveness as used by us has a well-known technical meaning in the field.

Expressiveness as used by us

Perhaps you could elaborate a little on what is your use of "expressiveness"?

The full quote

"the point of a type system is not to catch errors but to increase expressiveness (and modularity), that is, I want to be able to write down what is in my head, whether it is correct or not. Once I've written it down, it is much easier to analyze it and think about its (in)correctness."

Expressiveness

Sorry if I didn't make myself clear, but my question is about the technical meaning of "expressiveness" which Ehud mentions. Surely the one suggested by the quote is not the technical definition. Does anyone have a specific technical definition (or a reference)?

Ehud seems to suggest that there is a formal definition of expressiveness and that type systems increase such expressiveness (and that it is generally accepted in the field.) But I am aware of one (formal) definition of expressiveness under which type systems restrict it rather than increase it (the intuition of which I mentioned in a previous post.)

I am quite curious to find out what is the formal definition of this accepted notion of expressiveness that makes a typed language more expressive than an untyped language.

...but a good paper on the subject is Felleisen's On the Expressive Power of Programming Languages. It's been discussed a number of times on LtU, but I can't find where it was made into a story or thread on its own.

That old saw...

The theorem is, "There are untyped programs which are not in the image of type erasure." And yet typed lambda calculus with recursion (TLC+R) is complete. For the purpose of sequential computation, at least, what is important is the latter. And it cannot be the case that there are some untyped programs with better (asymptotic) complexity, since you can write an interpreter in TLC+R and suffer only a constant time and space hit.

Consider the inclusion i of the naturals N into the integers Z. It is not surjective. And yet the cardinality of i(N), the range of i, equals the cardinality of Z (because card(N) = card(Z) and i is one-to-one).

All lack of typeability says is that there is a certain forgetful translation from typed lambda-calculus to untyped lambda-calculus which is not surjective. But its range is Turing-complete, so who cares? The untyped programs outside the image are, at least in some sense, redundant.*

Update: In fact, you can reverse the argument. Suppose my interpreter in TLC+R evaluates terms of type U. Consider the function from untyped programs to typed ones which encodes an untyped program as a typed term of type U. Obviously there are enough. And TLC+R is consistent, so there are inhabited types besides U. So w.r.t. this translation, I could say, "there are typed terms which are not 'untypeable' (cannot be 'untyped')."

Would that be dishonest? Or do you think that the notion of typeable/untypeable is purely an artifact of the choice of translation?

I can write an interpreter for untyped terms in a typed language. And I can implement a type-checker and interpreter in an untyped language. But the former is trivial, while the latter is not.

* "redundant" = (off the top of my head) By completeness, for every terminating, untypeable program M, there is a typeable one M' s.t. M and M' denote the same numerical function and have the same complexity modulo a constant. Since LC is confluent and M is terminating, there exists a program P s.t. M -> P <- M'. Since reduction preserves typeability, it follows that P is typeable. Therefore, every untypeable, terminating program M reduces to a typeable one P. Moreover, every normal form is typeable; it follows that every untypeable, terminating program reduces to a typeable one under call-by-name reduction. (Am I right, anybody? I am pretty surprised there could be such a strong result...)

A jigsaw?

(Am I right, anybody? I am pretty surprised there could be such a strong result...)

I think you have it right, Frank, but I'm not sure it is such a "strong" result, because I think the strength actually comes from the stipulation that you are only considering terminating programs.

It is obvious you can build such a mapping by considering the equivalence classes of terminating programs that have the same normal form. By the completeness of TLC+R, each normal form (i.e. equivalence class modulo normal form) must have at least one representative that is in TLC+R.

You then map the other members of the equivalence class to that member, and you're done.

Obviously, if you allow for non-termination, you end up having a harder time creating your equivalence classes. ;-)

Hi Frank: if you start with the untyped lambda calculus, then under the type assignment system with intersection types, exactly the strongly normalizing terms are well-typed. (Obviously, this means that typechecking intersection types is undecidable without annotations.)

Time complexity

And it cannot be the case that there are some untyped progams with better (asymptotic) complexity, since you can write an interpreter in TLC+R and only suffer a constant time and space hit.

I can understand the argument for space complexity (the interpreter has a constant size, cf Kolmogorov complexity), but how does it work for time complexity?

An interpreter is a virtual

An interpreter is a kind of virtual machine; you can make those with only a constant-time overhead, too.

Let's assume there is a type

Let's assume there is a type U in TLC+R which is something like this Haskell one:

 
data U = Ap U U | Ab (U -> U)


The proposition is: "There is a translation [-] of LC terms into TLC+R terms of type U, a function eval : U -> U, and a constant (natural number) c, such that, for any terminating LC term M which normalizes to N in n steps, eval [M] -> [N] in no more than cn steps."

To prove this, you need to exhibit the translation function and the function eval. Here are some hints.

 [x] = x   -- I am fudging a bit here...
 [\x.N] = Ab (\[x]:U.[N])
 [N1 N2] = Ap [N1] [N2]

 eval (Ap (Ab f) n) = f (eval n)
 eval (Ab f) = Ab f

Given the type U, it is easy to show the proposition by induction. You only have to observe that it only takes a constant number of steps to head-reduce the left-hand sides to their right-hand sides; and then you use the induction hypothesis in the first alternative.

But there is a question what the type U is, because I didn't exactly define what I meant by "TLC+R". In fact, what I had in mind was pure typed LC, generated from some set of base types. But that won't work: AFAICT you need at least a restricted version of sums to encode the pattern-matching of the Haskell version. But it should be sufficient to assume TLC plus a type U with these constants and laws:

 Ap : U -> U -> U
 Ab : (U -> U) -> U
 kase : (U -> U -> U) -> ((U -> U) -> U) -> U -> U

 kase f g (Ap u v) = f u v
 kase f g (Ab h) = g h

This is just a description of a weak (no eta-law) sum (U * U) + (U -> U) which can only be case-analyzed into another U.

You might object that I have "built in" the capability for dynamic typing in TLC+R by demanding these constants. But they are definable in Haskell; so then you would have to argue that Haskell has built-in dynamic typing. :) Even without datatypes, though, you can define unrestricted weak sums in Haskell, using an encoding due to Jean-Yves Girard:

 -- A+B = forall c. (A -> c) -> (B -> c) -> c
 inl = \a -> \f g -> f a
 inr = \b -> \f g -> g b
 kase = \f g x -> x f g
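The same encoding with its types written out (signatures mine): RankNTypes is needed because the quantifier sits inside kase's argument type.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Girard-style weak sums: a sum is its own case-analysis function.
type Sum a b = forall c. (a -> c) -> (b -> c) -> c

inl :: a -> Sum a b
inl a = \f _ -> f a

inr :: b -> Sum a b
inr b = \_ g -> g b

kase :: (a -> c) -> (b -> c) -> Sum a b -> c
kase f g x = x f g

main :: IO ()
main = print (kase (+ 1) (\b -> if b then 0 else 0)
                   (inl 41 :: Sum Int Bool))
```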

OK, I will shut up now.

I need a little help

Could you summarize what this means in elementary computer talk? The word terminating came up, does this suggest non-determinate? Non-determinate programs are expressive.

Hank:

Some day when I have time (but I don't) I will write a web page about this, for permanent reference...

People often say that typed languages are less expressive than untyped languages because "the equivalent untyped program" is more flexible: it can take more inputs and produce more outputs, and sometimes this is sensible. (This assumes we have two languages which are exactly the same except that one has type annotations and one doesn't.) I drew attention to the fact that when you say "the equivalent untyped program" you are implicitly assuming there is only one translation of typed programs into untyped programs. There are lots of ways to translate between the two; but the one they implicitly assume is the one which erases all the types.

So I said, "this choice of translation is very arbitrary; I can show you a translation in the opposite direction which is just as unfavorable." And I explained that you can translate untyped programs into a typed language by picking a certain type which is "big" enough to encode them all. Lots of types while do. For example, the type of strings, if I encode an untyped language as source text. Text encoding is a very inefficient encoding; we can do much better using other means. But whatever means we choose, we get one type which has as values all untyped programs, or at least, representations of them.

I can now point to this type, which I have called a universal type on other occasions, and say, "Look! I put your whole language in here! Therefore, untyped programs are a subset of typed programs!" This is the same sort of argument as before, but in reverse. I have constructed the translation untyped -> typed in such a way that it looks like there are more typed programs (and so you would expect typing provides more flexibility). But it is disingenuous, because I can also pick other types to encode untyped programs. My point about numbers was that, if you have two infinite sets, you can map one into a subset of the other, even though they are the same size. (For example, there are as many even integers as there are integers, even though they form a subset.)

I also explained how I think this "equivalent program" business is disingenuous. Namely: even though typed programs translate to only a subset of untyped programs using that particular translation, it is the case that that subset can express any computational behavior. It is a subset, but it is as big as the set of ALL programs. There is still the question of whether that subset of programs is as efficient as all the rest. A bit of a counterintuitive point, actually, since nobody ever argues that typed languages are LESS efficient than untyped ones... nevertheless I proved that no program in that subset could possibly be more than a constant factor less efficient than programs outside it.

Marc made a good observation, though; he said that I had restricted my analysis only to terminating programs, that is, ones which don't loop forever. My proof, then, does not cover the case of reactive programs and servers and that sort of thing.

Whew! I'm all talked out now...

Soundness

OK I think I get it now. The original question is a bogus question. I agree and I think it is counter intuitive to think that a typed language is less expressive or for that matter more expressive simply on the basis of static typing. To me expression is an underlying semantics and my ability to perceive that semantics. This seems to be closely related to soundness since soundness is the link between the program and its semantics. This is my amateur opinion after reading wikipedia on soundness. And I do appreciate your taking the time to explain this.

There's an argument for

There's an argument for statically-typed languages being 'more expressive', which is about the static properties the type system can express. IO does, of course, add all kinds of fun and games.

You aren't getting me into a

You aren't getting me into a static/dynamic debate. Different languages have different forms and styles. I will stick to what I said above.

I'm not trying to. I do

I'm not trying to. I do think it's interesting to compare the mechanisms that are available in dynamically-typed languages to express similar things, especially once ideas like staging are thrown into the mix.

Deep expression

In another thread; Common Lisp Error Handling there is a Google tech talk. At the beginning the speaker is talking about architecture (ie theory) and mentions a hypothesis that the theory of a program can't be reconstructed from the code only. If this is true, and I am inclined to agree, we have a real expressiveness problem. The cause of this is basic and needs fundamental analysis. Comparisons are interesting in themselves, but I think we need to get more basic.

Speculation: Programs don't represent the basic theory (or application domain semantics) but only one fragment of it. Representing the full theory requires a knowledge representation of the complete theory, and the desired behavior (i.e., the program we originally couldn't understand) becomes one proof or expression in that theory. According to this, we should focus on knowledge representation of theories, not specific programs, in order to get real expressiveness.

I'm not sure I see a

I'm not sure I see a sufficient difference between theories and well-encapsulated libraries? Naturally I'd like to be able to express various laws that might hold for a library...

"Well-encapsulated library" is the problem

Encapsulations leak, and the leaks occur in areas we thought we had abstracted away, or areas where we reasoned implicitly from the believed state of the inputs. This is isomorphic to saying that a program's theory is incomplete, in that it can't be recovered from the program.

Naturally I'd like to be able to express various laws that might hold for a library...

Me too, but I get a sinking feeling that those laws would only hold under certain assumptions, both on the domain and the implementation, and that I won't be able to fully enumerate or validate all those assumptions. I might be able to get asymptotically close, though, with increasing investment.

semantics and databases

Theories are semantics; on a computer they probably take the form of a knowledge representation. This might be a data base of rules and facts. A library is what it is. There might be a knowledge representation or representational structure of another kind such as a relational data base. Knowledge representation is a specific strategy for representing semantics or in this case theory. It overlaps and collects other data base or library strategies but has the higher purpose of representing semantics. The Predicate Calculus is a starting point. Given that you can use any other programming strategies that you need to put it together.

Based on your response I think we are headed in a familiar direction. Programmers are usually taught relational database theory etc. Knowledge representation doesn't replace that but simply represents a different point of view, or spin. And comes from a different place (ie logic programming) As I see it they easily fit together.

100% CPU

I often hear this argument that reactive programs loop forever, but to me it sounds a bit imprecise. Unlike "genuine looping programs", "well-behaved" reactive programs stop responding (I mean, providing information outwards) in finite time after the environment stops requesting (providing information inwards). Or should I have used the words "ask/tell" instead of "request/response", as "a program stopped responding" reminds me of bad OSes? What about "reaches a quiescent state" vs. "terminates"?

They don't need to stop

We can create a reactive program that accepts two messages, :start: and :stop:. After it receives :start: it starts streaming some media on some output channel until it receives :stop:.

Turing completeness considered harmful

That's pretty close to my own counter-example - calculating ALL digits of pi. After some thought I think I can counter-counter it - as all streams are discrete, all these stream producers can be reformulated in terms of finite response (a single packet or a burst, depending on the protocol) to an external 'event' - and realistic media servers would do something like that anyway - in absence of any flow control they will use timers.

BTW, when a child, I wondered, why processors need a HALT instruction. Then I learned about interrupts... An OS, or any program that runs on bare metal can be seen as a reactive program driven by hardware interrupts - and I do not see any benefits of being able to write code that never reaches a HALT.

So I guess my conjecture is that I did not find any use cases for non-terminating runs.

The corollary being - why care about Turing completeness?
Or even worse - Turing completeness considered harmful.
Indeed, it only has a disadvantage of halting problem, and no (apparent) benefits.

I hope people are not too scared by recent changes in LtU traffic and will provide their arguments.

There's no gray area

I am a fan of non-Turing-complete things, but I guess the obvious answer (if I understand what you suggest) is that there's no gray area: you either have very restricted systems or you have Turing completeness. There's no easy middle ground (e.g., LC needs just beta-reduction to get you in trouble, the game of Life needs only a couple of seemingly trivial rules to get you there, etc.)

I'm not sure I see (for

I'm not sure I see (for example) the Calculus of Inductive Constructions as "very restricted" aside from the lack of IO? IO restrictions being, of course, a whole new ball game.

A new ball game

I am specifically interested in using sub-Turing PL in a reactive setting. While trying to grasp domain theory I've got a strange suspicion that a lot of complexity arises just to be able to model non-termination. And as I am interested in both concurrency and type systems, I wondered, what are the use cases requiring non-termination that cannot be covered by reactive systems with provably terminating transitions.

While we are at this: what are good resources on sub-Turing theory? I've found nothing except Subrecursive Programming Systems: Complexity & Succinctness, which is not freely available. A specific question: is it possible to write an interpreter for a PL P in P itself, if every program of P terminates?

My gut instinct is yes, so

My gut instinct is yes, so long as you can express folds for a sufficiently general value of fold. It might involve some thoroughly tricky proofs at type-level though.

If it is possible at all

If it is possible at all, then I am pretty certain that it will involve some thoroughly tricky proofs at type level, yeah.

And now I'm wondering if

And now I'm wondering if anyone's done a proof of termination for the CoC or CIC in Coq, or the equivalent for a similar system. It'd be one obvious starting point.

Challenge to Epigram?

I suggest we just challenge Conor McBride to implement Epigram in Epigram :-)

Seriously, I will need to strengthen this result (a function in P interpreting any function in P) to a reactive setting (it is 2AM here, so I am not quite sure as whether this is trivial or not).

Epigram in Epigram?

• Epigram is dependently typed.
• Type checking dependent types is Turing complete.
• Epigram programs are strongly normalizing. All Epigram programs terminate.
• Therefore, you can't write Epigram's type checker in Epigram.
• So you can't write Epigram in Epigram.

Congratulations, you just

Congratulations, you just succumbed to a common myth about dependent types! Epigram can't do anything at type level that it can't do at term level, and typechecking always terminates.

You're going to run afoul of Godel's theorem

Suppose you have a terminating, functional language L with a rich type system. Now, in L you try to write an interpreter for L. At best, one of two things will happen:

1. You can write an interpreter, but you can't write a proof term that shows that the language the interpreter interprets is actually L, or

2. You can show that your data type of interpreted terms matches the language L, but you can't write an interpreter for it.

If you could do both, then you could prove, in L, that all L programs terminate. This is equivalent to proving the consistency of L, in L, which is forbidden by Goedel's second incompleteness theorem.

Thanks, I should've seen

Thanks, I should've seen that one coming. Oh well...

So the real value of CH is

... to transfer Goedel's curse to PLs.

My unbased feeling

I think non-Turing-complete languages can solve (albeit likely somewhat more painfully) most problems programmers deal with. I have not much basis for this. However, it is immediately clear that no Turing-incomplete language could be a "general purpose" language. If nothing else, it would be incapable of being used for implementing interpreters for most other programming languages (compilers it could do). My impression is that most programmers don't worry too much about the Halting Problem, but it is everywhere in the realms of language implementation and design.

Incidentally, the Halting Problem was proved without recourse to CH, so we had the problem anyways. Programming is what it is. But look at the bright side, the Halting Problem and related results are often referred to as "employment theorems", without the Halting Problem you wouldn't be needed.

My CPU is a FSM

However, it is immediately clear that no Turing-incomplete language could be a "general purpose" language. If nothing else, it would be incapable of being used for implementing interpreters for most other programming languages

Hmm. This might need some more clarification. The machine sitting on my desk is a lowly finite state machine (counting the hard disk, about 2^320E9 states), yet still seems to provide a "good enough" substrate to imperfectly simulate our Turing complete languages.

Aha

I fully agree. At the very least, a sub-Turing-complete system (is there an official name for "sufficiently strong but weaker than Turing-complete"?) can interpret single reduction steps in (at least some) Turing-complete systems. Given an external "clock", this is enough to say that sub-Turing-complete systems can interpret Turing-complete systems.
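The single-step idea can be sketched in Haskell (representation and names mine): one beta-reduction step is a structurally recursive, obviously terminating function, and an external driver supplies the "clock" that decides how many steps to run.

```haskell
-- Untyped lambda terms with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift c d t: add d to every index >= c (free variables only).
shift :: Int -> Int -> Term -> Term
shift c d (Var n) | n >= c    = Var (n + d)
                  | otherwise = Var n
shift c d (Lam t)   = Lam (shift (c + 1) d t)
shift c d (App f a) = App (shift c d f) (shift c d a)

-- subst n s t: replace Var n with s in t, adjusting indices.
subst :: Int -> Term -> Term -> Term
subst n s (Var m) | m == n    = s
                  | m > n     = Var (m - 1)
                  | otherwise = Var m
subst n s (Lam t)   = Lam (subst (n + 1) (shift 0 1 s) t)
subst n s (App f a) = App (subst n s f) (subst n s a)

-- One normal-order beta step; Nothing if already in normal form.
-- Total: each call recurses on a strict subterm.
step :: Term -> Maybe Term
step (App (Lam b) a) = Just (subst 0 a b)
step (App f a)       = case step f of
                         Just f' -> Just (App f' a)
                         Nothing -> App f <$> step a
step (Lam b)         = Lam <$> step b
step (Var _)         = Nothing
```

An external event loop iterating `step` can then run arbitrary (possibly diverging) computations, while the language in which `step` is written needs only terminating functions.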

That's exactly the reason I would give semantics to my would-be sub-Turing-complete PL in terms of semantic contexts, not syntactic ones.

BTW, it looks like this subthread grows completely off-topic, but sounds extremely interesting. I hope it is still very relevant to PLs, so I suggest we start a new thread.

"That's exactly the reason I

"That's exactly the reason I would give semantics to my would-be sub-Turing-complete PL in terms of semantic contexts, not syntactic ones."
If the mapping between the syntax space and the semantic space is one to one then the system will always terminate after searching the entire space. The size of a practical semantic space is likely to be quite small. Also a way to check for the existence of a solution before starting would help.

Thanks for reminder

If the mapping between the syntax space and the semantic space is one to one then the system will always terminate after searching the entire space.

By one-to-one, do you mean injective or bijective?
I believe it is quite unusual for a semantic mapping to be either.
Also, as I mentioned, I explicitly require this mapping to be not surjective (for sub-Turing languages), so there are environments, for which programs must be prepared (i.e., the semantics is given in terms of them) but which are not programs themselves (cannot be expressed syntactically). This is my preferred way to resolve the tension between restricted computational power and the rich world.
By the way, I don't think I am the first to propose such an approach; I remember seeing a similar idea in Introduction to computability logic:

Our philosophical point of departure is that the agent [top] should be implementable as a computer program, with a fully determined, predictable and computable behavior, while the behavior of [bottom], who represents the environment - a capricious user or the blind forces of nature - can be arbitrary.

Thanks for reminding me of Computability Logic!

The size of a practical semantic space is likely to be quite small.

Sorry, I do not agree. Also, I do not quite see why it matters. When I talked about quantifying over all contexts, I meant a way to give a denotational semantics for a program, not an operational one (the execution model), so there is no search over the semantic space at runtime.

To me a practical system is a predicate calculus consisting of rules and facts together with its inference engine. In constructing such a system, all the "questions" (i.e., the things to prove) are contained in the system, so to speak. A question in the system will either succeed or fail. This is what I had in mind by "one to one". Exceptions to this seem to be systems with full unification, which I mentioned below. Because of this I avoid full unification. Others also avoid full unification; for instance, the algorithm in Russell and Norvig, page 303, looks one to one. Perhaps this is only for speed? Anyway, my practical goal is systems that either terminate with the answer or fail. Is "one to one" unification the answer?

I doubt this is related to your proposal, but thanks for the explanation. I am working on my definition of "one to one".

If you've got (say) full

If you've got (say) full first-order logic, then your search can also loop forever, so you can prove your goal, refute it, or loop. What do you mean by "one to one"?

Unification

To get to the point: I am trying to understand the source of the loop-forever behavior. Empirically it seems to be due to full unification, since the only time I see it is with full unification and rules whose matches require it. This is only an observation, and I would like to confirm it with theory.

Taking another point of view, an inference system with one-to-one unification is just a depth-first search. All it does is start at the beginning and search through to the end. There is no mechanism for infinite loops (is there?). Hence the one-to-one mapping idea.

If the cause of the loop-forever behavior is unification, what do we gain/lose compared to the one-to-one case? Apparently only the ability to write patterns with two or more instances of a ?var (i.e., two or more ?x's in a single pattern). I am still thinking about this, but there are ways to avoid it.

What is a logic program

Possible answer to my own question: loop-forever behavior seems to always be an unterminated recursion. But this is always an error in the knowledge (or program). Thus we argue that correct programs always terminate. Nontermination is allowable from a logic point of view but can be eliminated by "correct" programs. It is really a conflict of ideas about what a "logic program" really is.

Limit of FSMs does not equal TMs

For what it's worth, Yuri Gurevich once showed that the limit of FSMs, as their limited resources go to infinity, is strictly different from a TM (with an infinite tape) in terms of computing power. More specifically, the limit of FSMs is less powerful, if I recall correctly. I could probably dig up the reference if anyone is interested. (The work was done while he was at the U. of Michigan; he is at MS now.)

More than a feeling

I have the same general "feeling", but I think the win will come from proper composition of general-recursive languages with languages of lesser power. This could allow you to easily prove things about termination, memory, CPU usage, etc., at the lower levels while letting a more powerful language orchestrate the top level.

Look at how SQL queries are used in general-purpose languages -- they are not only guaranteed to terminate, but their time and space are calculable at runtime (based on the current data set, even).

Or maybe not

If you could do both, then you could prove, in L, that all L programs terminate. This is equivalent to proving the consistency of L, in L, which is forbidden by Goedel's second incompleteness theorem.

In the beginning I was convinced by your logic, but now I have started to doubt it. We are not insisting that an interpreter for L in L returns some result for non-terminating programs, are we? It would be ok if it didn't terminate while interpreting non-terminating programs (and it didn't have "to know" or "prove" that such programs do not exist).
Though probably I am just misunderstanding Goedel's theorems.

All L-programs terminate

It would be ok if [an interpreter for L in L] didn't terminate ...

But you started out by stating that all programs in L terminate, so
the interpreter must terminate as well.

Sized types

I am specifically interested in using sub-Turing PL in a reactive setting.

These might interest you:

Proving the Correctness of Reactive Systems Using Sized Types

Recursion and Dynamic Data-structures in Bounded Space: Towards Embedded ML Programming

Can you have nontermination?

Can you have nontermination?

Why would you want to?

The reason I brought this topic up is that I want to avoid non-termination in my hypothetical PL, but, being careful not to throw out the baby with the bathwater, I am looking for use cases that genuinely need non-termination.

Do you have any?

[on edit: I just realized it is not obvious my arguments on this thread started in response to Frank's comment:

Marc made a good observation, though; he said that I had restricted my analysis only to terminating programs, that is, ones which don't loop forever. My proof, then, does not cover the case of reactive programs and servers and that sort of thing.

]

It was an informative

It was an informative question, not a challenge...

I think it is hard to do interesting things when the *risk* of nontermination is abolished by the model. I may be wrong, and my reasons are trivial and not very deep, but that's my gut feeling. Nothing more.

I see

I can understand this intuition of yours.
But think of the interesting things that are enabled by abolishing this risk - for example, Frank's recent conjecture (has it got a name yet?).

One to one unification and termination?

This is a very interesting discussion. I think I know enough about this to say that the risk of non-termination in logic programs, and their unpredictable behavior in some cases, is what prevents wider use of these techniques. Practical systems must produce results within an expected amount of time. And people like to think they know what their systems are doing.

A related question: even in AI there seems to be a tendency to use "one to one" unification instead of full unification (i.e., unification with duplicate ?vars). This means you can't represent certain types of problems. An example would be the inequality system on page 229 of Nils Nilsson's Principles of AI; it does inequalities using rules rather than arithmetic. Can anyone comment on this? A simple example: (?x ?x) is a test for equality; it will unify with (a a) but not (a b). But in a practical system you can use an equality test (EQ ?x ?y).
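For what it's worth, first-order unification by itself always terminates (given an occurs check); the loop-forever behavior in logic programs comes from the resolution search over recursive rules, not from unification proper. The difference between full matching and "one to one" (linear) patterns is easy to see in a toy matcher for ground terms (a minimal sketch; all names hypothetical):

```python
def unify(pat, term, subst=None):
    """Match a pattern against a ground term. A repeated variable such as
    (?x ?x) imposes an equality constraint; a linear pattern does not."""
    subst = dict(subst or {})
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in subst:                      # repeated variable: must agree
            return subst if subst[pat] == term else None
        subst[pat] = term
        return subst
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

print(unify(("?x", "?x"), ("a", "a")))   # {'?x': 'a'}
print(unify(("?x", "?x"), ("a", "b")))   # None: ?x cannot be both a and b
print(unify(("?x", "?y"), ("a", "b")))   # linear pattern: {'?x': 'a', '?y': 'b'}
```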

Edit: On a broader note, if the experts could clear up the uncertainty surrounding these issues, it would clear the way for a lot of useful practical applications.

Not in the CIC, but in the

Not in the CIC, but in the absence of IO I can't see how non-termination can be a good thing.

Turing completeness is overrated

While I agree that Turing completeness isn't necessary, a program that loops forever producing some effect E has a different (operational) semantics from a program that requires some effect T to produce an effect E.

Also, if you require an external event source, you're just introducing another level of indirection: you still have to either model it as an unfold and let it be (possibly) non-terminating, or pre-define the number of events produced and model it as a fold. Of course you could have an arbitrarily large number of events, but it will eventually finish.
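The fold/unfold distinction can be sketched as follows (a hedged illustration; the `clock` source is hypothetical): with a fold the event count is fixed up front and processing trivially terminates, while an unfold is an open-ended producer whose termination must be imposed from outside by bounding consumption.

```python
from functools import reduce
from itertools import count, islice

# Fold: the number of events is pre-defined, so processing terminates.
events = ["tick", "tick", "tock"]
log = reduce(lambda acc, e: acc + [e.upper()], events, [])

# Unfold: an open-ended external event source; the producer never finishes,
# and termination is only recovered by bounding consumption from outside.
def clock():
    for n in count():
        yield f"tick {n}"

first_three = list(islice(clock(), 3))
print(log, first_three)
```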

Clocks

Also, if you require an external event source, you're just introducing another level of indirection: you still have to either model it as an unfold

I had this argument with myself once (while designing a would-be framework for BPM), and came to the conclusion that my preferred way out of this situation is to postulate some processes (sources of events) that cannot be defined in the PL under consideration. That would mean that the semantic mapping is not surjective.

A real-life example of processes inexpressible in PL would be clocks.

Then, when giving a may/must testing semantics for the PL, instead of quantifying over all contexts defined as PL "terms with a hole" (syntactic contexts), one would quantify over all semantic contexts (and because of the non-surjective semantic mapping this gives more contexts). I have very limited experience with testing semantics, so take this paragraph with a huge grain of salt.

you can write simple

you can write simple type-safe interpreters (invariant: evaluation cannot go wrong)

The phrase "cannot go wrong" is famous but fuzzy. If I understand correctly, it would only guarantee that a Term a evaluates to a value of type a, no? Would it be possible to encode a stronger statement, i.e., a soundness theorem such as subject reduction, into the type of an "eval" function? Has this been done before, or is it not possible?

Edit: Thinking a bit more about it, I realize that this would already be a kind of subject reduction theorem, but it is encoded in a strange (or at least unusual for me) way.

Would it also be possible to encode such a statement if the types of your interpreted language are not in a 1:1 correspondence with the types of your base language, i.e., you represent types in the interpreted language with a data structure rather than with a base-language type?
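In the "types as data" variant the question describes, the "cannot go wrong" guarantee is no longer enforced by the host type checker; it becomes an external invariant of a typecheck/eval pair, i.e., a subject-reduction-style theorem one would prove (or test) about the interpreter. A minimal sketch, with object-language types as plain data and all names hypothetical:

```python
# Object-language types are data ("int", "bool"), not host types.
def typecheck(term):
    tag = term[0]
    if tag == "lit":
        return "int"
    if tag in ("true", "false"):
        return "bool"
    if tag == "add":
        if typecheck(term[1]) == "int" and typecheck(term[2]) == "int":
            return "int"
    if tag == "if":
        t_then, t_else = typecheck(term[2]), typecheck(term[3])
        if typecheck(term[1]) == "bool" and t_then == t_else:
            return t_then
    return None  # ill-typed

def eval_(term):
    tag = term[0]
    if tag == "lit":
        return term[1]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "add":
        return eval_(term[1]) + eval_(term[2])
    if tag == "if":
        return eval_(term[2]) if eval_(term[1]) else eval_(term[3])

# Intended invariant (checked externally, not by the host type system):
# if typecheck(t) == T, then eval_(t) yields a value of object type T.
good = ("if", ("true",), ("add", ("lit", 1), ("lit", 2)), ("lit", 0))
assert typecheck(good) == "int" and eval_(good) == 3
assert typecheck(("add", ("lit", 1), ("true",))) is None  # rejected before eval
```

In Haskell, a GADT-indexed Term a collapses this external invariant into the host type system; here it stays a proposition about the interpreter.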

Are rules expressive?

I wonder what you think about rule languages without types. I think of the rules as defining the types implicitly, along with the program. Thanks.
Edit: Meant to respond to Chris.

Different kinds of expressiveness

Rules would be expressive in terms of their versatility and application. The types within the rules are implicit, but they still exist and are part of the rules and their manipulation. However, rules would be less expressive in terms of explicitly specifying the underlying types being manipulated. Now, how does one go about expressing the types within that paradigm? Of course, it would be nice if we could gain the expressiveness provided by types without sacrificing the malleability of the open-ended usage of rules. But we generally have to decide which form of expressiveness is most important - or what tradeoffs we are willing to make.

Or put another way, restrictions may reduce expressiveness along one dimension while enhancing it along another. Haskell's bottling of state within monads is an example I would point to.

Martin Hofmann

Martin Hofmann wrote several papers about resource-bounded functions.

Andris:

Martin Hofmann wrote several papers about resource-bounded functions.