Role of Types in Evaluation Strategy

It seems to me that most evaluation strategies are type-independent. In other words, the evaluation rules are independent of the types yielded by sub-expressions. Is this always the case?

What if we were to write an operational semantics that depended on the type of sub-expressions? Is that a strange thing to do?

For example:

a:fun b:val ::== a(b)
a:val b:fun ::== b(a)

Any help would be much appreciated!

Linear logic and overloading

There are type systems that do have an impact on evaluation order, namely substructural ones such as linear types. The differences between call-by-name, call-by-need, and call-by-value are significant in a linearly typed language.

On the flip side, I mentioned Type Directed Concurrency a while back. It discusses three languages: Full CLL, fCLL, and lCLL; fCLL is a subset of lCLL, which is a subset of Full CLL. However, when going from fCLL to lCLL, no terms or types are added. Instead, the semantics are extended in a way that treats differently typed terms differently. I didn't grasp the novelty of what is done in that paper the first time I read it. (Note that it, too, is discussing a linearly typed language.)

In a much more common case though, this is exactly what happens with (ad-hoc) overloading, e.g. overloading in C++ or Haskell type classes.
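A Haskell type class makes this kind of type-directed dispatch visible: the implementation that runs is chosen by the argument's type. A minimal sketch (the class and instances here are invented for illustration):

```haskell
-- Ad-hoc overloading: `describe` has one name, but the code that
-- runs is selected by the type of its argument.
class Describe a where
  describe :: a -> String

instance Describe Int where
  describe n = "int: " ++ show n

instance Describe Bool where
  describe b = "bool: " ++ show b
```

Here `describe (3 :: Int)` and `describe True` run different code, so the meaning of an expression depends on types, even though the evaluation *order* does not.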

Thanks Derek

Thanks for the information. I imagine that type-dependence of the evaluation strategy must make certain kinds of analysis much more challenging.

My motivation is to make it easier to define DSLs in a general-purpose language. This was inspired by the semantics of Scala, which allow any binary function to be used as an infix operator. However, as far as I know, Scala doesn't allow prefix or postfix operator definitions.

Small correction...

Actually, Scala somewhat allows prefix and postfix operators. Scala defines a fixed set of prefix operators which may be defined/overridden: +, -, !, and ~. Each of these may be defined by a method like unary_+, so the expression (!e) is translated to e.unary_!.

For postfix operators, any expression (e id) is translated to (e).id, and of course id can be an operator just as well as any other identifier. The spec says: "Postfix operators always have lower precedence than infix operators. E.g. e1 op1 e2 op2 is always equivalent to (e1 op1 e2) op2."

All of this is in section 6.12 of the language spec. Sorry to be such a language lawyer, but you do seem interested... ;)

Law abiding citizens

Sorry to be such a language lawyer

You say that as if it is a bad thing! ;-)

Thank you very much for setting me straight here. I was hoping that someone would jump in if I was making a mistake.

Co-inductive types

In a type theory with codata you may very well want different evaluation rules in order to ensure termination. In the Calculus of (Co-)Inductive Constructions, for instance, codata is produced on demand, since producing it eagerly would not terminate. A simple example in Coq can be made using the co-natural numbers and a notion of infinity.

CoInductive CoNat : Type := 
| Z : CoNat
| S : CoNat -> CoNat.

CoFixpoint inf : CoNat := S(inf).

Here we have a type defined analogously to the natural numbers, except that it allows infinite objects. "inf" is here defined as an infinite chain of S's.

If we do

Eval compute in inf. 

We just get the response:

     = cofix inf  : CoNat := S inf
     : CoNat

The evaluation strategy is such that we only look at coinductive types as far as they are demanded by terminating fixpoints. You can write a function that looks at the first n values of inf for instance and do things with them, but the system will not evaluate all of it as this would yield non-termination.
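Haskell's pervasive laziness gives the same effect without a special cofixpoint form. A rough analogue of the Coq example above (the function name and its bound are my own invention):

```haskell
data CoNat = Z | S CoNat

-- An infinite chain of S's; harmless under lazy evaluation.
inf :: CoNat
inf = S inf

-- Demands at most n constructors of its CoNat argument,
-- so it terminates even when applied to `inf`.
atLeast :: Int -> CoNat -> Bool
atLeast n _ | n <= 0 = True
atLeast _ Z          = False
atLeast n (S m)      = atLeast (n - 1) m
```

`atLeast 5 inf` unfolds only five S's before answering; Coq's evaluation strategy guarantees this "only as far as demanded" behaviour for cofixpoints.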


That is kind of cool, but completely over my head. I know what fixpoints are, and I understand some of what is going on here (describing natural numbers using induction), but I know nothing of codata and conatural numbers.

I was about to go cry in a corner and eat some ice cream, feeling overwhelmed by my naivete, but then I googled "conatural numbers" and "co-natural numbers" and found 14 hits and 12 hits respectively. It makes me feel a bit better to know that I am at least on par with Google. So that gives me enough confidence to ask the question: just what the heck are you talking about? :-)

Despite my lame attempt at humour, I realize that if I had spent time doing graduate studies in type theory or logic systems, this kind of thing would probably be obvious. I appreciate that you bothered to take the time to respond, and I can at least tell that something very interesting is going on here.

What specifically is preventing the Coq system from performing eager evaluation? Sorry that I have no knowledge of Coq, except that it is a theorem prover. Are CoInductive and CoFixpoint both Coq constructs that perform on-demand construction? Is this a form of lazy evaluation being performed at the type level?

Thanks for your patience.

codata & friends

sigfpe has a great article about codata. If you read it carefully it has links to many of the related subjects, but one in particular that's worth noticing is A Tutorial on (Co)Algebras and (Co)Induction.

Re the original question, you might find some of the work on type-directed partial evaluation interesting, such as Danvy's paper.

Thanks Anton, that is an

Thanks Anton, that is an awesome link [re: Sigfpe's article]. This is really interesting to me because I was dabbling with the problems of discriminating between finite and infinite lists in the Cat language without knowing what I was doing. This is a huge help.

[edit: the danvy paper is also very helpful to me, thanks!]

Despite my lame attempt at

Despite my lame attempt at humour, I realize that if I had spent time doing graduate studies in type theory or logic systems, this kind of thing would probably be obvious.

Maybe, but it's hardly necessary.

Sorry for the Jargon

This isn't actually over your head; it's just laden with terminology. An introduction to some of these ideas is given in Total Functional Programming, which was discussed earlier.

If you have functions which must terminate, then you have to be careful about how data is constructed. In the case of a definition (in pseudo-Haskell) like:

inf = S (inf)     -- The S just means (1+)

We can't use this as data to a program that is supposed to terminate by decreasing its argument, since the argument never gets any smaller. For example, suppose we have a terminating function, times_two, defined thus:

times_two x = case x of
                Z    -> Z
                S x' -> S (S (times_two x'))

Here we pattern match against the argument, stripping one off, and then adding 2 to the recursive function called on the decreased argument.

If we distinguish between data and codata, we can be sure that times_two terminates when passed data, since data can only be finitely constructed. (times_two inf) would not terminate, as it would always have another S to strip off. We therefore restrict times_two to *only* be called with data. We can't be allowed to pass it codata.

The evaluation strategy for data can be eager, but with codata we are lazy. We don't compute it at all except by destructuring/case-matching. For instance, we can have two types of Nats, one coinductive (potentially infinite) and one inductive (finitely constructed).

Inductive Nat : Type := 
| Z : Nat
| S : Nat -> Nat. 

CoInductive CoNat : Type := 
| CZ : CoNat
| CS : CoNat -> CoNat. 

And the following program:

CoFixpoint inf := CS inf.

Fixpoint less_than (y:CoNat) (x:Nat) : bool :=
  match x with
  | Z => false
  | S x' => match y with
            | CZ => true
            | CS y' => less_than y' x'
            end
  end.
Here less_than is always guaranteed to terminate, since it decreases its finitely constructed argument x. It also inspects codata by destructuring.

The following example would not terminate in an eager language without a distinction between data and codata:

Eval compute in (less_than inf (S (S Z))).

This works perfectly fine in Coq, however, since we don't bother evaluating inf until it is destructured, and then only as far as we need to. This "as far as we need to" leads to a restriction on the way codata is constructed. In the case of data we have to make something smaller as the argument to each recursive call. In the case of codata we have to make something larger *around* each recursive call.

In the example with inf, we are always adding a constructor CS. This means that we'll always have something to finitely destruct without having to make an infinite number of recursive calls before getting it. This is a "productivity" condition, which is enforced by "guardedness": recursive calls are rejected unless they occur inside constructors of the coinductive return type.
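In Haskell terms, guardedness is the difference between a definition that is productive and one that is not. Haskell will happily accept both of the following; Coq's CoFixpoint would accept only the first (names are mine):

```haskell
data CoNat = Z | S CoNat

-- Guarded: every unfolding immediately yields a constructor,
-- so any finite prefix of the value can be demanded.
good :: CoNat
good = S good

-- Unguarded: demanding even the outermost constructor loops forever.
-- Coq would reject this as a CoFixpoint.
bad :: CoNat
bad = bad
```

Pattern-matching one layer off `good` terminates; doing the same to `bad` diverges, which is exactly the behaviour the guardedness check rules out statically.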

Is this clearer?


It occurred to me after writing this that times_two would be valid either as a fixpoint or as a cofixpoint. If it is passed codata, then it returns codata since it is productive. If it is passed data it returns data. Maybe there is some way to allow a kind of polymorphism of data/codata that is still safe.

There is no difference in the definition of the naturals and the conaturals other than the flag that they are data or codata. The same is true with lists and potentially terminating streams. It is easy to write an addition function that is both structurally recursive and productive. Wouldn't it be nice to be able to re-use code for either purpose in a safe way?
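Such an addition function can be written once in Haskell, where the two readings coincide; a sketch (with Nat defined inline so the snippet stands alone):

```haskell
data Nat = Z | S Nat deriving (Eq, Show)

-- Structurally recursive on the first argument (valid as a Fixpoint),
-- and every recursive call sits under an S (valid as a CoFixpoint).
add :: Nat -> Nat -> Nat
add Z     y = y
add (S x) y = S (add x y)
```

The same definition would typecheck against either the inductive or the coinductive reading, which is precisely the reuse being wished for.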

Wouldn't it be nice

I'm only just learning this, but it sure seems to me like it would indeed be nice.


Thanks a lot for your explanation, it is very helpful.

A logical account of evaluation order

This isn't what you're talking about in the original question (making the operational semantics use the types of terms as tags for determining what to do in evaluation; why not just call a fig a fig and make the tags explicit if that's what you want?), but recently a bunch of people have explored how a logical idea called "focusing" can be used to give a logical explanation of operational phenomena. E.g., normally we use the same type system to describe eager pairs (as in ML, whose components are values) and lazy pairs (as in Haskell, whose components are unevaluated expressions). But by refining the logic underlying the type system, you can separate these two ideas as two different types in the same language, so both can coexist. Noam Zeilberger's journal paper is a good place to start.
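Haskell itself can already hint at the two kinds of pairs coexisting: its built-in pairs are lazy, while strict constructor fields give an eager variant. A rough sketch of the contrast (not the focusing-based account, and the names are made up):

```haskell
-- Lazy pair: a component may be left unevaluated forever.
lazyPair :: (Int, Int)
lazyPair = (1, error "never demanded")

-- Eager pair: the ! annotations force both components to values
-- whenever the constructor itself is evaluated.
data EagerPair = EagerPair !Int !Int

firstOf :: EagerPair -> Int
firstOf (EagerPair a _) = a
```

`fst lazyPair` is 1 even though the second component would error, whereas forcing `EagerPair 1 (error "boom")` errors immediately; the focusing story explains why these deserve different types.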


why not just call a fig a fig and make the tags explicit if that's what you want?

It's because I am very green in the field of semantics that I am not clear how to make the tags explicit. I appreciate the link though!

It's because I am very green

It's because I am very green in the field of semantics that I am not clear how to make the tags explicit.

I think he means to just define an algebraic datatype and pattern match on it, i.e. type Fruit = Fig | Orange | ...
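For the fun/val rules from the original question, that would look something like this hypothetical sketch, where the "types" become constructors that the evaluator pattern-matches on (Term, Fun, Val, and apply are all names I've made up):

```haskell
-- The poster's two rules with explicit tags:
--   a:fun b:val  =>  a(b)
--   a:val b:fun  =>  b(a)
data Term = Fun (Int -> Int) | Val Int

apply :: Term -> Term -> Int
apply (Fun f) (Val v) = f v
apply (Val v) (Fun f) = f v
apply _       _       = error "no evaluation rule applies"
```

The dispatch is now on ordinary data rather than on the type system, so the operational semantics stays type-independent.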