Lambda the Ultimate

Every Language War Ever
started 2/28/2004; 12:33:08 PM - last post 3/9/2004; 10:48:18 PM
Dan Shappir - Every Language War Ever
2/28/2004; 12:33:08 PM (reads: 10863, responses: 20)
Every Language War Ever
Idiot 1: Your Favorite Language is bad at doing arbitrarily chosen task X, which it was never designed to do. This just happens to be a field in which My Favorite Language excels.

Idiot 2: Ah yes, but YFL is bad at doing Y, which MFL is great at.

Are we all idiots? ;-)
Posted to fun by Dan Shappir on 2/28/04; 12:33:46 PM

Ehud Lamm - Re: Every Language War Ever
2/28/2004; 3:10:04 PM (reads: 911, responses: 0)
It's interesting to try and reflect on the moment you finally understood language wars are useless.

I see it happening to some of my students. Up to a certain point they try to convince their friends (who haven't taken a PL course) about side effects, higher-order functions, or whatever. And then they realize that the best advice is simply to start by learning the essential concepts.

Sergey Goldgaber - Re: Every Language War Ever
2/29/2004; 4:56:18 AM (reads: 861, responses: 1)
When everyone starts using MFL these wars will end. :)

Ehud Lamm - Re: Every Language War Ever
2/29/2004; 5:11:27 AM (reads: 855, responses: 0)
We once had a nice discussion of language myths. I guess the hardest myth to eradicate is the single language myth... Even when everyone starts using YFL, they will not be using it exclusively...

What you really want is for everyone to start using your favorite languages, and to apply each language in exactly the same circumstances as you do. Now isn't that asking for too much?!

Frank Atanassow - Re: Every Language War Ever
2/29/2004; 9:41:12 AM (reads: 802, responses: 1)
It seems to me that all language wars typically spiral until they reach "all programming languages are Turing-complete and therefore equal". Invoking this argument is a bit like Godwin's Law---it ends the debate immediately. Because most people seem to sense that there are genuine differences between languages, yet cannot explain why they feel this way, a language war is kind of like a nuclear standoff---no one wants to drop the bomb, but they know that, unless they agree to disagree, ultimately they will have to. To me, this is the epitome of a pointless debate.

But just because most people cannot make solid arguments about the superiority of one programming language (or feature) over another doesn't mean that such arguments don't exist. There are genuine semantic differences between languages, and they have to do with static and dynamic expressivity, and issues like soundness. I suppose it is rare for one language to be superior to another in every way, but we can certainly make unambiguous and objectively verifiable statements about certain features.

Untyped languages (say, Scheme) have zero static expressivity, and a polymorphically typed language (say, ML) has more static expressivity than a monomorphically typed (say, C) one. All other things being equal, a language with higher-order functions (say, Scheme) is more dynamically expressive than one without (say, C); similarly for first-class continuations. Dynamic expressivity can be formalized syntactically as macro-definability, and semantically via full and faithful embeddings. If people used concepts like these to formulate their arguments, language debates would become meaningful.
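
For instance, a trivial Haskell sketch of the higher-order point:

-- 'twice' abstracts over an arbitrary function and applies it two
-- times; its polymorphic type is checked statically. C can only
-- approximate this with function pointers at fixed monomorphic types.
twice :: (a -> a) -> a -> a
twice f = f . f

main :: IO ()
main = do
  print (twice (+1) (0 :: Int))  -- prints 2
  print (twice reverse "ab")     -- prints "ab"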

Syntax is also a formalizable difference, but since most languages fall into the same computational class (deterministic context-free languages), there is not much point in arguing about it in practice. Sadly, few people have been intellectually equipped to recognize anything about a programming language besides its syntax, and since almost all the popular languages differ only in their syntax, 90% of arguments tend to focus on that issue and other non-technical issues like popularity, support and appeal. And the other 10% are usually based on anecdotal evidence.

Oh well. At least it means I have job security.

Ehud Lamm - Re: Every Language War Ever
2/29/2004; 12:52:23 PM (reads: 775, responses: 0)
Frank, it seems you hang out with a very intelligent crowd. In my experience most arguments about programming languages tend to boil down to "everyone is using Java/VS, thus this is the best language/tool" (regardless of what the subject of the discussion is, e.g., modularity or type systems).

If people used concepts like these to formulate their arguments, language debates would become meaningful.

Ah. But when that happens they cease to be language wars and become informed debate. My point was that when this happens almost all arguments vanish. In the classic endless debates we were discussing, at least one of the participants doesn't know the concepts required for meaningful discussion of programming languages. When he finally learns them, if ever, he is doomed to repeat the useless debate with yet another uneducated programmer. That's the circle of life...

And oh, you might be happy to know that we are not alone.

DocOlczyk - Re: Every Language War Ever
2/29/2004; 5:33:13 PM (reads: 722, responses: 1)
It seems to me that there are more important things in a language than language features. These are the things that are part of the environment for the language.

As the most prominent such example I use Smalltalk. In each Smalltalk environment, the editors don't work very well. The syntax highlighting is crap. Autoindentation is horrid if present. Yada Yada Yada. It might be possible to write an emacs mode for Smalltalk, but Smalltalk is not very good at integrating with external editors.

For me the help that a good editor provides is important, but it is not for others. However, they too find the environment, which is strongly specified, too restrictive. If Smalltalk would loosen up and let people develop the way that they want, instead of forcing people to program the way that Smalltalkers want, it would have a bigger following.

Some things that reflect on the environment are: automated builds (i.e., make or some substitute), because people like to go grab a cup of coffee while recompilation is going on; a simple, transparent way of installing third-party modules, especially into a separate directory; and a way of keeping your libraries in directories separate from your project. You should be able to install a project anywhere you like on your system and still have it compile.

Mark Evans - Re: Every Language War Ever
2/29/2004; 6:28:18 PM (reads: 708, responses: 0)

Right you are about syntax, Frank. The subject is wearisome. About type systems, have you read Oleg's little essay?

This paper was originally submitted as a small observation to the eternal discussion on relative merits of static and dynamic type systems. The paper aimed to show what an ultimate type system looks like and how it converges to a pure dynamic type system.

What are your remarks about this presentation?

andrew cooke - Re: Every Language War Ever
3/1/2004; 4:49:28 AM (reads: 638, responses: 0)
more important things in a language than language features

and community support (e.g., will the moderator of ocaml-beginners please get off their fat backside and approve me...)

Frank Atanassow - Re: Every Language War Ever
3/2/2004; 10:29:59 AM (reads: 464, responses: 1)
Mark: About type systems, have you read Oleg's little essay?

Yes, and I prepared a long post about it, but lemme just summarize instead.

First, I understand that this sort of type-level computation seems miraculous to most programmers, but I assure you it is bread-and-butter in the FP sector of the PLT community, where we have long known about systems like F-omega and Martin-Löf Type Theory.

Second, again, it may seem remarkable that you can catch a division-by-zero error statically, but frankly I don't think it's such a big deal. In a system like Oleg's, you can only catch such errors when they involve completely `static values'; it would be impossible, for example, to write a total function which divides two numbers supplied from standard input. (Well, there are ways, but...)

Furthermore, if you try to actually use such a division type function in practice, you will see that it quickly leads to very, very complex types. For example, the fact that the denominator must be non-zero in the function roots(a,b,c) which computes the roots of a quadratic equation must be propagated backward as a constraint on the argument a. In this way the types can get arbitrarily complex.
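
A toy value-level sketch in Haskell of how the obligation propagates (the type-level version behaves analogously, with the complexity landing in the types):

-- A witness that a Double is non-zero; keep the constructor abstract
-- so that the run-time check is the only way to obtain one.
newtype NonZero = NonZero Double

nonZero :: Double -> Maybe NonZero
nonZero 0 = Nothing
nonZero x = Just (NonZero x)

-- Division drops its own check by demanding the witness...
safeDiv :: Double -> NonZero -> Double
safeDiv x (NonZero y) = x / y

-- ...but every caller inherits the obligation: roots must demand a
-- witness for its leading coefficient, and its callers must in turn
-- produce one, and so on backward through the program.
roots :: NonZero -> Double -> Double -> (Double, Double)
roots (NonZero a) b c =
  let d     = sqrt (b*b - 4*a*c)    -- ignoring complex roots
      two_a = NonZero (2*a)         -- safe: a /= 0 implies 2*a /= 0
  in (safeDiv (negate b + d) two_a, safeDiv (negate b - d) two_a)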

I'm not saying that this is all necessarily a bad thing; I'm just observing that most programmers who read the article will probably not anticipate such complexity. In Oleg's system information can only flow from the static to dynamic phases; most people (mind you, not me, necessarily) want it to flow the other direction as well. For that, you need dependent types, which C++ templates cannot emulate.

Third, Oleg's vision of the `ultimate' type system is not mine. As I have said before, I think equality is the least interesting and useful notion of identity on types: isomorphism is more important, because it identifies types which `look different' but `behave the same'. From that point of view, all of Oleg's types are trivial: every one-element set is isomorphic to any other.

In the type equality paradigm, one `imposes' what I regard as artificial differences on values; thus, you say by means of type annotation that here I want to regard this value as having one type, while there I want to regard it as having another. So there is something imperative about type equality.

In the type iso paradigm, you have rather two types, and a function (a value) between them, and because that function is an iso you regard it as a witness that the two types are essentially interchangeable. Thus you discover identities rather than impose differences. So there is something declarative, or at least structural, about type isomorphism, and in the long run I think it will turn out to be preferable for the same reasons that declarative languages are preferable to imperative ones, and structural typing is preferable to nominal typing: you get compositionality, etc.
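
A minimal Haskell sketch of the iso paradigm:

-- An isomorphism packaged as a pair of mutually inverse functions.
data Iso a b = Iso { to :: a -> b, from :: b -> a }

-- A witness that (a, b) and (b, a) `look different' but `behave the
-- same': composing 'to' with 'from' in either order is the identity.
swapIso :: Iso (a, b) (b, a)
swapIso = Iso (\(x, y) -> (y, x)) (\(y, x) -> (x, y))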

Fourth, Oleg writes:

One of the paradoxical issues in the recurring debate about types is Robert Harper's thesis that dynamic typing is a particular case of a static typing.

Well, it's not a thesis, it's a fact, and I doubt it is due to Harper; I think it goes back to Dana Scott.

BTW, Oleg, there is a small bug in the type addition section: it will fail on a + 0. You either need to add a case a + 0 = a, or change the successor case:

(a+1) + (b+1) = a + (b + 1 + 1)

to:

(a + 1) + b = a + (b + 1)
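
For comparison, a sketch of the same unary addition in Haskell, as a multi-parameter type class with a functional dependency (assuming the usual GHC extensions); the second instance is the corrected successor case:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

data Zero   = Zero
data Succ n = Succ n

-- Type-level addition as a relation: Add a b sum holds iff a + b = sum.
class Add a b sum | a b -> sum
instance Add Zero b b                          -- 0 + b = b
instance Add a b s => Add (Succ a) b (Succ s)  -- (a+1) + b = (a+b) + 1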

andrew cooke - Re: Every Language War Ever
3/2/2004; 10:50:38 AM (reads: 462, responses: 0)
Vaguely related to all this, how do I define a type in Haskell or ML that reflects (say) the size of an array? I was trying to work out how to do this last night and couldn't see a simple solution - am I missing something obvious? I can't see how to get a numerical value "into" the type system (I really don't want to start defining numbers in terms of zero and successor :o).

Of course, I can explicitly write a type that is restricted to a particular size, but I was hoping for a more general solution (I naively expected I could have something that was "poly-size-ic"). At the moment I have code with arrays of arrays of fixed types whose "constructor" function takes a numeric value that gives the (fixed) size. There's nothing in the type system that stops me mixing up arrays of one size with arrays of another.

Mark Evans - Re: Every Language War Ever
3/2/2004; 1:50:27 PM (reads: 429, responses: 1)

Frank: For that, you need dependent types, which C++ templates cannot emulate.

Oleg: The present paper may therefore serve as a curious example, of an ability of a C++ compiler to ... support powerful dependent type systems.

Who is right here?

Frank: ...type-level computation seems miraculous to most programmers... if you try to actually use such a division type function in practice...Oleg's vision of the 'ultimate' type system is not mine

Oleg's paper isn't meant to induce a wow-response among non-cognoscenti, or propose a serious type system, but to explore the antipodes and return with a proposition: "The optimal approach seems to be to do some computation at compile time ... and leave more tenuous checks and the rest of the computation to run-time. As we know, the middle ground is the most controversial position." Oleg's use of the term 'ultimate' is not a quality metric, but a measure of technical extremity. For quality he uses the word 'optimal.'

I don't agree or disagree with anyone here, I am just trying to clarify the position statements and technical facts, whatever they are. Thanks for the commentary, Frank; and for the paper, Oleg.

Dave Herman - Re: Every Language War Ever
3/3/2004; 6:46:57 AM (reads: 376, responses: 0)
Who is right here?

I believe it's a minor terminological issue. Change Frank's statement to something like "...you need [more general] dependent types..." to resolve the apparent contradiction.

On the one hand, a system that allows any kind of mixing of value-like terms and type-like terms can be seen as a restricted form of dependent types (e.g., Dependent ML). On the other hand, a more general dependent type system allows all sorts of unrestricted mixing of value and type terms, in ways you can't do with C++ templates. Loosening the restrictions may (not must!) ruin the possibility of maintaining a phase distinction.

Frank Atanassow - Re: Every Language War Ever
3/3/2004; 7:56:22 AM (reads: 366, responses: 3)
andrew: Vaguely related to all this, how do I define a type in Haskell or ML that reflects (say) the size of an array?

I am not exactly sure what you want here, but it sounds as if you think this problem has an everyday solution, and it doesn't. In an ML-like type system, an n-fold product is essentially a type of arrays of fixed size. Here `essentially' means `is isomorphic to'. Presumably you don't want to mention the type of elements n times:

(Elem, Elem, Elem, ...)

but rather factor it out:

Array Elem n

where n is some type-level encoding of the number n. The easiest way to code the number is in base one, using zeros and succs; I think this will work:

data Array elem n = Array (n elem)  -- an array is an n-fold product of elems
data Z elem = Z                     -- zero: the empty product
data S n elem = S elem (n elem)     -- successor: one more elem in front

-- below, `=' means `iso'
-- Array Int 2
-- = Array Int (S (S Z))
-- = (S (S Z)) Int
-- = (Int, (S Z) Int)
-- = (Int, (Int, Z Int))
-- = (Int, (Int, ()))
-- = (Int, Int)

(I use this trick to handle repetitions in XML Schema in my type isos paper.) But you said you don't want to mess around with unary. Matthias Blume does something similar with base ten in this paper:

No-Longer-Foreign: Teaching an ML compiler to speak C "natively"
Matthias Blume
http://people.cs.uchicago.edu/~blume/papers/nlffi.pdf

See the section on array dimensions.

Probably you want to write functions which are polymorphic in the array dimensions. That will require some generic programming. You can do it directly, by using type classes to do induction over types (easiest with base one; a sketch follows the reference below), or via Generic Haskell, or via Ralf Hinze's lightweight `embedding' of Generic Haskell in Haskell:

Generics for the masses
Ralf Hinze
unpublished draft Feb 2004 (email him, or me—I have a copy)
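
A quick sketch of the direct route, against the Array/Z/S representation above:

-- Induction over the unary sizes via a type class: 'size' computes
-- the length of an n-fold product without touching any elements.
class Size n where
  size :: n elem -> Int

instance Size Z where
  size Z = 0

instance Size n => Size (S n) where
  size (S _ rest) = 1 + size rest

For example, size (S 'a' (S 'b' Z)) evaluates to 2 without inspecting the elements.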

Also, you mention "arrays of arrays of fixed types". There is a paper about square matrices expressed as nested datatypes which might be useful to you.

From Fast Exponentiation to Square Matrices: An Adventure in Types
Chris Okasaki
In Proc. ICFP '99.
http://www.eecs.usma.edu/Personnel/okasaki/icfp99.ps

I think this was also a JFP Functional Pearl.

Mark: Who is right here?

I understand `dependent type system' to mean a type system in which types may depend on values; other people may construe it differently.

Oleg's use of the term 'ultimate' is not a quality metric, but a measure of technical extremity.

That's not how I read it, but you may be right. If so, then I would have to say I just don't find type-level computation very interesting unless there is more communication between the type and value levels, as there is in a dependent type system (where information flows from the dynamic to static phases) or in a type iso system (where the values denote reductions on types). Without that sort of communication, a type-level computation is just syntactic sugar for a trivial sort of metaprogramming, where one program outputs another, consisting of both values and types, but only does computations on the types.

I guess my main complaint is with quotes like this one:

Thus the extreme static typing looks identically to dynamic typing, only with compile- and run-time phases reversed.

because it seems to claim an equivalence (or duality, I guess) between `extreme static typing' and dynamic typing, but I don't think there is one, unless you define `extreme static typing' specifically so that the claim holds.

For example, I don't think the language of C++ types (regarded as values, with templates regarded as functions) is an untyped language. That would mean that the compiler should allow, say, Div to be applied to any number of type arguments. Is that the case? (I don't know C++ templates well enough to answer.)

Even if it does, the point of the paper seems not to be C++ templates themselves, but rather the notion of static type computation. Now, if you accept that static typing is good, then you would naturally want to prevent situations like applying Div to more or less than two arguments, that is, you would assign a type to each type (a kind). This is what Haskell does; if I write:

type Div m n = ...

then Div :: * -> * -> *. So this is not dynamic typing in any sense, because there is a phase before the static computation phase, which ensures the types themselves are well-formed; so, there cannot even be any (kind) errors in the static phase.
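
A minimal sketch of the kinding point, with a placeholder Div:

data Div m n = Div   -- placeholder; Div :: * -> * -> *

type Ok = Div Int Int      -- well-kinded
-- type Bad = Div Int Maybe
-- Rejected before any type-level computation runs: Maybe has kind
-- * -> *, but Div expects a second argument of kind *.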

So, if there is any sort of equivalence between dynamic typing and `extreme static typing', it depends on what is IMO a pretty self-serving definition of `extreme static typing'.

Update: Added some code, fixed Okasaki's paper's publication date, added a publicly accessible link to a PS version of it

Ehud Lamm - Re: Every Language War Ever
3/3/2004; 9:11:32 AM (reads: 357, responses: 2)
Generics for the masses. Ralf Hinze

Hey! We want to link to this from the home page. We need a URL...

Frank Atanassow - Re: Every Language War Ever
3/3/2004; 10:31:47 AM (reads: 335, responses: 1)
There is no URL yet; it's still in draft form.

Ehud Lamm - Re: Every Language War Ever
3/3/2004; 10:44:46 AM (reads: 337, responses: 0)
I got that, but people do put drafts on the web you know... (Anyway, I'd love to see the paper).

Oleg - Re: Every Language War Ever
3/5/2004; 7:15:39 PM (reads: 200, responses: 1)
Frank:
In Oleg's system information can only flow from the static to dynamic phases; most people (mind you, not me, necessarily) want it to flow the other direction as well. For that, you need dependent types, which C++ templates cannot emulate.

Actually, we don't need dependent types: higher-ranked types suffice. We can write pure-functional Haskell code that statically guarantees, for example, that the functions head and tail will be applied only to non-empty lists -- even for lists that are allocated dynamically. It is possible not only to cast a statically-sized vector to a generic one (C and Fortran do that automatically on argument passing), but also to do the converse. Of course in the latter case the "static" size cannot be known precisely at compile time, but a static constraint can -- which is enough to assure the safety of the head and tail operations, for example. I have Haskell code to prove it, but I'd like to postpone announcing it publicly.

Frank:

Furthermore, if you try to actually use such a division type function in practice, you will see that it quickly leads to very, very complex types. For example, the fact that the denominator must be non-zero in the function roots(a,b,c) which computes the roots of a quadratic equation must be propagated backward as a constraint on the argument a. In this way the types can get arbitrarily complex.

True -- if we do it literally as you suggest. But there is a simpler way: introduce a witness and recover a static constraint from a dynamic value. A vector element accessor in Java, for example, first witnesses that the index is within range, and then goes ahead and fetches the desired element. In Java, witnessing is done on each element access and is therefore inefficient.

A more efficient approach is to do a run-time check once, at the entrance to a large section of code. For example: on_zero val f1 f2, where the continuation f1 is invoked when val is non-zero, and the continuation f2 is invoked when val is zero. Although we can't know the value val statically, we can know (from the code of on_zero) that f1 will be invoked only when val is non-zero. We can express this constraint in the type of f1. Therefore, all code that runs inside f1 knows that val, whatever it is, is non-zero. We know that at compile time; therefore, we statically know that any division by val in the f1 branch is safe. Obviously, on_zero requires a higher-ranked type. It is possible in Haskell.

We should remark that a variation of the technique is possible in Java: casting an Object to a more precise reference type is witnessing, a run-time check. However, once we have passed it, we are certain about the static type of the value, and we can avoid any further run-time type identification.
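
A much-simplified sketch of the shape of the trick, with phantom tags standing in for sizes:

newtype Checked tag = Checked Double   -- constructor confined to the trusted kernel

data ZeroT    = ZeroT                  -- phantom tags recording what the check learned
data NonZeroT = NonZeroT

-- The single run-time check: each continuation receives a value whose
-- tag reflects the branch taken, so the fact survives into the types.
on_zero :: Double -> (Checked NonZeroT -> r) -> (Checked ZeroT -> r) -> r
on_zero val f1 f2 = if val == 0 then f2 (Checked val) else f1 (Checked val)

-- Division demands the witness in its type and elides its own check.
divW :: Double -> Checked NonZeroT -> Double
divW x (Checked y) = x / y

In this simplified form a rank-1 type suffices; the higher-ranked type enters once the witnessed fact involves a quantified type variable, as it does for vector sizes.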

Frank:

Third, Oleg's vision of the `ultimate' type system is not mine. As I have said before, I think equality is the least interesting and useful notion of identity on types: isomorphism is more important, because it identifies types which `look different' but `behave the same'. From that point of view, all of Oleg's types are trivial: every one-element set is isomorphic to any other.

In the code that I just finished, it is possible to express not only equality but arbitrary constraints on types (e.g., that such-and-such a relation among the types must hold). The relation may include existentially quantified variables and may be inductively defined. In my example, the types in question are isomorphic to non-negative integers (in decimal notation), and the relations are arithmetic equalities and inequalities.

Frank:

BTW, Oleg, there is a small bug in the type addition section: it will fail on a + 0. You either need to add a case a + 0 = a, or change the successor case:

Actually, there is no bug. The case that you point out already exists:

template<typename T>
struct Add<T, Zero> {
  typedef T sum;
};

It bears no comment though. I'll add one.

Andrew Cooke wrote:

Vaguely related to all this, how do I define a type in Haskell or ML that reflects (say) the size of an array? I was trying to work out how to do this last night and couldn't see a simple solution - am I missing something obvious? I can't see how to get a numerical value "into" the type system (I really don't want to start defining numbers in terms of zero and successor :o).

The code that I mentioned does exactly that -- and more. For an old version, you may want to refer to
http://pobox.com/~oleg/ftp/Haskell/number-parameterized-types.html

BTW, that page shows how to do addition on decimal types -- something that Matthias Blume has mentioned in his paper as being likely unattainable in SML. The CardSum relation in the new code can do not only addition but subtraction of arbitrary precision decimal types.

Frank Atanassow - Re: Every Language War Ever
3/8/2004; 7:46:22 AM (reads: 132, responses: 0)
Actually, we don't need dependent types: higher-ranked types suffice. We can write pure-functional Haskell code that statically guarantees, for example, that the functions head and tail will be applied only to non-empty lists -- even for lists that are allocated dynamically. It is possible not only to cast a statically-sized vector to a generic one (C and Fortran do that automatically on argument passing), but also to do the converse. Of course in the latter case the "static" size cannot be known precisely at compile time, but a static constraint can -- which is enough to assure the safety of the head and tail operations, for example. I have Haskell code to prove it, but I'd like to postpone announcing it publicly.

Well, of course it is hard for me to comment on something I have never seen. I know there are techniques for guaranteeing certain kinds of invariants statically, and with higher-rank types in particular. In order to write generic code for the sized-arrays representation I posted above, one does something of this sort. However, I'm pretty sure that there is a fundamental difference between this sort of thing and `dependent types', in the sense of types which are parametrized by values.

But there is a simpler way: introduce a witness and recover a static constraint from a dynamic value.

You mean like Altenkirch and McBride's verify operator?

A vector element accessor in Java, for example, first witnesses that the index is within the range, and then goes ahead and fetches the desired element.

In what sense is this a witness? My idea of a `vector element accessor' is a function which might generate a witness (or fail), but not a witness itself.

For example: on_zero val f1 f2, where the continuation f1 is invoked when val is non-zero, and the continuation f2 is invoked when val is zero. Although we can't know the value val statically, we can know (from the code of on_zero) that f1 will be invoked only when val is non-zero. We can express this constraint in the type of f1. Therefore, all code that runs inside f1 knows that val, whatever it is, is non-zero. We know that at compile time; therefore, we statically know that any division by val in the f1 branch is safe.

How is this different from a case expression (or the maybe function in Haskell)?

on_zero :: Nat -> (Nat -> a) -> a -> a
on_zero val f1 f2 =
  case val of
    0 -> f2 -- no need to pass anything, since we know val==0
    n -> f1 (n-1) -- use iso Nat + 1 =~ Nat

Obviously, on_zero requires a higher-ranked type.

I don't see why. What is the type of your division function?

In the code that I just finished, it is possible to express not only equality but arbitrary constraints on types (e.g., so-and-so relation among the types must hold). The relation may include existentially-quantified variables and may be inductively defined. In my example, the types in question are isomorphic to non-negative integers (in decimal notation), and relations are arithmetical equalities and inequalities.

Do you really mean constraints on types or on values of the types? For example, can you define the type of homomorphisms of monoids (functions obeying associative and identity laws)?

Oleg - Re: Every Language War Ever
3/9/2004; 9:24:24 PM (reads: 79, responses: 1)
For example: on_zero val f1 f2, where the continuation f1 is invoked when val is non-zero, and the continuation f2 is invoked when val is zero. Although we can't know the value val statically, we can know (from the code of on_zero) that f1 will be invoked only when val is non-zero. We can express this constraint in the type of f1. Therefore, all code that runs inside f1 knows that val, whatever it is, is non-zero. We know that at compile time; therefore, we statically know that any division by val in the f1 branch is safe.

How is this different from a case expression (or the maybe function in Haskell)?

on_zero :: Nat -> (Nat -> a) -> a -> a
on_zero val f1 f2 =
   case val of
     0 -> f2
     n -> f1 (n-1)

The type of the on_zero function above gives no indication that f1 is invoked only when the value is non-zero. We have to keep that in mind. What if the first clause were written "42 -> f2" by mistake? Then f1 could be invoked when val is zero. So, if f1 does some division, it'd better check the divisor, or risk an error.

In my case, vhead has the type

vhead:: (Card size, Card size1, CardSum (D1 Sz) size1 size) => 
        (Vec size a) -> Vec (D1 Sz) a

which says that vhead can be applied only to non-empty vectors. That is, there must exist a cardinal number size1 such that adding 1 to it produces the size of the input vector. The compiler will make sure that vhead is not applied to an empty vector, so vhead may safely elide any range checks. The following function plays the role of on_zero:

num2cn:: Int -> 
         (forall c cp1. (Card c,Card cp1,CardSum (D1 Sz) c cp1) => 
          c -> cp1 -> w) -> w

As we can see, the positiveness of cp1 is written in the type. The typechecker will see to it that num2cn's code indeed assures that condition, and it can propagate the positiveness condition throughout the code. Eventually we have to make a run-time check of a sort that the typechecker cannot guarantee. However, the size of that trusted kernel is extremely small (only two simple functions). If we're sure of the trusted kernel and of the typechecker, we are therefore positive that our code shall never attempt to take the head or the tail of an empty vector, and we can safely elide the run-time checks.

Do you really mean constraints on types or on values of the types? For example, can you define the type of homomorphisms of monoids (functions obeying associative and identity laws)?

The constraints are indeed on types -- and are apparent to the typechecker. If you can define a typeclass Monoid (which is defined already), then that constraint can be used just like CardSum above.

Jim Apple - Re: Every Language War Ever
3/9/2004; 10:48:18 PM (reads: 74, responses: 0)
I'm not sure how this fits in with the present discussion, but I don't see

vhead:: (Card size, Card size1, CardSum (D1 Sz) size1 size) =>
        (Vec size a) -> Vec (D1 Sz) a

as a dependent type: its type does not depend on the values of its arguments. The Dependent ML distribution has the following:

fun ack m n =
    if m = 0 then n+1
    else if n = 0 then ack (m-1) 1
    else ack (m-1) (ack m (n-1))
withtype {i:nat,j:nat} <i,j> => int(i) -> int(j) -> [k:nat] int(k)

This does not calculate the value of ack at compile time, but it can guarantee at compile time that ack terminates.

The <i,j> is a programmer-supplied termination metric: every recursive call must decrease the pair <i,j> lexicographically, i.e. the callee's <i',j'> must satisfy i' < i || (i' == i && j' < j).