Applied Type System vs. Epigram

There's been much discussion of Epigram, but I have yet to see much discussion of Applied Type System. There have been a few stories here on LTU, but relatively little discussion. This surprises me as ATS seems quite advanced.

As a relative newcomer to type theory, I'm interested in the relative merits of ATS vs. Epigram vs. any other dependently typed languages that might exist. From my understanding, Epigram is a Pure Type System that seeks to unify types and terms, where ATS distinguishes the statics and dynamics of the language. What benefits and limitations do these approaches have on complexity for both developer and implementor?



I'm not sure about ATS in particular, but I think it follows a similar route to DML.

DML only has a restricted form of dependent types, added to a conventional language. Epigram is focused squarely on pushing dependent typing and has a lot more power with regards to defining your own datatypes.
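To make "restricted dependent types" concrete: the flavor of DML-style indexing can be loosely approximated in TypeScript, where the index lives at the type level as a number literal, entirely separate from run-time values. This is only an illustrative sketch, not DML itself:

```typescript
// Vec<T, N>: arrays whose length N is checked statically.
// The index N is "static" data (a type-level literal), never a run-time value.
type Vec<T, N extends number, Acc extends T[] = []> =
  Acc['length'] extends N ? Acc : Vec<T, N, [T, ...Acc]>;

// Concatenation whose result length is computed at the type level.
function append<A extends unknown[], B extends unknown[]>(a: A, b: B): [...A, ...B] {
  return [...a, ...b];
}

const v: Vec<number, 3> = [1, 2, 3];      // accepted
// const bad: Vec<number, 3> = [1, 2];    // rejected at compile time
const w = append(v, [4, 5] as [number, number]); // type has length 5
```

The checker verifies the arithmetic on lengths automatically, which is roughly the DML experience, though DML's constraint solver handles much richer linear arithmetic.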

Another language is Cayenne. Like DML, Cayenne isn't a completely new language designed around dependent types but an existing language extended with dependent types instead. Whilst type checking always terminates in Epigram, it is undecidable in Cayenne.

I also feel Ontic is important.

Then there are the various proof assistants - Coq, LEGO, etc. I don't have much experience with these. If someone could elaborate on their differences and advantages/disadvantages I would appreciate it.

Relation to DML

I'm not sure about ATS in particular, but I think it follows a similar route to DML.

I think Hongwei Xi, the main researcher behind ATS, was also behind DML, but it's my understanding that ATS was created as an alternative to Pure Type Systems, while retaining its dependent typing power. I could be wrong though, hence the request for clarification. :-)

DML only has a restricted form of dependent types, added to a conventional language.

Yes, I'm aware of this. I've seen a vague reference or two to ATS also supporting "restricted dependent types", but it was never clarified in layman's terms.

I also feel Ontic is important.

Indeed, I've heard a few rumblings on Ontic here and there, but it doesn't seem like much work is happening in that direction.

Then there are the various proof assistants - Coq, LEGO, etc. I don't have much experience with these.

ATS has support for combining programming with theorem proving, which is why it seems so strange that it's been discussed so little here. It seems very interesting.


I feel the strongest difference is what kind of programs you can write with which expressiveness in specifications.

  • DML is ML with automatic proving of arithmetic constraints in types. The others allow roughly the same expressiveness in specifications.
  • Epigram is just as powerful as Coq and LEGO; it is a truly dependently typed programming language where proofs and algorithmic code are intermingled in an elegant way.
  • ATS separates proving and coding a little more, and proposes paradigms other than functional programming. The main difference I see between Epigram and ATS is the objective: the former proposes a new way of programming, while the latter integrates all paradigms into a unifying one using dependent types and relying explicitly on the manipulation of proof objects.
  • In Coq (and probably LEGO), you could write the same programs as in Epigram, except that it would generally be a nightmare for a number of practical and theoretical reasons. Epigram is a programming language that gives you syntax and constructs adapted to programming, plus a nice typing and editing system. Here the tactics to prove/program are part of the language (rec, <=, case, etc.).
    On the other hand, Coq and LEGO are proof assistants, so their main focus is on proving, not on the easy manipulation of terms. That's roughly where Curry-Howard stops for them: you can write simply typed terms easily, but once you want to give a rich specification to a program you're on your own. You can use the tactic language to prove arbitrarily complex goals, but it's a bad idea to use it for creating programs.
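The "explicit manipulation of proof objects" mentioned above can be illustrated, very loosely, outside a dependently typed language. A sketch in TypeScript (hypothetical names; unlike ATS or Coq, TypeScript cannot tie the proof to a specific number, so here the pairing is maintained by construction only):

```typescript
// An "evenness proof" as ordinary data: zero is even, and evenness is
// preserved by adding 2. In ATS/Coq the type system would relate the
// proof to a particular number; here we bundle them together by hand.
type Even = { tag: 'zero' } | { tag: 'plus2'; rest: Even };

// A number paired with the proof that it is even (correct by construction).
interface EvenNat { n: number; proof: Even; }

const zero: EvenNat = { n: 0, proof: { tag: 'zero' } };

function plus2(e: EvenNat): EvenNat {
  return { n: e.n + 2, proof: { tag: 'plus2', rest: e.proof } };
}

// half consumes the proof: it recurses over the proof structure rather
// than trusting the number, so it cannot run without evidence.
function half(e: EvenNat): number {
  let k = 0;
  let p: Even = e.proof;
  while (p.tag === 'plus2') { k++; p = p.rest; }
  return k;
}

const six = plus2(plus2(plus2(zero))); // n = 6, with proof
// half(six) computes 3 from the proof object itself
```

In a real dependently typed setting the payoff is that the checker rejects any call to half lacking a genuine evenness proof; the sketch only conveys the programming style of carrying proofs as first-class data.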

Thanks for the explanation!

Thanks for the explanation! It would seem, then, that ATS may be more verbose but still theoretically as powerful as Epigram. The "backwards-compatible" paradigms are certainly interesting though, if only to see how they're typed, and that should give an idea of how useful they actually are.

working ATS programs

When browsing these LtU articles about exotic programming languages I often wonder what the programs would look like compared to other languages.

Now the benchmarks game shows a dozen working ATS programs we can look at.

Is ATS really "dependently-typed"?

I don't think of it that way. ATS has a clear separation between statics and dynamics, and the statics part may never refer to the dynamics. My definition of "a dependent type system" is based on a cycle in the language's syntactic classes, where some class A (like expressions) is described by some class B (like types), where the syntactic definition of B refers to A. In Coq, for instance, there is just one syntactic class, and it is typed in terms of itself.
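The contrast can be sketched as two grammars (a simplified caricature, hedged: this is not the actual syntax of either language):

```latex
% ATS-style stratification: statics and dynamics are distinct syntactic
% classes, and the statics never mention dynamic terms.
\begin{align*}
  \text{statics}  \quad s    &::= a \mid 0 \mid s_1 + s_2 \mid \dots \\
  \text{types}    \quad \tau &::= \mathsf{int}(s) \mid \tau_1 \to \tau_2
                                  \mid \forall a{:}\sigma.\ \tau \\
  \text{dynamics} \quad e    &::= x \mid \lambda x.\ e \mid e_1\, e_2 \mid \dots
\end{align*}
% Full-spectrum (Coq-style): a single syntactic class, so the cycle appears
% and types may mention arbitrary terms:
%   t ::= x \mid \lambda x{:}t.\ t \mid t\ t \mid \Pi x{:}t.\ t \mid \mathsf{Type}
```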

While I'm at it, I might as well repeat my usual summary of the world of dependent-ish-typed languages. If you will be writing a program with a non-trivial correctness proof that doesn't follow the structure of the program, then Coq is the only practical choice today, as the others force you to do far too much manual writing of proof terms.

"Dependent types"

I use the term "dependent types" more broadly, to refer to any language with fancy type-level data.

For marketing purposes, it's good to have one term for the high-level functionality that the programmer sees: rich type-level data that can be used to specify and verify properties of programs. And "dependent types" seems like as good and historically relevant a term for this as anything else (anyone have counter-proposals?).

The technical methods of providing this functionality are a second-level concern, though of course it's important for researchers to be able to convey a language design quickly, which requires more detailed terminology (e.g., I've heard "phase-sensitive dependent types" for DML-style, or "full-spectrum dependency" for Epigram). However, I think the language design space is more complicated than the syntactic condition you described above. For example, if I presented the syntax of the simply typed lambda calculus with one syntactic category, and then sorted out the difference between expressions and types in the typing judgement, your definition would deem it dependently typed. Alternatively, what if a language allows _some_ programs to be used in types, but not others (e.g., by restricting the types at which dependency is allowed)? Or what if a language allows run-time computation with "type constructors", a static syntactic category? There are lots of variations on restricted forms of dependent types, but we need one word for the overall concept.

It's worth adding...

...that what's going on in ATS is not merely a restriction of what (e.g.) Coq or Agda does. The dependent quantifiers in ATS have an intersection-style interpretation rather than a function-style interpretation, which means they validate Girard's "shocking isomorphisms", such as (forall a. A + B) being isomorphic to (forall a. A) + (forall a. B). Intuitively, since the quantified variable carries no run-time content, an inhabitant of the sum must commit to one branch uniformly in a; under a function-style interpretation the choice could depend on a, and the isomorphism fails.