
Please help find a paper/tutorial

Hi,

I wanted to reread a paper that discusses how to extend data types in Haskell in a flexible way.
The paper was motivated by an evaluator example that can evaluate additions.
Suppose multiplication expressions now also need to be supported; then every function that looks into the data type representing expressions must be changed.
The idea of this paper was to introduce an extra indirection layer and define recursive types in a roundabout way. For example natural numbers can be defined as:

newtype Mu f = In (f (Mu f))
data NatF b = Zero | Succ b
type Nat = Mu NatF

Now 0 is encoded as In Zero, 1 as In $ Succ (In Zero), etc.
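To make the payoff of this encoding concrete, here is a minimal sketch (my own, not necessarily the paper's exact presentation) of the generic fold, the catamorphism, that this style enables. All recursion lives in `cata`; functions like `toInt` only supply a non-recursive algebra:

```haskell
-- Fixed point of a functor, as in the post
newtype Mu f = In (f (Mu f))

-- The "pattern functor" of natural numbers
data NatF b = Zero | Succ b

type Nat = Mu NatF

instance Functor NatF where
  fmap _ Zero     = Zero
  fmap f (Succ b) = Succ (f b)

-- The generic fold (catamorphism): the only place recursion happens
cata :: Functor f => (f a -> a) -> Mu f -> a
cata phi (In t) = phi (fmap (cata phi) t)

-- Interpreting a Nat as an Int is just an algebra for NatF
toInt :: Nat -> Int
toInt = cata phi
  where
    phi Zero     = 0
    phi (Succ n) = n + 1

two :: Nat
two = In (Succ (In (Succ (In Zero))))
```

Because the recursion is factored out, extending the expression language (e.g. adding a multiplication constructor) only requires extending the functor and the algebras, not every recursive traversal.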

The paper then goes on talking about building higher-order functions in this more flexible framework.

I'm not looking for the original papers on catamorphisms and anamorphisms.
This paper is rather a tutorial that builds on top of those ideas.
I couldn't recall the title or the author.
I thought it was in functional pearls but couldn't find it.

Thanks for your help.

Learning language design

I have been very curious about how language design is done, so I decided to read up on it. I started with "Programming Language Pragmatics", and I would say that just reading the first two chapters has given me a lot of insight into scanners, parsers (LL/LR), context-free grammars, productions, etc. I wanted to know how I should proceed. There are topics I would like to explore in detail. Or should I just read the book through completely first?
I also have Wirth's book on compiler construction, and it seems to be a nice, practical book, so I am a bit confused about which path to take now.
My initial goal is to build interpreters in different languages, and then to create a small domain-specific language, just for fun.

New Members

I am glad to see many new members joining the LtU daily.

This is a short reminder to all the new users to please read the FAQ and policy documents, and use LtU for the intended purposes of the site (the LtU spirit page may also be of interest). As the community gets larger it becomes more important to keep in mind the shared interests that bring most people to the site.

I also recommend reading the getting started thread (linked from the FAQ), which contains many useful reading suggestions, as well as the various other pages linked from the navigation bar on the left.

As always, old-time members are urged to assist the newer members and make them feel welcome in our community.

Why functional programming matters

I am reading the paper, but I got a bit lost at this piece:
"sum = reduce add 0". How did the author arrive at this statement? What does it mean? And later:
"(reduce f x) nil = x
(reduce f x) (cons a l) = f a ((reduce f x) l)
Here we have written brackets around (reduce f x) to make it clear that it replaces sum." Again, I don't understand what is happening.
"A function of 3 arguments such as reduce, applied to
only 2 is taken to be a function of the one remaining argument" How is reduce a function of 3 arguments?
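The `reduce` in Hughes's paper is what Haskell calls `foldr`. A short sketch of the quoted definitions in Haskell syntax, with the partial application (currying) spelled out:

```haskell
-- The paper's reduce, written in Haskell; it takes 3 arguments:
-- a combining function f, a base value x, and a list.
reduce :: (a -> b -> b) -> b -> [a] -> b
reduce f x []      = x                    -- (reduce f x) nil = x
reduce f x (a : l) = f a (reduce f x l)   -- (reduce f x) (cons a l) = f a ((reduce f x) l)

-- Currying: applying reduce to only 2 of its 3 arguments
-- yields a function of the one remaining (list) argument.
sum' :: [Int] -> Int
sum' = reduce (+) 0
```

So `sum = reduce add 0` defines `sum` as `reduce` already applied to `add` and `0`, still waiting for its list; the brackets around `(reduce f x)` in the paper emphasize that this partially applied function is the thing that plays the role of `sum`.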

Software Craftsmanship: Apprentice to Journeyman

O'Reilly is hosting a collaborative book/wiki called Software Craftsmanship: Apprentice to Journeyman. It's structured as a series of "recipes" on how to approach different aspects of software development.

Foundations for Structured Programming with GADTs

Foundations for Structured Programming with GADTs, Patricia Johann and Neil Ghani. POPL 2008.

GADTs are at the cutting edge of functional programming and become more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve this problem by showing that the standard theory of datatypes as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics for GADTs to derive expressive and principled tools -- analogous to the well-known and widely-used ones for algebraic and nested data types -- for reasoning about, programming with, and improving the performance of programs involving, GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higher-order functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for simpler-but-equivalent ones for which initial algebra semantics can be derived.

I found this to be a really interesting paper, because the work had an opposite effect on my opinions from what I expected. Usually, when I see a paper give a really clean semantics to an idea, I end up convinced that this is actually a good idea. However, Johann and Ghani gave a really elegant treatment of GADTs, which had the effect of making me think that perhaps Haskell-style GADTs should not be added as is to new programming languages!

This is because of the way they give semantics to GADTs as functors |C| -> C, where C is the usual semantic category (e.g., CPPO) and |C| is the discrete category formed from C by retaining the objects and only the identity morphisms. If I understand rightly, this illustrates that the indices in a GADT are only being used as index terms, and their structure as types goes unused. So it's really kind of a pun, arising from the accident that Haskell has * as its only base kind. This makes me think that future languages should include indices, but that these index domains should not coincide with the kind * of types. That is, users should be able to define new kinds distinct from *, use those as indices to types, and these are the datatypes which should get a semantics in the style of this paper.
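For readers unfamiliar with GADTs, here is a standard textbook example (not taken from the paper) showing what "indices" means in practice: the type parameter of `Expr` is used purely as an index constraining which terms are well-formed, which is exactly the role the |C| -> C semantics captures:

```haskell
{-# LANGUAGE GADTs #-}

-- A GADT of typed expressions; the index `a` records the result type.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int  -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a   -> Expr a -> Expr a

-- Pattern matching refines the index, so eval needs no runtime tags.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e
```

Note that `Int` and `Bool` appear here only as index terms: `eval` never uses them as inhabited types beyond returning values of them, which is the "pun" described above.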