Lambda the Ultimate

Deriving backtracking monad transformers
started 10/16/2002; 2:05:38 PM - last post 10/17/2002; 9:24:33 AM
Ken Shan - Deriving backtracking monad transformers
10/16/2002; 2:05:38 PM (reads: 2417, responses: 2)
Deriving backtracking monad transformers
Hinze, Ralf. 2000. Deriving backtracking monad transformers. In ICFP '00: Proceedings of the ACM international conference on functional programming, vol. 35(9) of ACM SIGPLAN Notices, 186-197. New York: ACM Press.
In a paper about pretty printing J. Hughes introduced two fundamental techniques for deriving programs from their specification, where a specification consists of a signature and properties that the operations of the signature are required to satisfy. Briefly, the first technique, the term implementation, represents the operations by terms and works by defining a mapping from operations to observations -- this mapping can be seen as defining a simple interpreter. The second, the context-passing implementation, represents operations as functions from their calling context to observations. We apply both techniques to derive a backtracking monad transformer that adds backtracking to an arbitrary monad. In addition to the usual backtracking operations -- failure and nondeterministic choice -- the Prolog cut and an operation for delimiting the effect of a cut are supported.
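
For readers who want a feel for where the derivation ends up, here is a minimal Haskell sketch of a context-passing (two-continuation) backtracking monad transformer in the spirit of the paper. The names (BacktrT, failure, orElse, runAll) are mine rather than the paper's, the cut and cut-delimiting operations are omitted, and MonadTrans is assumed to come from the transformers package. The idea: a computation is a function of its success and failure continuations.

    {-# LANGUAGE RankNTypes #-}

    import Control.Monad.Trans.Class (MonadTrans (..))

    -- Context-passing representation: a computation is a function of its
    -- success continuation and its failure continuation.
    newtype BacktrT m a = BacktrT
      { runBacktrT :: forall b. (a -> m b -> m b)  -- success continuation
                   -> m b                          -- failure continuation
                   -> m b
      }

    instance Functor (BacktrT m) where
      fmap f m = BacktrT $ \sk fk -> runBacktrT m (sk . f) fk

    instance Applicative (BacktrT m) where
      pure a    = BacktrT $ \sk fk -> sk a fk
      mf <*> ma = BacktrT $ \sk fk ->
        runBacktrT mf (\f fk' -> runBacktrT ma (sk . f) fk') fk

    instance Monad (BacktrT m) where
      m >>= f = BacktrT $ \sk fk ->
        runBacktrT m (\a fk' -> runBacktrT (f a) sk fk') fk

    instance MonadTrans BacktrT where
      lift m = BacktrT $ \sk fk -> m >>= \a -> sk a fk

    -- The usual backtracking operations: failure and nondeterministic choice.
    failure :: BacktrT m a
    failure = BacktrT $ \_sk fk -> fk

    orElse :: BacktrT m a -> BacktrT m a -> BacktrT m a
    orElse m n = BacktrT $ \sk fk -> runBacktrT m sk (runBacktrT n sk fk)

    -- Observation: collect all answers as a list in the base monad.
    runAll :: Monad m => BacktrT m a -> m [a]
    runAll m = runBacktrT m (\a rest -> fmap (a :) rest) (return [])

For instance, runAll (orElse (return 1) (orElse failure (return 2))) yields [1,2] in the base monad. The point of the paper is that a representation like this need not be guessed: it can be calculated from the specification.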
A shining and useful example of deriving programs from specifications.
Posted to functional by Ken Shan on 10/16/02; 2:09:00 PM

Ehud Lamm - Re: Deriving backtracking monad transformers
10/17/2002; 7:35:21 AM (reads: 726, responses: 1)
Indeed a shining example.

It may be a bit difficult to understand the derivation process if you are not familiar with Hughes's work; see section 3 of Hughes's paper.

Notice that if you approach this with the software engineering notion of specification as abstraction (i.e., hiding implementation details), you will have a bit of adjusting to do.

Ehud Lamm - Re: Deriving backtracking monad transformers
10/17/2002; 9:24:33 AM (reads: 773, responses: 0)
Let me elaborate.

When we think about building software abstractions, it is common to note that the first implementation we build is rarely the most efficient one possible. It may be fast enough, but often we need to change the implementation to get better performance: for example, by changing the data structures used or the choice of algorithms.

Still, if the abstraction is good, meaning it has a well-designed interface, we can change the implementation while keeping the interface stable. This is one of the major benefits of information hiding and data abstraction (another benefit, by the way, is separate compilation).

The simple implementation we design at first can be any implementation that satisfies the specification. Wouldn't it be great if we could arrive at some such implementation simply by applying transformations to the (algebraic) specification?
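
To make this concrete in the setting of the paper, here is a hedged Haskell sketch of the first kind of implementation: a term implementation, in which computations are simply the terms built from the signature's operations (here just failure and choice, with no cut and no base monad) and a single observation function acts as a small interpreter. The names Backtr, Fail, :||: and observe are mine, not the paper's.

    import Control.Monad (ap, liftM)

    -- Term implementation: the operations of the specification are
    -- represented by the terms they build.
    data Backtr a
      = Return a                  -- return of the monad
      | Fail                      -- failure
      | Backtr a :||: Backtr a    -- nondeterministic choice

    instance Functor Backtr where
      fmap = liftM

    instance Applicative Backtr where
      pure  = Return
      (<*>) = ap

    instance Monad Backtr where
      Return a   >>= f = f a
      Fail       >>= _ = Fail
      (m :||: n) >>= f = (m >>= f) :||: (n >>= f)

    -- The mapping from terms to observations: a simple interpreter
    -- that enumerates the answers.
    observe :: Backtr a -> [a]
    observe (Return a) = [a]
    observe Fail       = []
    observe (m :||: n) = observe m ++ observe n

Naive, but it plainly satisfies the specification; the two-continuation sketch further up the thread is the sort of context-passing counterpart one would like to reach from here.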

Of course, it would be even better if we could then optimize the implementation by applying program transformations to that basic implementation!

(Of course, this will only get you so far. It will not invent new algorithms for you. But heck, you are paid to do some of the work yourself!)