Some stupid questions on denotational semantics

I am not sure whether it is appropriate to ask for advice in this forum. If it isn't, please feel free to ignore/delete this message.

I have some questions regarding denotational semantics. I know operational semantics very well, but I have not yet understood some basic things about the denotational approach.

1. Every textbook says that "compositionality" is a very important property of d.s. I understand what this means. However, nowhere have I read *why* this is such an important property, or what the consequences of being compositional are.

2. The examples of d.s. found in most textbooks are so simple that they look trivially equivalent to a corresponding operational semantics. However, I have read that d.s. is a more "abstract", "high-level" approach to describing semantics. What would be an example where this property is evident in comparison to op. sem.? Also, what, in general, are the advantages/disadvantages compared to op. sem.?

3. I have heard people say that writing an interpreter for a language in Haskell is roughly equivalent to writing a denotational semantics for it. Could somebody enlighten me why (or why not) this is the case?


Denotational Semantics Unleashed

*why* this is such an important property and what the consequences of being compositional are

Basically this is good because you get more meaning for free: you can understand a new composite expression by breaking it down into its simpler known parts.

Also, it makes it much easier to prove things about the semantics, since you can use structural induction.
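
To make that concrete, here's a tiny sketch (my own toy example, not from any particular textbook): a compositional denotation for a little arithmetic language, where the meaning of each expression is built only from the meanings of its parts:

-- Toy expression language (hypothetical, for illustration only).
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- Compositional denotation: the meaning of a composite expression
-- is built only from the meanings of its immediate subexpressions.
denote :: Expr -> Int
denote (Lit n)   = n
denote (Add a b) = denote a + denote b
denote (Mul a b) = denote a * denote b

Because each clause mentions only the denotations of immediate subexpressions, a proof about denote can proceed by structural induction over Expr, with exactly one case per clause.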

Also, what are in general the advantages/disadvantages compared to op.sem.

I would say it is a little more difficult to understand for most people (others may disagree ;-) )
and the notation sucks. ;-)

As an advantage, you can reason about the semantic system "at once", rather than having to think about discrete transitional states of affairs.

Haskell is roughly equivalent to writing a denotational semantics

Any relatively pure functional language is going to model a semantic structure that is available "all at once", since you are not allowed to change state.

PS. I hope this isn't a homework assignment. ;-)

Answer to Q1

I'm not an expert, but I can answer question 1.

A language being compositional allows you to reason about it (i.e. prove things) in a nice and simple way. If you want to prove something about a language, you can use induction over its grammar. As long as it's compositional, the meaning of a particular term does not depend on the context (i.e. the larger term it is part of). A denotational semantics translates some language into another one that has this nice property.

An interpreter in Haskell (or Scheme) typically works by recursing over the grammatical structure of the language it interprets and passing along an environment structure as it goes. Basically, when the interpreter is given a program, it looks at the top-level term, packages any context it needs into an environment structure, passes that environment into a recursive call to itself for each of the subterms, and then uses what it gets back to execute the term. If you imagine that the interpreter does not actually execute the term, but instead returns the Haskell (or Scheme) code that it would execute, then basically that is also how denotational semantics is defined.

That may sound a lot like how operational semantics works too; the difference is that once you 'evaluate' a term using a denotational semantics, you are left with a term in another language, whereas if you use an operational semantics you actually have to run the program, and you are left with a value or a normal form (or a non-terminating process) in the same language you started with.
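
Here's a quick sketch of what that looks like (my own toy fragment, with made-up names): a denotational semantics for a tiny imperative language, written as a Haskell program. 'Evaluating' a command doesn't run it step by step; it hands you back a term of the meta-language, namely a Haskell function from stores to stores.

import qualified Data.Map as M

-- A toy imperative fragment (hypothetical, for illustration only).
type Var   = String
type Store = M.Map Var Int

data Aexp = Num Int | Ref Var | Plus Aexp Aexp
data Com  = Skip | Assign Var Aexp | Seq Com Com

-- Arithmetic expressions denote functions from stores to integers.
aval :: Aexp -> Store -> Int
aval (Num n)    _ = n
aval (Ref x)    s = M.findWithDefault 0 x s
aval (Plus a b) s = aval a s + aval b s

-- Commands denote store transformers; each clause uses only the
-- denotations of the immediate subterms.
denote :: Com -> Store -> Store
denote Skip         = id
denote (Assign x a) = \s -> M.insert x (aval a s) s
denote (Seq c1 c2)  = denote c2 . denote c1

A while-loop would need a least fixed point on the meta-language side, which is where domain theory usually comes in; it is omitted from this sketch.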

The practical difference is that with operational semantics you end up reasoning about the rules that define the semantics, but with denotational semantics you can reason in the meta-language you translate to. This is often much nicer, since there are usually theorems that have already been developed for the meta-language. That does not help you, of course, if you aren't already familiar with the meta-language (e.g. some obscure mathematical structure), which is probably why denotational semantics seems to be popular mainly with those whose background is mathematics.

Update: see this brief paper by C.A.R. Hoare on translating between denotational, operational, and algebraic semantics.

A good source of examples

I recommend "Lisp in Small Pieces" by Queinnec (xlated into English in 1996) (Cambridge University Press). It's an excellent treatise on building a Lisp system. But on the way, he offers a chapter on DS and derives the DS for Scheme in the text. Among some of the side-effects, it helped me understand some of the grousing about Common Lisp coming from the language theorists.

I find his prose to be sufficiently entertaining for me to stick with the math. BTW, I agree with Josh that DS is of most interest to the mathematically inclined.

I am not sure whether it is a

I am not sure whether it is appropriate to ask for advice in this forum.

It's perfectly alright.

For the motivation to use d.s. I suggest reading the first page on d.s. in Winskel. I think it's a nice summary.

Regarding the similarity of interpreters and d.s., I suggest looking at the d.s. of Scheme in the R5RS (section 7.2) and comparing it to a meta-circular Scheme interpreter. The correspondence is obvious.

I also suggest reading Reynolds' paper on definitional interpreters.

D.S. of Scheme

Regarding the similarity of interpreters and d.s., I suggest looking at the d.s. of Scheme in the R5RS (section 7.2) and comparing it to a meta-circular Scheme interpreter. The correspondence is obvious.

To make that correspondence even more obvious, I've implemented the Scheme D.S. as a Scheme program, and built a toy Scheme interpreter on top of that. IOW, the interpreter's core is a transliteration of the R5RS D.S. into Scheme. See the Documentation and Source Code Reference to browse the code.

That this really is a "transliteration" is proved by mechanically transforming the Scheme code back into the DS-style Greek notation. For the above page, I did that using Christian Queinnec's LiSP2TeX, which he uses to generate all the D.S. in his books, like Lisp in Small Pieces.

Answer to Q2

It used to be the case that it was much easier to prove high-level properties using den. sem. than op. sem., but that den. sem. models only existed for a few languages. These days, this distinction is not so true: game semantics allows fully abstract den. sem. models for languages with many, if not all, of the features that used to defeat these models before, while Andrew Pitts' work on operational semantics shows us how to construct op. sem. in a way that facilitates proofs of the kinds of properties that used to be difficult. Last of all, the work of Reus, Streicher, and Hoffmann shows how it is possible to give abstract machine interpretations from which it is easy to read off fully abstract den. sem. models.

Thanks for all the answers

Now I need to think about them ;-)

A couple of other nice things

A couple of other nice things about compositionality. Program analyses based on abstract interpretation rely on compositionality to enable local as opposed to global proofs of the correctness of the analysis. Also, deriving a compiler by staging an interpreter (in the style of Wand or Danvy) relies on a compositional interpreter (somewhere along the line).

By the way, an interpreter in Haskell won't automatically be compositional, of course. This one isn't:

import Data.Maybe (fromJust)   -- for the environment lookup below

data Term = Var String | Lam String Term | App Term Term

type Env = [(String, Value)]

-- A closure stores the bound variable, the body as *syntax*, and the
-- environment it was built in.
data Value = Closure String Term Env

eval :: Term -> Env -> Value
eval (Var x)   env = fromJust (lookup x env)
eval (Lam x m) env = Closure x m env
-- m' below is the body pulled out of the closure produced by eval m,
-- not a subterm of (App m n).
eval (App m n) env = let Closure x m' env' = eval m env in
                        eval m' ((x, eval n env) : env')
because the rule for App recurs on a term that is not a subterm of its input; but this one is:
-- Now a closure stores the *denotation* of the body (a function
-- Env -> Value) instead of its syntax; Term, Env, and the import
-- are as above.
data Value = Closure String (Env -> Value) Env

eval :: Term -> Env -> Value
eval (Var x)   env = fromJust (lookup x env)
eval (Lam x m) env = Closure x (eval m) env
-- Here eval recurses only on its own subterms m and n; the body's
-- meaning f is applied directly.
eval (App m n) env = let Closure x f env' = eval m env in
                        f ((x, eval n env) : env')
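
For what it's worth, both versions can be exercised the same way; here's a purely illustrative test of my own, applying the identity function to another lambda:

-- ((\x. x) (\y. y)) should yield the closure for \y. y in both versions.
main :: IO ()
main = case eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))) [] of
         Closure x _ _ -> putStrLn ("Evaluated to a closure binding " ++ x)

In the second version the App clause only ever calls eval on its own subterms; the meaning of the body is reused through the Haskell function stored in the closure, which is the kind of staging point mentioned above.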