simpler representation of operational semantics?

While trying to learn the formal notation used in TAPL (and many other papers), I found this paper ... Syntactic Theories of Sequential Control and State by Matthias Felleisen. I didn't find it mentioned in any previous discussion on LTU (even though it is from 1992). As far as I can tell, this paper expresses the same idea as the TAPL notation in a much more concise manner. For example (horrible 'summary' follows), in Pierce's book (page 72), some rules are listed as follows:

Syntax:
t ::= x | Lx.t | t t
v ::= Lx.t

For t -> t'

if t1 -> t1'  then  t1 t2 -> t1' t2    [E-APP1]
if t2 -> t2'  then  v1 t2 -> v1 t2'    [E-APP2]
(Lx.t12) v2 -> [x -> v2]t12            [E-APPABS]
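(If it helps to see these rules as runnable code, here is a rough OCaml sketch of the same three rules as a step function. The datatype names and the naive subst helper are my own; a real implementation would need capture-avoiding substitution, but this is fine for closed terms.)

(* TAPL's three rules as a one-step function; None means no rule
   applies, i.e. the term is a value or is stuck. *)
type term =
  | Var of string
  | Lam of string * term            (* Lx.t *)
  | App of term * term              (* t t  *)

let is_value = function Lam _ -> true | _ -> false

(* subst x v t = [x -> v]t -- naive, fine for closed values *)
let rec subst x v = function
  | Var y -> if y = x then v else Var y
  | Lam (y, b) -> if y = x then Lam (y, b) else Lam (y, subst x v b)
  | App (t1, t2) -> App (subst x v t1, subst x v t2)

let rec step = function
  | App (Lam (x, t12), v2) when is_value v2 ->
      Some (subst x v2 t12)                          (* E-AppAbs *)
  | App (t1, t2) when not (is_value t1) ->
      (match step t1 with
       | Some t1' -> Some (App (t1', t2))            (* E-App1 *)
       | None -> None)
  | App (v1, t2) ->
      (match step t2 with
       | Some t2' -> Some (App (v1, t2'))            (* E-App2 *)
       | None -> None)
  | _ -> None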

According to Felleisen's paper, these rules could be written as:

if t -> t'  then  C[t] -> C[t']

If we extend the syntax with the following:
C ::= [] | (C t) | (v C)

In other words, we don't need such verbose rules, because 'C' is a context with a 'hole' that marks where the next reduction may take place.

Ok, this is obviously not a summary of the paper but a question. Is my understanding correct? Is it of any value? I would think so, if one is trying to build 'executable' operational semantics. No?
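To make the 'executable' point concrete, here is a rough sketch (OCaml again, reusing the term type, is_value, and subst from the sketch above; the names are mine) of how the context formulation runs: decompose a term into an evaluation context and a redex, contract the redex with the single real rule, then plug the result back into the context.

type context =
  | Hole
  | AppL of context * term          (* (C t) : reduce the function position first *)
  | AppR of term * context          (* (v C) : then the argument position *)

(* plug c t = C[t] *)
let rec plug c t = match c with
  | Hole -> t
  | AppL (c', t2) -> App (plug c' t, t2)
  | AppR (v1, c') -> App (v1, plug c' t)

(* Split a term into an evaluation context and the redex in its hole. *)
let rec decompose t = match t with
  | App (Lam _, v) when is_value v -> Some (Hole, t)
  | App (t1, t2) when not (is_value t1) ->
      (match decompose t1 with
       | Some (c, r) -> Some (AppL (c, t2), r)
       | None -> None)
  | App (v1, t2) ->
      (match decompose t2 with
       | Some (c, r) -> Some (AppR (v1, c), r)
       | None -> None)
  | _ -> None

(* The one real reduction rule, applied wherever the context grammar allows. *)
let step t = match decompose t with
  | Some (c, App (Lam (x, b), v)) -> Some (plug c (subst x v b))
  | _ -> None

All of the E-App1/E-App2 'search' work now lives in decompose and the context grammar; the only line that actually changes a term is the last one.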

By the way, I found this article through another paper on Operational Semantics for Scheme. That paper (I haven't attempted reading it yet) has further discussion of Felleisen's paper.


Unless I'm Mistaken...

...this is essentially an application of an explicit substitution calculus to the question of operational semantics. That is, I think the two are equivalent, for some such calculus. But I'm not positive: the PLT-Redex system for PLT Scheme supports that paper, yet it also has a distinct "explicit substitution" example, so I think the paper's mechanism must somehow be more general than "explicit substitution."

In any case, I can't recommend DrScheme and PLT-Redex highly enough for exploring programming-language semantics.

On edit: I found this interesting paragraph in Building Interpreters With Rewriting Strategies:

There are different styles of operational semantics. In natural semantics supported by the CENTAUR system an expression is directly related to its value, possibly through the use of recursive invocations of the evaluation relation. In rewriting semantics, supported by many systems including ASF+SDF and ELAN, an expression is normalized by successive applications of reduction rules, which define a, generally small, modification of an expression. In the approach of syntactic theories, supported for example by the SL language, a set of reduction rules is extended with evaluation contexts that limit the positions in which reduction rules can be applied. Thus, evaluation contexts overcome a limitation of pure rewriting based semantics. Exhaustive application of reduction rules is usually not an adequate description of the semantics of a language. Therefore, the gap between evaluation strategy and rewriting strategy (provided by the rewriting engine) is usually filled by explicitly specifying an evaluation function instead of using pure rewrite rules.

Context-sensitive reduction semantics

Your Felleisen semantics isn't quite right; you also have to include the rule for actually applying an abstraction:

C[(Lx. e) v] → C[e[x := v]]

But otherwise you've got the right idea: you split the rules into "search" rules that don't do anything but find the place where evaluation really ought to take place (Pierce's E-App1 and E-App2) and rules that actually take an evaluation step (Pierce's E-AppAbs); the former are factored into the definition of evaluation contexts and the latter become reduction rules.

The style Pierce uses is sometimes called "Plotkin-style small-step operational semantics" and Felleisen's is called "Felleisen-style context-sensitive reduction semantics." The advantage of the Felleisen style is that it's slightly more compact and highlights the actual evaluation rules as opposed to the search rules; also (somewhat more importantly) it gives a name to the context, which matters when, for instance, giving semantics to call/cc. The Wright and Felleisen paper "A Syntactic Approach to Type Soundness" is probably a good place to start reading.
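To give a flavour of that last point: once contexts have names, call/cc fits in a pair of rules of roughly this shape (details vary between presentations; 'abort' is the operator that throws away the surrounding context):

C[(call/cc v)] -> C[(v (Lx. abort C[x]))]
C[(abort t)] -> t

In the Plotkin style there is no name for "the rest of the program," so you can't write this nearly as directly.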

Anyway, a side question: how did you find that tech report? It's my master's thesis; my advisor and I put it in tech report form just because we wanted to cite it in another paper and we never did anything to publicize it and yet somehow several people seem to have found it, so I'm curious as to what's drawing people to it. (Incidentally, the first half of that paper was published in RTA 2004 and the second half is being published in the 2005 Scheme Workshop. Those papers are probably better things to look at for most purposes.)

google search!

I found your thesis (small world!) after searching for something like "operational semantics for scheme." I don't recall why I was searching for that; most likely because, since I started learning about PLT, I've been so overwhelmed that I'm reduced to taking random PLT-related words and putting them into Google :)

Most of the papers I have seen seem to use Plotkin-style operational semantics... is there a reason the Felleisen style didn't catch on?

I'll take a look at the other paper you mentioned as well. Thanks.

Felleisen style semantics

IIRC, the Felleisen style of operational semantics is sometimes used in the area of process calculi. Unfortunately, I can't find a good reference at the moment.

Mikael 'Zayenz' Lagerkvist

I prefer...

...using explicit congruence rules rather than evaluation contexts.

The reason is that when doing a soundness proof, you have to first define what "filling a hole" actually means, and then prove some lemmas to make it work with typing derivations. None of this is terribly hard, but it's tedious. In a paper proof, you can elide this work, but when formalizing in something like Twelf, you can't skip it.

With explicit congruence rules, you have a less elegant presentation of the evaluation order, but in return the soundness proof is simpler -- all of those cases are just trivial applications of the induction hypothesis.

Fundamentally, I think it's a matter of taste. When you don't have call/cc in the language, I think it's somewhat better style not to use evaluation contexts.

or more generally

when having explicitly-named contexts isn't buying you anything. (Exceptions are another construct that's easier to model with evaluation contexts than without, for instance.)

Derivatives of Regular Types = The Type of One Hole Contexts

There is a vague association in my mind to some interesting work by Conor McBride, The Derivative of a Regular Type is its Type of One-Hole Contexts.

I'm sure I've heard Graham Hutton mention a connection between small-step and big-step semantics via the 'zipper' functions discussed in the paper; they are the functions that plug the contents back into a hole (a kind of anti-derivative).
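As a tiny illustration of the 'plug' idea (OCaml, names are mine): a one-hole context for binary trees records, at each step, which side the hole went down and keeps the other subtree (roughly what McBride's derivative construction gives you for this type), and the zipper's plug function fills the hole back in.

type tree = Leaf | Node of tree * tree

(* A one-hole context (zipper path) for tree. *)
type tree_ctx =
  | Top
  | GoLeft of tree_ctx * tree       (* hole is in the left child; right sibling kept *)
  | GoRight of tree * tree_ctx      (* hole is in the right child; left sibling kept *)

(* plug ctx t rebuilds the whole tree with t in the hole. *)
let rec plug ctx t = match ctx with
  | Top -> t
  | GoLeft (c, r) -> plug c (Node (t, r))
  | GoRight (l, c) -> plug c (Node (l, t))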

I'm not sure a semantics in the style of calculus could be called simpler |:^) but it's something definitely worth exploring.

action semantics

So where do action semantics fit into the picture? From what I understand, the purpose of action semantics is to move toward using an abstract semantic notation to actually build computer-executable systems (rather than using a notation only for descriptive purposes).


I've read the claim (can't remember where) that just as context-free grammars (BNF) are often used to produce actual parsers, action semantics can be used to generate actual compilers (which produce parse trees, do static type checking, and implement relevant runtime services (garbage collection?))... any thoughts on that?