Pure Pattern Calculus

I came across the Pure Pattern Calculus a little while ago, and it seems to provide a simple yet promising synthesis of many diverse language features. I haven't seen it discussed here, so perhaps it will be of interest:

Abstract. The pure pattern calculus generalises the pure lambda-calculus by basing computation on pattern-matching instead of beta-reduction. The simplicity and power of the calculus derive from allowing any term to be a pattern. As well as supporting a uniform approach to functions, it supports a uniform approach to data structures which underpins two new forms of polymorphism. Path polymorphism supports searches or queries along all paths through an arbitrary data structure. Pattern polymorphism supports the dynamic creation and evaluation of patterns, so that queries can be customised in reaction to new information about the structures to be encountered. In combination, these features provide a natural account of tasks such as programming with XML paths.
As the variables used in matching can now be eliminated by reduction it is necessary to separate them from the binding variables used to control scope. Then standard techniques suffice to ensure that reduction progresses and to establish confluence of reduction.

Bondi is their experimental programming language based on the pure pattern calculus.


Should it be exposed as a programming model?

It seems to me that the way even constructors can be captured by pattern variables makes it hard to keep things well-encapsulated. For example, if one implements lists with binary trees, one cannot expose multiple binary constructor/destructor pairs (e.g. cons and splitting in halves). To a certain extent, that can be alleviated by overloading on arity. I suppose it could also be "solved" by first making explicit which constructors to expose by default, via a casting-like operation.

EDIT: I was also wondering how the definition of generic equality works in the presence of a wildcard constant. It's defined as a constant, not as a variable, so its behaviour can't be fixed by making the set of binding variables empty.

On another, more pragmatic note, it seems to me that making the behaviour of patterns dependent on data that can only be known at runtime (the binding variables of patterns, when the latter are passed around) will make the language hard to compile well, not to mention the way every datum must be a pattern. At the very least, binding variables must be used linearly. Being able to receive arbitrary patterns means that one either compiles data to patterns as needed, or makes every datum a pattern, so that constructors build both data and patterns in parallel. I don't think that sounds too bad, except if you want to support multiple constructors.

pure pattern calculus

Hi everyone, I'm new to this forum and have just read the post.

1) The pattern calculus (PC) is good for handling multiple representations of data structures. For example, if the list cons x xs is given by the binary tree node (leaf x) xs then one can write a pattern-matching function of the form

next ==
leaf \x --> x
| cons \x \y --> x
| node \x \y --> next x

(where \z indicates that z is bound). The pattern for the cons case reduces to

| node (leaf \x) \y --> x
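For readers more comfortable with a conventional functional language, here is a rough Haskell analogue (names and encoding are mine, not from bondi; Haskell's patterns are not first-class, so this only mirrors the fully reduced cases, including the reduced cons case above):

```haskell
-- Toy encoding: lists represented as binary trees, with cons as sugar.
data Tree a = Leaf a | Node (Tree a) (Tree a) deriving Show

cons :: a -> Tree a -> Tree a
cons x xs = Node (Leaf x) xs

-- next returns the leftmost element. The second clause is the
-- reduced form of the cons case: node (leaf \x) \y --> x.
next :: Tree a -> a
next (Leaf x)          = x
next (Node (Leaf x) _) = x
next (Node l _)        = next l
```

The difference, of course, is that in the pattern calculus the cons case is written against the derived constructor and reduces to the tree pattern, whereas here the reduction has to be done by hand.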

2) Generic equality on wildcards is no different from that on any other value:

equal _ reduces to

_ --> true
| \y --> false

so that the wildcard _ is equal to anything. Again, equal (leaf _) reduces to

leaf _ --> true
| \y --> false

which will match with any leaf.
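This behaviour can be mimicked in a toy Haskell model where terms carry an explicit wildcard constant (the names `Term`, `Wild`, `Con`, and `equalT` are illustrative, not from the bondi implementation):

```haskell
-- Toy term language: a wildcard constant plus constructed data.
data Term = Wild | Con String [Term] deriving Show

-- Generic equality in the spirit of the comment above: the
-- wildcard compares equal to anything; otherwise comparison
-- is structural on constructor names and arguments.
equalT :: Term -> Term -> Bool
equalT Wild _ = True
equalT _ Wild = True
equalT (Con c ps) (Con d qs) =
  c == d && length ps == length qs && and (zipWith equalT ps qs)
```

So `equalT (Con "leaf" [Wild]) t` is true for any leaf `t`, matching the `equal (leaf _)` example.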

3) Compilation model. Implementation work is not yet sufficiently developed to give complete answers on this point, but if the reduction rules stay simple then a good implementation should be achievable.