Simple Question - Beta reduction and pattern matching (compile time, static)

I've Googled around and not found any obvious candidate papers covering what I know must be a common situation, if only for "destructuring" let-bindings. I'm looking for a paper that mixes theory with practical information (that's what I relate to best).

The simplest example is tupled functions, where we match against an irrefutable tuple pattern. Here is an inlined tupled "add" function, where we want to beta-reduce the function definition applied to a tuple-constructed argument (presume all arguments are normalized to some definition of values - constants, variable references, etc. - and that all variables have unique names):

add (a, 4)
=> (fn (x,y) -> x + y) (a, 4)
=> (a + 4)

A key requirement is to completely eliminate, at compile time, both 1) the constructor application and 2) any matching operations. This is true of the "fancier" example below as well.
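To make the intent concrete, here is a minimal sketch of the tuple case (Haskell standing in for the pseudocode; the AST and all names here - `Term`, `LamTuple`, `reduce` - are hypothetical, not from any particular paper):

```haskell
-- Hypothetical toy AST; Haskell stands in for the pseudocode above.
data Term
  = Var String
  | Lit Int
  | Tuple [Term]
  | LamTuple [String] Term   -- fn (x, y, ...) -> body, irrefutable pattern
  | App Term Term
  | Add Term Term
  deriving (Eq, Show)

-- Parallel substitution; no capture handling needed since, as assumed
-- above, all variables have unique names.
subst :: [(String, Term)] -> Term -> Term
subst env t = case t of
  Var x         -> case lookup x env of
                     Just v  -> v
                     Nothing -> Var x
  Lit n         -> Lit n
  Tuple ts      -> Tuple (map (subst env) ts)
  LamTuple ps b -> LamTuple ps (subst env b)
  App f a       -> App (subst env f) (subst env a)
  Add l r       -> Add (subst env l) (subst env r)

-- The key rule: a tuple-pattern lambda applied to a syntactic tuple of
-- matching arity reduces in one step; neither the constructor nor any
-- match survives to run time.
reduce :: Term -> Term
reduce (App (LamTuple ps b) (Tuple as))
  | length ps == length as = subst (zip ps as) b
reduce t = t

-- add (a, 4) after inlining:
example :: Term
example = App (LamTuple ["x", "y"] (Add (Var "x") (Var "y")))
              (Tuple [Var "a", Lit 4])
-- reduce example yields Add (Var "a") (Lit 4), i.e. (a + 4)
```

The point of the guard is that the rule only fires on a *syntactic* tuple of the right arity; everything else is left alone.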

A fancier algorithm would statically select a single case from a constructor case-defined function. Here we have a "find" function over three list representations, each with a unique constructor. [Sorry for the convoluted example.]

find x (ArrayList [1,2,3,4,5])
=> (fn e (ArrayList repr1) -> AL-def
| e (BinTreeList repr2) -> BL-def
| e (RBTreeList repr3) -> RB-def) x (ArrayList [1,2,3,4,5])
=> AL-def[e -> x, repr1 -> [1,2,3,4,5]]
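Continuing the same kind of sketch for the fancier example (again hypothetical names; `Var "repr1"` etc. stand in for the AL-def / BL-def / RB-def bodies):

```haskell
-- Hypothetical toy AST for constructor-case selection.
data Con = ArrayListC | BinTreeListC | RBTreeListC deriving (Eq, Show)

data Term
  = Var String
  | ListLit [Int]
  | ConApp Con Term                  -- constructor applied to its payload
  | Case Term [(Con, String, Term)]  -- clauses: (constructor, binder, body)
  deriving (Eq, Show)

subst :: String -> Term -> Term -> Term
subst x v t = case t of
  Var y | y == x    -> v
        | otherwise -> Var y
  ListLit ns -> ListLit ns
  ConApp c p -> ConApp c (subst x v p)
  Case s cls -> Case (subst x v s)
                     [ (c, y, if y == x then b else subst x v b)
                     | (c, y, b) <- cls ]

-- When the scrutinee is a syntactic constructor application, the whole
-- dispatch collapses at compile time to the single matching clause body.
reduceCase :: Term -> Term
reduceCase t@(Case (ConApp c payload) clauses) =
  case [ subst x payload body | (c', x, body) <- clauses, c' == c ] of
    (r : _) -> r
    []      -> t   -- no clause for this constructor; leave the term alone
reduceCase t = t

-- find x (ArrayList [1,2,3,4,5]), with clause bodies elided to their binders:
example :: Term
example = Case (ConApp ArrayListC (ListLit [1, 2, 3, 4, 5]))
  [ (ArrayListC,   "repr1", Var "repr1")
  , (BinTreeListC, "repr2", Var "repr2")
  , (RBTreeListC,  "repr3", Var "repr3") ]
-- reduceCase example yields ListLit [1,2,3,4,5]: the ArrayList clause
-- body with repr1 already substituted, as in AL-def[repr1 -> [1,2,3,4,5]].
```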

One static analysis issue I'm concerned with is whether an ANF-style normalization of function applications might "hide" constructor-applied function arguments behind let-bound variables, possibly requiring some more complex data flow analysis to achieve the desired "beta reduction across patterns and constructors".
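To illustrate the concern (and one plausible remedy) concretely, here is a hypothetical sketch: after ANF normalization the tuple is let-bound, so the application site only sees a variable, and the purely syntactic beta rule no longer fires. Threading a small environment of variables known to be bound to constructor applications - a tiny flow analysis - restores the reduction. This is a one-step reducer, not a full pass, and every name in it is made up:

```haskell
-- Hypothetical ANF-flavored AST: compound arguments are let-bound.
data Term
  = Var String
  | Lit Int
  | Tuple [Term]
  | LamTuple [String] Term
  | App Term Term
  | Add Term Term
  | Let String Term Term
  deriving (Eq, Show)

-- Thread an environment of variables known to be bound to tuple
-- constructions, so the beta rule can still fire even though the
-- application site only sees a variable.
reduceEnv :: [(String, Term)] -> Term -> Term
reduceEnv env t = case t of
  Let x rhs body ->
    let env' = case rhs of
                 Tuple _ -> (x, rhs) : env   -- remember the constructor
                 _       -> env
    in Let x rhs (reduceEnv env' body)
  App (LamTuple ps b) arg
    | Just (Tuple as) <- resolve arg
    , length ps == length as ->
        -- stay in ANF: bind each component instead of substituting
        foldr (\(p, a) acc -> Let p a acc) b (zip ps as)
  _ -> t
  where
    resolve (Var x)     = lookup x env
    resolve v@(Tuple _) = Just v
    resolve _           = Nothing

-- let t = (a, 4) in (fn (x, y) -> x + y) t
example :: Term
example = Let "t" (Tuple [Var "a", Lit 4])
              (App (LamTuple ["x", "y"] (Add (Var "x") (Var "y")))
                   (Var "t"))

-- The now-dead binding of t is left for a later dead-code pass:
expected :: Term
expected = Let "t" (Tuple [Var "a", Lit 4])
             (Let "x" (Var "a")
               (Let "y" (Lit 4)
                 (Add (Var "x") (Var "y"))))
```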

Any and all pointers to suitable beta-reduction algorithms of this kind would be greatly appreciated.


- Scott



I believe that Andrew Kennedy's "Compiling with Continuations, Continued" (CWCC) paper treats at least your simple example, in both ANF and CPS intermediate forms. See the rewrite rules for pairs and case expressions.

Thanks for the reference.

Thanks for the reference. Going through the example monadic reductions covering the elimination of a pair constructor, the fun part seems to hinge on "val" marking "trivial computations."

While it makes intuitive sense, I wonder what the class of "trivial computations" might be (maybe it's arbitrary?) and, more importantly, what the "ontological status" of these "val" computations is. They seem to hover between a let-bound (here, constructor) application and the same application suspended or pending, possibly awaiting reduction.
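My (possibly wrong) reading of that split, as a sketch: values are effect-free and terminating by construction, so let-binding one is pure renaming and always reduces away. The names below are hypothetical, not Kennedy's:

```haskell
-- Hypothetical val/computation split in a monadic IL.
data Val
  = VVar String
  | VInt Int
  | VPair Val Val      -- pair construction is a value: no work, no effects
  deriving (Eq, Show)

data Comp
  = Return Val         -- the "trivial" computations
  | LetC String Comp Comp
  | Fst Val
  | Snd Val
  deriving (Eq, Show)

substV :: String -> Val -> Val -> Val
substV x v (VVar y) | y == x = v
substV _ _ (VVar y)          = VVar y
substV _ _ (VInt n)          = VInt n
substV x v (VPair a b)       = VPair (substV x v a) (substV x v b)

substC :: String -> Val -> Comp -> Comp
substC x v (Return w)   = Return (substV x v w)
substC x v (LetC y c k) = LetC y (substC x v c)
                                 (if y == x then k else substC x v k)
substC x v (Fst w)      = Fst (substV x v w)
substC x v (Snd w)      = Snd (substV x v w)

-- Binding a Return is pure renaming (no effects or divergence to lose),
-- so it always reduces away; projections on a known pair do too.
step :: Comp -> Comp
step (LetC x (Return v) k) = substC x v k
step (Fst (VPair a _))     = Return a
step (Snd (VPair _ b))     = Return b
step c                     = c

-- let p = (a, 4) in fst p
ex :: Comp
ex = LetC "p" (Return (VPair (VVar "a") (VInt 4))) (Fst (VVar "p"))
-- step ex        = Fst (VPair (VVar "a") (VInt 4))
-- step (step ex) = Return (VVar "a")  -- the pair is gone at compile time
```

On that reading, a "val" is exactly the kind of let-bound application that is always safe to inline or discard, which would explain its in-between status.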

- Scott

I don't see the problem

What specifically are you after? Out-of-order partial evaluation of redexes at compile time is a standard 'trick' for optimizing programs. If anything, I'd guess most books on functional compilers, like the one by Simon Peyton Jones, have an entry on it.

The ANF form also shouldn't matter: a redex remains a redex, even in ANF.

Thanks, I'll look at the SPJ

Thanks, I'll look at the SPJ book as well. Beyond the well-documented beta/eta reductions, I'm a neophyte. A large quantity of information is spread around in different contexts, so I pick up various bits of wisdom and methodology here and there. Even having your terminology, "out-of-order partial evaluation of redexes," helps me narrow the search space. Thanks again. - Scott


Redexes are reducible subexpressions in a combinator language or in the lambda calculus. You just need to write down the rules of beta-reduction (including ANF forms, if you have those) and see which redexes are safe to reduce under a given rewrite scheme. (The rules for an impure language differ from those for pure or lazy ones.)
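For instance, in a call-by-value impure setting, a sketch of the safety condition might restrict beta to value arguments, since substituting a non-value can duplicate or drop its effects (hypothetical names again, with unique variable names assumed as in the original post):

```haskell
-- Hypothetical toy AST; Effect stands in for any effectful expression.
data Term
  = Var String
  | Lit Int
  | Lam String Term
  | App Term Term
  | Effect String Term   -- e.g. Effect "print" (Lit 1)
  deriving (Eq, Show)

isValue :: Term -> Bool
isValue (Var _)   = True
isValue (Lit _)   = True
isValue (Lam _ _) = True
isValue _         = False

subst :: String -> Term -> Term -> Term
subst x v t = case t of
  Var y | y == x    -> v
        | otherwise -> Var y
  Lit n      -> Lit n
  Lam y b    -> if y == x then Lam y b else Lam y (subst x v b)
  App f a    -> App (subst x v f) (subst x v a)
  Effect s k -> Effect s (subst x v k)

-- Only value arguments are safe to substitute here: substituting a
-- non-value could duplicate its effect (if the variable occurs twice)
-- or drop it (if it occurs zero times).
betaSafe :: Term -> Term
betaSafe t@(App (Lam x b) a)
  | isValue a = subst x a b
  | otherwise = t
betaSafe t = t

-- betaSafe (App (Lam "x" (Var "x")) (Lit 3)) == Lit 3
-- betaSafe (App (Lam "x" (Var "x")) (Effect "print" (Lit 1))) is unchanged
```

In a pure or lazy setting the guard can be relaxed, which is exactly the difference between the rule sets mentioned above.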

That's about it; it isn't too hard once you get the notation down.