
Simple Question - Beta reduction and pattern matching (compile time, static)

I've Googled around and not found any obvious candidate papers covering what I know must be a common situation, if only for "destructuring" let-bindings. I'm looking for a paper that mixes theory with practical information (it's what I relate to best).

The simplest example is tupled functions, where we match against an irrefutable tuple pattern. Here is an inlined tupled "add" function where we want to beta-reduce the function definition with a tuple-constructed argument (presume all arguments are normalized to some definition of values - constants, variable references, etc. - where all variables have unique names).

add (a, 4)
=> (fn (x,y) -> x + y) (a, 4)
=> (a + 4)
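As a minimal sketch of that reduction step (a hypothetical AST of my own, not taken from any particular paper): an application of a tuple-pattern lambda to a literal tuple is rewritten by substituting the components directly into the body, so neither the tuple constructor nor any match operation survives.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Tup:
    items: tuple

@dataclass(frozen=True)
class Lam:
    params: tuple   # tuple of variable names: the irrefutable tuple pattern
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def subst(term, env):
    """Capture-free substitution (all names assumed unique)."""
    if isinstance(term, Var):
        return env.get(term.name, term)
    if isinstance(term, Add):
        return Add(subst(term.left, env), subst(term.right, env))
    return term

def beta_tuple(term):
    """(fn (x, y) -> body) (a, b)  =>  body[x -> a, y -> b]"""
    if (isinstance(term, App) and isinstance(term.fn, Lam)
            and isinstance(term.arg, Tup)):
        env = dict(zip(term.fn.params, term.arg.items))
        return subst(term.fn.body, env)
    return term

# add (a, 4) with add inlined:
redex = App(Lam(("x", "y"), Add(Var("x"), Var("y"))),
            Tup((Var("a"), Const(4))))
print(beta_tuple(redex))  # Add(left=Var(name='a'), right=Const(value=4))
```

The result term contains no Tup and no pattern match, which is exactly the property I'm after.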

A key requirement is that both 1) the constructor application and 2) any matching operations be eliminated completely at compile time. This is true of the "fancier" example below as well.

A fancier algorithm would statically select a single case from a constructor-case-defined function. Here we have a "find" function over three list representations, each with a unique constructor. [Sorry for the convoluted example.]

find x (ArrayList [1,2,3,4,5])
=> (fn e (ArrayList repr1) -> AL-def
| e (BinTreeList repr2) -> BL-def
| e (RBTreeList repr3) -> RB-def) x (ArrayList [1,2,3,4,5])
=> AL-def[e -> x, repr1 -> [1,2,3,4,5]]
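The selection step above can be sketched as follows (again a hypothetical representation of my own): when the argument's constructor is known at compile time, the unique clause with the matching tag is chosen and its pattern variables are bound by substitution, so no tag test reaches run time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Con:            # a constructor application, e.g. ArrayList [1,2,3,4,5]
    tag: str
    payload: object

@dataclass(frozen=True)
class Clause:         # clause `e (Tag repr) -> rhs`
    tag: str
    elem_var: str
    repr_var: str
    rhs: str          # stand-in for the clause body (AL-def, BL-def, ...)

def select_clause(clauses, elem_arg, con_arg):
    """Pick the unique clause whose constructor tag matches, returning its
    body together with the substitution [e -> elem_arg, repr -> payload]."""
    for c in clauses:
        if c.tag == con_arg.tag:
            return c.rhs, {c.elem_var: elem_arg, c.repr_var: con_arg.payload}
    raise TypeError("no clause matches %r" % con_arg.tag)

find_clauses = [
    Clause("ArrayList",   "e", "repr1", "AL-def"),
    Clause("BinTreeList", "e", "repr2", "BL-def"),
    Clause("RBTreeList",  "e", "repr3", "RB-def"),
]
body, sub = select_clause(find_clauses, "x", Con("ArrayList", [1, 2, 3, 4, 5]))
print(body, sub)  # AL-def {'e': 'x', 'repr1': [1, 2, 3, 4, 5]}
```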

One static-analysis issue I'm concerned with is whether an ANF-style normalization of function application might "hide" constructor-applied function arguments, possibly requiring some more complex data-flow analysis to achieve the desired "beta reduction across patterns and constructors".
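To make the worry concrete, a hedged sketch (not a proposed solution): after A-normalization the constructor is let-bound, so the application site sees only a variable. Chasing the variable through its let-binding recovers the constructor in the straight-line case; joins, recursion, or mutation would need real data-flow analysis.

```python
def resolve(term, lets):
    """Chase a variable (a string here) through let-bindings to the term
    that defines it; non-variables and unbound names pass through."""
    seen = set()
    while isinstance(term, str) and term in lets and term not in seen:
        seen.add(term)           # guard against cyclic bindings
        term = lets[term]
    return term

# ANF form of `find x (ArrayList [1,2,3,4,5])`:
#   let t0 = ArrayList [1,2,3,4,5] in find x t0
lets = {"t0": ("ArrayList", [1, 2, 3, 4, 5])}
print(resolve("t0", lets))  # ('ArrayList', [1, 2, 3, 4, 5])
```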

Any and all pointers to suitable beta-reduction algorithms of this kind would be greatly appreciated.

Thanks!

- Scott

Can I express variable occurrence ranges in logic?

I wonder whether there is some way to express variable occurrence ranges in logic. I have the following query:

exists X,Y(p(X) & q(X,Y) & r(Y))

I can use quantifiers and mini-scoping to express where a variable first occurs, when I view the conjunction as right-associative (xfy):

exists X(p(X) & exists Y(q(X,Y) & r(Y))).

In the above I see that the use of Y starts after p(X). Similarly, I can use quantifiers and mini-scoping to express where a variable last occurs, when I view the conjunction as left-associative (yfx):

exists Y(exists X(p(X) & q(X,Y)) & r(Y)).

Now in the above I see that the variable X is last used in q(X,Y) and not afterwards.

But what I would like to have is a formalism that can express both pieces of information, the start and the end of a variable's use. Graphically it would express:

p(X)   &    q(X,Y)   &    r(Y)  
+------X---------+  
            +--------Y-------+
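One way I can make the obstacle concrete (my own observation): in ordinary first-order syntax, quantifier scopes are subformulas, so any two scopes are either nested or disjoint. The two ranges above overlap with neither containing the other, so each mini-scoped form can only capture one boundary:

```latex
% range(X) = \{p(X),\, q(X,Y)\}, \quad range(Y) = \{q(X,Y),\, r(Y)\}
% The ranges overlap without containment, which no scope tree can express:
\exists X\,\big(p(X) \land \exists Y\,(q(X,Y) \land r(Y))\big)
  \quad\text{(start of } Y \text{, but not end of } X\text{)}
\exists Y\,\big(\exists X\,(p(X) \land q(X,Y)) \land r(Y)\big)
  \quad\text{(end of } X \text{, but not start of } Y\text{)}
```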

Is this possible in logic? A logic that would come with semantics, substitution rules, etc.? Somehow I have the feeling it would violate the Frege principle. On the other hand, approaches such as a Continuation Passing Style formulation could perhaps solve the problem.

Best Regards

Google's Dart announced

A while back we learned about Google's thoughts on Javascript. Well, it seems Google's Dart Language is now live.

Dart is a new class-based programming language for creating structured web applications. Developed with the goals of simplicity, efficiency, and scalability, the Dart language combines powerful new language features with familiar language constructs into a clear, readable syntax.

The full specification (PDF)

A feature I find interesting is that you can add static types:

Dart programmers can optionally add static types to their code. Depending on programmer preference and stage of application development, the code can migrate from a simple, untyped experimental prototype to a complex, modular application with typing. Because types state programmer intent, less documentation is required to explain what is happening in the code, and type-checking tools can be used for debugging.
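As a rough analogue of that migration path (using Python's optional type hints rather than Dart itself; the function names are illustrative only): the untyped version runs as-is, and adding annotations later changes nothing at run time but lets a checker such as mypy flag misuse statically.

```python
# Untyped experimental prototype: nothing is checked.
def scale(xs, k):
    return [k * x for x in xs]

# Later, optional annotations state programmer intent; run-time behaviour
# is identical, but a type checker can now catch e.g. scale_typed("abc", 2).
def scale_typed(xs: list[int], k: int) -> list[int]:
    return [k * x for x in xs]

print(scale([1, 2, 3], 2))        # [2, 4, 6]
print(scale_typed([1, 2, 3], 2))  # [2, 4, 6]
```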

This will improve reliability and maintainability, I imagine, right?