
How to teach Functional Programming... in Flash???

A question from the "You can do my homework!" department.

Since I am an avid lambda reader, and given the recent thread on why people don't use functional programming, I thought I might ask the lambda crowd for assistance with a particular problem of mine.

These days I am co-teaching a Flash course. The language used in Flash is ActionScript, which is a JavaScript derivative, which is in turn a Lisp derivative.

Therefore, Flash might be an excellent aid in teaching functional programming.

However... at the moment my imagination falls short when it comes to interesting examples one could use in such a course. I thought about a calculator to showcase closures, or a functional spreadsheet implementation in Flash, but both seem boring to me.
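For what it's worth, here is a minimal sketch of the closure idea behind the calculator suggestion, written in TypeScript since its syntax is close to ActionScript's; the function names are invented for illustration:

```typescript
// A binary operation on numbers.
type BinOp = (a: number, b: number) => number;

// makeOperatorKey returns a handler that closes over a pending operand
// and an operator -- the state a calculator key needs to remember
// between button presses.
function makeOperatorKey(op: BinOp, pending: number): (next: number) => number {
  return (next: number) => op(pending, next); // `op` and `pending` are captured by the closure
}

const plusFive = makeOperatorKey((a, b) => a + b, 5);
console.log(plusFive(3)); // 8 -- the closure remembered both the operator and the 5
```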

So, how would one teach functional programming in Flash, and more to the point: What kind of examples would one use?

Cheers,
Marco

Lowering: A Static Optimization Technique for Transparent Functional Reactivity

Lowering: A Static Optimization Technique for Transparent Functional Reactivity, Kimberley Burchett, Gregory H. Cooper, and Shriram Krishnamurthi. PEPM 2007.

Functional Reactive Programming (FRP) extends traditional functional programming with dataflow evaluation, making it possible to write interactive programs in a declarative style. An FRP language creates a dynamic graph of data dependencies and reacts to changes by propagating updates through the graph. In a transparent FRP language, the primitive operators are implicitly lifted, so they construct graph nodes when they are applied to time-varying values. This model has some attractive properties, but it tends to produce a large graph that is costly to maintain. In this paper, we develop a transformation we call lowering, which improves performance by reducing the size of the graph. We present a static analysis that guides the sound application of this optimization, and we present benchmark results that demonstrate dramatic improvements in both speed and memory usage for real programs.
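To make the lifting and lowering ideas concrete: the paper's setting is FrTime, a transparent FRP extension of Scheme, but the mechanism can be sketched in TypeScript with invented names. Applying a lifted function to a time-varying value builds a dependency-graph node; lowering fuses a chain of such applications into a single node computing the composed function:

```typescript
// A time-varying value: one node in the dataflow graph.
class Signal<T> {
  private listeners: Array<() => void> = [];
  constructor(private value: T) {}
  read(): T { return this.value; }
  write(v: T): void { this.value = v; this.listeners.forEach(f => f()); }
  onChange(f: () => void): void { this.listeners.push(f); }
}

// Lifting: applying a plain function to a signal builds a new graph
// node that recomputes whenever its input changes.
function lift<A, B>(f: (a: A) => B) {
  return (input: Signal<A>): Signal<B> => {
    const out = new Signal(f(input.read()));
    input.onChange(() => out.write(f(input.read())));
    return out;
  };
}

// Naively, (x + 1) * 2 over a signal builds TWO graph nodes:
const x = new Signal(10);
const naive = lift((n: number) => n * 2)(lift((n: number) => n + 1)(x));

// Lowering fuses the chain into ONE node computing the composition:
const lowered = lift((n: number) => (n + 1) * 2)(x);

x.write(20);
console.log(naive.read(), lowered.read()); // 42 42, but `lowered` kept the graph smaller
```

The static analysis in the paper decides when such fusion is sound, which matters because in a transparent FRP language any subexpression might turn out to be time-varying at run time.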

Whenever I read about compiler optimizations, I try (with mixed success) to relate them to transformations in the lambda calculus. I haven't managed to figure out what's going on with the dip construct the authors propose, but I would guess that the place to look is the proof theory of the necessity operator in modal logic: dataflow programming can be seen as a kind of stream programming, streams form a comonad over the lambda calculus, and comonads give semantics to the modal necessity (box) operator.
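As a rough illustration of the comonad claim (sticking with TypeScript, with invented names): an infinite stream supports `extract`, which reads the current value, and `extend`, which applies a whole-stream function at every suffix, and these are exactly the comonad operations:

```typescript
// An infinite stream: a current value plus a lazily computed rest.
interface Stream<A> {
  head: A;
  tail: () => Stream<A>;
}

// extract :: Stream a -> a  (the counit of the comonad)
const extract = <A>(s: Stream<A>): A => s.head;

// extend :: (Stream a -> b) -> Stream a -> Stream b
// Applies f to the stream and to every suffix of it, so each output
// sample may depend on the whole "future" of the input -- the shape
// of a dataflow computation.
const extend = <A, B>(f: (s: Stream<A>) => B) =>
  function go(s: Stream<A>): Stream<B> {
    return { head: f(s), tail: () => go(s.tail()) };
  };

// Example: the stream 0, 1, 2, ... and a sliding sum of two samples.
const from = (n: number): Stream<number> => ({ head: n, tail: () => from(n + 1) });
const pairSum = extend((s: Stream<number>) => s.head + s.tail().head);
console.log(extract(pairSum(from(0)))); // 1 (= 0 + 1)
```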