I "got" imperative programming throught data binding

Macromedia Flex has a very powerful data-binding mechanism. When we started to use Flex we had heaps of code to manage events. We were also suspicious about mixing content and behaviour. What a mistake!
Progressively we've been using the declarative syntax of data binding, and the amount of code we write has shrunk enormously.
In fact we don't even think of it as code; we think of it as magic statements that just glue everything together :)


Data binding doesn't necessitate mixing content and behavior any more than a standard model-view-controller pattern implemented with callbacks would. The view is more naturally expressed, as is the controller. The model stays the same, or maybe even becomes tighter. Data binding just frees you from manually handling dependencies. More concretely, callbacks are useful, but most of the time we use them to manage simple dependencies by hand, so data binding is a language feature that handles this scenario for us in a natural, declarative way.
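To make the point concrete, here is a minimal sketch (all names are mine, not Flex's API) of what a binding layer does: underneath it is still callback wiring, but the binding call does the wiring for us instead of the programmer.

```python
# Hypothetical sketch: data binding as automated callback registration.

class Observable:
    """A mutable model field that notifies listeners on change."""
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for fn in self._listeners:
            fn(value)

    def subscribe(self, fn):
        self._listeners.append(fn)

def bind(source, update_view):
    """The 'magic statement': push the current value now and on every change."""
    update_view(source.get())
    source.subscribe(update_view)

# Usage: the view code no longer registers callbacks by hand.
label_text = []                  # stand-in for a view widget
name = Observable("world")
bind(name, label_text.append)    # declarative-looking glue
name.set("Flex")
# label_text is now ["world", "Flex"]
```

The callback is still there; the point is that the programmer writes one `bind` statement instead of the registration and initial-sync boilerplate.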

I'm still not sure how to feel about mutation and the handling of objects. Mutating bound values would seem to be bad style; the semantics of objects combined with mutation has been worked on, but I haven't read that work yet :)

I think the surfacing of reactive programming in commercial work is interesting. The original Haskell work (FranTk etc.) seems pretty cool, though I haven't gone beyond skimming a few papers, and one engineer helping the Flex team was influenced by another language, K. My personal bias is towards FrTime (in PLT Scheme). I'm curious about what'll happen with Microsoft's Sparkle; I haven't looked in that area recently, and only see press releases now and then. While I'm not close to the people involved, MS Research's past involvement in the Tk work suggests to me that something may be going on there.

- Leo


You might want to look at SuperGlue. It's a language based on FRP-like signals, but in the context of an object-oriented declarative paradigm.

So is FrTime

So, in a way, is FrTime. See our paper on linking to OO toolkits.

FRP internals

Although I've gone through the SuperGlue, FrTime, FRP, and Yampa papers, there are some basic things I haven't been able to understand. Could an API provide the functionality of reactive programming, or do FRP expressions need to be parsed and converted into a DAG of dependencies? Although FRP does away with threads, events, and callbacks, is it true that those things are merely abstracted away from the user of an FRP system, and that under the covers callbacks are still used? (I suppose they have to be.)

To put it simply: if I have an expression "print x+y" where both x and y are continuous values (so print is evaluated every time either x or y changes), what is this expression translated to by the compiler?
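One plausible answer, sketched below in Python (this is an assumption about the general technique, not any specific compiler's output): each time-varying value becomes a node in a dependency graph, and "print x+y" becomes a derived node whose recompute function fires whenever a dependency changes.

```python
# Hypothetical dependency-graph translation of "print x+y".

class Cell:
    """A leaf node: a time-varying input value."""
    def __init__(self, value=None):
        self.value = value
        self.dependents = []

    def set(self, value):
        self.value = value
        for d in self.dependents:
            d.recompute()

class Lift:
    """A derived node: an n-ary function lifted over cells."""
    def __init__(self, fn, *deps):
        self.fn, self.deps = fn, deps
        for d in deps:
            d.dependents.append(self)
        self.recompute()            # evaluate once with current values

    def recompute(self):
        self.fn(*(d.value for d in self.deps))

x, y = Cell(1), Cell(2)
Lift(lambda a, b: print(a + b), x, y)   # "print x+y": prints 3 immediately
x.set(10)                               # prints 12
y.set(5)                                # prints 15
```

So yes, callbacks are still there under the covers (`recompute` is one); the FRP surface syntax is what relieves the programmer from building this graph by hand.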

Haskell implementations lose me with talk of arrows, and FrTime probably relies on Scheme's ability to process its own code. What steps would a programmer have to take if he wanted to use FRP in an industrial setting, using a language such as F# (an OCaml derivative for the .NET platform) or Scala (a functional language for the JVM)?

Other than the actual implementation, I'm curious to find out how this relates to the so-called 'rules engine.' Rules engines are marketed as a way to write simple, declarative rules that respond to incoming events in an efficient manner... FRP could be exactly the same thing, right?

In any case, this is very interesting stuff. There is a video of a short FrTime presentation at the following address: http://ll3.ai.mit.edu/ (Session 2, near the end).


Cells is a library that provides something close to FRP (automatic dependency tracking, propagation of updates, and actions on changes) in Common Lisp, without doing any language-level hacking. The programmer instantiates "cells", CLOS objects defined via a macro. Cells then simply uses :around methods (I believe) on the accessors to track dependencies and updates dynamically. So, yes, it seems like an API can provide the required functionality.
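The dynamic-tracking trick can be sketched in a few lines of Python (this is my own hypothetical reconstruction of the idea, not Cells' actual code): a formula cell records which cells it reads while evaluating, so no dependency graph needs to be declared or parsed up front.

```python
# Hypothetical sketch of dynamic dependency tracking, Cells-style.

_current = []   # stack of formula cells currently evaluating

class Cell:
    def __init__(self, value=None, formula=None):
        self.formula = formula
        self.readers = set()        # cells whose formulas read this cell
        self.value = value
        if formula is not None:
            self._recompute()

    def get(self):
        if _current:                # a formula is evaluating: record the read
            self.readers.add(_current[-1])
        return self.value

    def set(self, value):
        self.value = value
        for r in list(self.readers):
            r._recompute()

    def _recompute(self):
        _current.append(self)       # reads during the formula land on us
        try:
            self.set(self.formula())   # also propagates to our own readers
        finally:
            _current.pop()

a = Cell(1)
b = Cell(2)
total = Cell(formula=lambda: a.get() + b.get())
a.set(10)
# total.value is now 12
```

The dependencies are discovered by running the formula, which is roughly what wrapping the accessors buys Cells.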

Forward Chaining

Other than the actual implementation, I'm curious to find out how this relates to the so-called 'rules engine.' Rules engines are marketed as a way to write simple, declarative rules that respond to incoming events in an efficient manner... FRP could be exactly the same thing, right?

Forward chaining rule systems usually have a functional language that does most of the work. The rules have an "if clause" that defines when the script or functional part in the "then clause" should be executed. Changes in the database trigger a search for any rules that might now be eligible to fire. Two popular examples are CLIPS and Jess.
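A toy version of that loop can be written in a few lines (the rule format here is my own invention, not CLIPS or Jess syntax): each rule pairs an "if" predicate over the fact base with a "then" action, and after every change we re-scan for rules that have become eligible to fire.

```python
# Hypothetical forward-chaining loop over a set of facts.

def run(facts, rules):
    fired = set()
    changed = True
    while changed:
        changed = False
        for name, cond, action in rules:
            if name not in fired and cond(facts):
                fired.add(name)
                action(facts)      # may add facts, enabling more rules
                changed = True
    return facts

facts = {"socrates_is_human"}
rules = [
    ("mortality", lambda f: "socrates_is_human" in f,
                  lambda f: f.add("socrates_is_mortal")),
    ("epitaph",   lambda f: "socrates_is_mortal" in f,
                  lambda f: f.add("write_epitaph")),
]
run(facts, rules)
# facts now contains all three items: the second rule fired only after
# the first rule's conclusion entered the fact base.
```

Real engines replace the naive re-scan with an indexing algorithm such as Rete, which is exactly the dependency-tracking flavor of the FRP comparison above.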

Is forward chaining different from FRP?

All the FRP papers talk about a sort of data-flow dependency (either an actual graph, function combinators, etc.). Is that different from forward chaining? Is one more powerful than the other, or are they different names for the same thing?

A simple answer

The simple answer is to say that "reactive programming" can be applied to any language. Applied to rules one gets forward chaining. Applied to functional programming one gets FRP. "Function combinators etc." are just theory that naturally applies to the functional paradigm. Similar thinking can be applied to rules. Perhaps it is really a question of choosing a language and we will never get agreement on that!

What does it mean in practice?

The comparison between forward chaining and FRP is interesting and raises the more basic question of what the real difference between rules and functions is. Why choose one over the other? To me it comes down to this: rules normally use predicates and pattern matching, while functional languages have functions and return values. This makes a big difference, but it isn't easy to say what it all comes down to in practice.

No compiler support needed for FRP

Although a dependency graph may be used (I think FrTime works this way; correct me if I'm wrong), such a graph can be constructed solely via an API, with no compiler trickery (though some type trickery may be necessary in languages such as O'Caml). On the other hand, systems such as Yampa simply let you construct functions with internal "state" via arrows (again, implemented with user-level code) and have no explicit notion of a dependency graph; the entire system is just one big function ("signal function" in Yampa terminology) which is recomputed whenever the input changes.
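The "one big function with internal state" alternative can also be sketched with plain user-level code (a rough approximation of the Yampa idea; the names below are mine, not Yampa's API): a signal function is a step function carrying its own state, and the whole network is one composed signal function that is re-run on every input sample.

```python
# Hypothetical sketch of Yampa-style signal functions, no dependency graph.

class SF:
    """A signal function: state is threaded through successive samples."""
    def __init__(self, step):
        self.step = step            # (state, input) -> (state, output)
        self.state = None

    def __call__(self, x):
        self.state, out = self.step(self.state, x)
        return out

    def __rshift__(self, other):    # serial composition, like Yampa's >>>
        return SF(lambda s, x: (None, other(self(x))))

def integral():
    """Stateful SF: running sum of its input samples."""
    return SF(lambda s, x: ((s or 0) + x, (s or 0) + x))

def arr(f):
    """Lift a pure function to a stateless SF."""
    return SF(lambda s, x: (None, f(x)))

network = integral() >> arr(lambda t: t * 2)
print([network(v) for v in [1, 2, 3]])   # [2, 6, 12]
```

There is no explicit graph here: each new input is simply pushed through the composed function, and "state" (the running sum) rides along inside it, which mirrors how arrow-based systems avoid a dependency DAG.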

Implementation in languages such as F# and Scala is perfectly reasonable; I've made an implementation myself of an amalgam of Yampa and FrTime in O'Caml. I'm not quite ready to make a release but if you're interested you can take a peek at the SVN.


I hope you release a description of how it works, along with the code :)