## Simply efficient functional reactivity

Simply efficient functional reactivity. Conal Elliott.

Functional reactive programming (FRP) has simple and powerful semantics, but has resisted efficient implementation. In particular, most past implementations have used demand-driven sampling, which accommodates FRP's continuous time semantics and fits well with the nature of functional programming. Consequently, values are wastefully recomputed even when inputs don't change, and reaction latency can be as high as the sampling period.

This paper presents a way to implement FRP that combines data- and demand-driven evaluation, in which values are recomputed only when necessary, and reactions are nearly instantaneous. The implementation is rooted in a new simple formulation of FRP and its semantics and so is easy to understand and reason about.

On the road to efficiency and simplicity, we'll meet some old friends (monoids, functors, applicative functors, monads, morphisms, and improving values) and make some new friends (functional future values, reactive normal form, and concurrent "unambiguous choice").

I'm not sure exactly where to classify this submission to ICFP 2008, but I think many here will be interested in it.


### FrTime

FrTime (in PLT Scheme) is an FRP implementation that combines data- and demand-driven evaluation. I can see there are differences between this paper's approach and FrTime's, but I'm not convinced the approach here is better.

### FrTime comparison

Thanks. I unintentionally omitted FrTime from the Related Work section, and I'll add some remarks. It's perhaps something of an apples-and-oranges comparison, as the semantics are quite different: pure & synchronous vs impure & asynchronous.

### They're both fruit

I think these are the most important sort of comparisons. It's easy to say "We're just like the FrobNozzle system [12], but with the extra blarg feature". The interesting comparison talks about the impact of the fundamental choices you're making. For example, is purity important to the semantics of FRP in your system? Is it important to the optimizations? Or is the reason for purity just the underlying use of Haskell? Is synchronous update easy for the programmer to understand? Does asynchrony make their system faster? Slower?

These questions are obviously harder to answer than most of what we write for related work. But they are the ones that make a real difference in the end.

### fundamental choices

Thanks, Sam. I like these questions.

The most fundamental choice for me is simple denotational semantics. Determinacy (in spite of the multi-threaded implementation) and lack of side-effects both have a huge impact on the semantic simplicity. The denotational semantics literally is as simple as

    B a = Time -> a
    E a = [(Time,a)]


and semantic definitions are as simple as

    at (fmap f b) = f . at b


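To make those two equations concrete, here is a minimal executable sketch of the semantic model. This is my own standalone rendering, not code from the paper; the names `time` and `squared` are invented for illustration.

```haskell
-- Behaviors denote functions of continuous time; events denote
-- time-ordered lists of occurrences.
type Time = Double

newtype B a = B { at :: Time -> a }
newtype E a = E { occs :: [(Time, a)] }

-- The law quoted above: at (fmap f b) = f . at b
instance Functor B where
  fmap f (B g) = B (f . g)

-- fmap on events transforms the value in each occurrence.
instance Functor E where
  fmap f (E os) = E [ (t, f a) | (t, a) <- os ]

-- Example: the identity behavior "time", and its square.
time :: B Time
time = B id

squared :: B Time
squared = fmap (^ 2) time
-- at squared 3.0 == 9.0
```

In this model the `Functor` laws for behaviors follow directly from function composition, which is the kind of easy equational reasoning the simple denotation buys.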
Besides simplicity, another fundamental semantic issue for me is supporting continuously (not just frequently) varying values. I don't know whether FrTime had continuously varying behaviors or just discrete ones. I think its temporal primitives seconds and milliseconds were integer-valued, so perhaps it had only discretely-changing behaviors (step functions).

After formal simplicity, my next goals are responsiveness and efficiency of implementation. For these goals, the purity & simplicity of the formal semantics are both challenging (especially determinacy) and helpful (to easily and rigorously justify optimizations).

After re-reading the FrTime paper today, I see that the term "synchronous" is probably being used in (at least) two distinct ways. I used the word to refer to a semantic property, e.g., that fmap f b undergoes transitions exactly when b changes. I did not mean the implementation property of sampling all behaviors at the same time, as in most previous FRP systems, which is very inefficient.

So: no, the semantic purity (and more generally, simplicity) is not just a by-product of using Haskell; and yes, I believe semantic purity makes the system much simpler for the programmer and user to understand. I don't know about the net impact on performance, though I wouldn't sacrifice the semantics to get a faster implementation.

### I am not familiar with FRP.

I am not familiar with FRP. I see there is a construct for a continuous function of time, and one for a series of discrete events. Are these the two?

    B a = Time -> a
    E a = [(Time,a)]

How do you combine them? Can you make a series of events into a function of time?

### Continuous & discrete

B and E are the semantic domains for the library type constructors Behavior and Event, which capture the continuous and discrete aspects of FRP. The domains relate to the data types via semantic functions:

    at   :: Behavior a -> B a
    occs :: Event a    -> E a


For making a behavior from an event (from what you called "a series of events"), the basic tool is

    switcher :: Behavior a -> Event (Behavior a) -> Behavior a


The first argument is the initial behavior, and the second provides a timed stream of new behaviors to switch to.
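Under the list-of-occurrences model, a model-level `switcher` might be sketched like this. This is my own illustration at the level of the semantic domains (assuming a finite, time-ordered occurrence list for simplicity), not the library's implementation:

```haskell
type Time = Double

newtype B a = B { at :: Time -> a }
newtype E a = E { occs :: [(Time, a)] }

-- Sample the most recently supplied behavior, falling back to the
-- initial behavior before the first occurrence.
switcher :: B a -> E (B a) -> B a
switcher b0 (E os) = B $ \t ->
  let arrived = [ b | (t', b) <- os, t' <= t ]
  in  at (last (b0 : arrived)) t
```

For example, `switcher` over an initial constant behavior and a single occurrence at time 2 yields the old value before time 2 and the new one afterward.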

The other main connection between behaviors and events is the ability to take a boolean behavior b and construct an event that occurs each time b changes from false to true.
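The actual semantics of that connection is exact rather than sampling-based, but as a rough, purely illustrative approximation, a sampled version of the false-to-true event could look like the following. The name `risingEdges` and the explicit sample-time list are my inventions, not part of the library:

```haskell
type Time = Double

newtype B a = B { at :: Time -> a }
newtype E a = E { occs :: [(Time, a)] }

-- Sample a boolean behavior at the given times and emit an occurrence
-- at each false-to-true transition between consecutive samples.
risingEdges :: [Time] -> B Bool -> E ()
risingEdges ts b =
  E [ (t2, ()) | (t1, t2) <- zip ts (drop 1 ts)
               , not (at b t1)
               , at b t2 ]
```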

### Sounds analogous to Lucid

Lucid and its related languages use an evaluation model called eduction, a form of demand-driven dataflow coupled with a caching mechanism termed a "warehouse".

Although I'm sure that Conal's new FRP formulation is not isomorphic to eduction, from the basic description it seems to address the same issues.

### Labview?

I wonder why I don't see comparisons of FRP to LabView. Is there a large difference between FRP and LabView's dataflow-driven user interaction? Or is it because LabView is proprietary? Or because its graphical nature leads researchers to think of it as a toy? Or has it just been around too long (I guess no one really compares their shiny new languages to C)?

### HDL

If LabView is mentioned, I must plug the Hardware Description Languages and the way their simulators work: VHDL, Verilog, SystemC, ...

They are essentially push-based, event-driven interactions, with the ability to schedule arbitrary future events:

    Clock = NOT Clock after 10 ns;
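The scheduling idiom in that line can be illustrated with a toy trace in Haskell (purely a sketch of the idea; the names and representation are invented, not any HDL API):

```haskell
-- "Clock = NOT Clock after 10 ns": each transition schedules the next
-- one 10 ns later, carrying the inverted value.
type Time = Int  -- nanoseconds

clockTrace :: [(Time, Bool)]
clockTrace = iterate step (10, True)  -- first rising edge at 10 ns
  where
    step (t, v) = (t + 10, not v)
-- take 3 clockTrace == [(10,True),(20,False),(30,True)]
```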

The mixed mode analog+digital variants (like VHDL-AMS) are somewhat more functional with the continuous behaviour of analog parts.

[rant]Btw, yes, LabView is a toy ;-)[/rant]

### not really

These are similar to other early data flow languages: you define the next time step's value in terms of the current. This leads to tight cycles - I don't think these systems typically feature reasoning to determine whether fixpoints exist in such cases, breaking synchrony assumptions. Regardless, citing them seems redundant to me. [Not to say they're not important; for example, there's some fun work going on in the FPGA world to more directly support them].

I haven't really seen good survey papers providing sufficient depth on the language properties of modern data flow / reactive systems. By that I mean the couple I've read both skipped FRP and Lucid/Synchrone styles of languages, which, to me, were fairly significant departures from Esterel et al.