Limitations of FRP?

I'm doing some research on programming paradigms that deal with time, like functional reactive programming, and I was wondering if anyone has explored how far one can go with FRP. From my own personal experience doing Superglue (an OO reactive PL based on FRP-like signals), I had to abandon FRP when working on incremental compilation and language-aware editing: although FRP should be suitable for this problem (lots of consistency to maintain between code, AST, type system, and UI), it turned out that the problem required adding things to collections from distributed locations (e.g. in parse tree computations), whereas FRP seems to require that everything for a given piece of state be specified centrally.

The best source I can find for pushing FRP for more complex programming domains is Antony Courtney's thesis. This still seems to be where things are at.

The last time we discussed this topic was in 2007. Are there any new developments in FRP expressiveness?

State in FRP can be

State in FRP can be distributed throughout the program. It's modeled using folds (or integrals, accumulators) over time. This is actually one of the significant weaknesses of FRP. It means FRP cannot be staged or layered compositionally: if you reactively update the FRP program from an earlier stage, you lose important state encapsulated in the FRP program of the later stage. This sad design is half of why I abandoned FRP.
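
For instance, here is a minimal sketch of the fold-over-time idiom (assuming events modeled as a lazy stream of occurrences; this is illustrative, not any particular library's API). Note how the step function and initial state must both be supplied centrally, at the fold site:

type 'a stream = Cons of 'a * 'a stream Lazy.t

(* accum threads an accumulator through an event stream: the canonical
   FRP encoding of state. All state for this accumulator is declared
   here, in one place. *)
let rec accum (f : 'st -> 'a -> 'st) (st : 'st) (Cons (x, xs)) : 'st stream =
  let st' = f st x in
  Cons (st', lazy (accum f st' (Lazy.force xs)))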

The other significant weakness of FRP is dealing with open systems, i.e. modeling independent agents sharing resources and services. Such things can be modeled, but at least my efforts have been awkward and unscalable.

Anyhow, the most recent works I've been following in FRP are from Evan Czaplicki (Elm language) and Apfelmus (reactive-banana). Both are focused on FRP based GUIs.

Elm didn't seem to offer

Elm didn't seem to offer anything new except being web friendly. I didn't see anything new in reactive-banana either; it seems that FRP is stuck where you described. Is this just an intrinsic limitation of the approach?

Intrinsic limitations

Most of the limitations of FRP are intrinsic to the nature of 'functions' and signals/events. There has been some success combining FRP with substructural types (cf. wormholes or neelk's bounded-space FRP). These work within the limits of FRP, but do simplify certain expressions and improve performance. FRP can also be used effectively with dependent types.

I wasn't aware of the

I wasn't aware of the wormhole work, thanks!

I'm perhaps asking a question that has no sensible answer in FRP: what kinds of programs can be written with it elegantly, regardless of performance problems? Given a lack of generality, such answers are typically by example (e.g. here is how to do animation, robotics, simple games with FRP), but they should also include negative examples (here is what you can't do in FRP, at least not very nicely), which seem to be lacking.

What if we look beyond purity to impure FRP (or FRP-inspired, if you accept FRP as pure) systems like FrTime and Flapjax? We already know effects are compatible with these systems, but signals seem to be prevented from initiating these effects (they can only react to them).

Control systems and signal processing

This is my opinion. The most elegant programs for FRP are those that involve control systems or signal processing. Synchronous dataflow languages (such as Lustre and Esterel) are good at the same areas. Control systems and signal processing are an excellent fit for FRP because they are naturally closed systems that tend to make very limited use of state (e.g. small buffered windows, dampening or decay models), and thus encounter neither of FRP's weaknesses.
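
As a tiny illustration of that fit (a sketch only, reusing a lazy-stream encoding of signals rather than any particular FRP library's API), a dampening filter is just a nearly-stateless recursive definition:

type 'a stream = Cons of 'a * 'a stream Lazy.t

(* exponentially-weighted low-pass filter: each output nudges the
   previous output toward the input by factor alpha. The only state
   is the current estimate -- the "very limited use of state" above. *)
let rec low_pass alpha acc (Cons (x, xs)) =
  let acc' = acc +. alpha *. (x -. acc) in
  Cons (acc', lazy (low_pass alpha acc' (Lazy.force xs)))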

The use of FRP for small games and GUIs is possible, and at least has some advantages compared to 'callback hell'. But there is a lot of pressure for GUIs and games to be modeled as open systems - e.g. to support multi-app, extensible app, multi-user, shared services. Further, GUIs and games are extremely stateful, and FRP's local state hinders reactivity at the app layer. I don't believe FRP will work well by itself. We end up gluing edges between FRP services with other paradigms. To improve composition and avoid discontinuity, I really would prefer a model that works well by itself.

An ad-hoc mix of imperative effects with signal processing - which I think you mean when you say 'impure FRP' - strikes me as a terrible idea. Like many hybrids, the union inherits the limitations and weaknesses of both its parents. But I believe that a less ad-hoc mix, i.e. to support effects without undermining local equational reasoning, could be very successful. My RDP is such an effort.

Synchronous dataflow languages

Synchronous dataflow languages (such as Lustre and Esterel) are good at the same areas.

Nice that you mention them. I always have a slightly sad feeling when I hear from the FRP community, which doesn't seem to know about this previous work. It's an uncomfortable position: I don't know enough about FRP to authoritatively assert that they missed something or reinvented existing work, but there is a nagging suspicion that the people who do know enough might not have done their homework on this -- or only some of them, with a large part of the community not benefiting from this prior work.

which doesn't seem to know

which doesn't seem to know about this previous work [Lustre/Esterel]

The earlier FRP papers I've read often cited these synchronous dataflow languages under related work. FRP is higher-order, however, whereas these early synchronous languages were first-order. Being first-order allowed compilation to efficient machine code, but addresses only a subset of FRP.

I think we've all gone

I think we've all gone through these papers before, but their domains are very niche. The nice thing about the Fran paper is that it brought more power to the paradigm and focused on examples that were relatable (animation).

FRP is definitely inspired by these earlier languages, but has been easier to relate to than those systems, so a lot of us just forget the work that came before it.

as a layperson

i wish somebody would do a great tl;dr summarization of the salient pros and cons from these systems. like, in SDF vs. FRP, how well do they (a) handle theoretical stuff and (b) handle pragmatic stuff?

how do they handle cycles/loops? continuous vs. discrete data? how well do they perform in the real world? how easy are they to debug? how many people have used them (to flush out the inevitable bugs in the runtime and libraries)? which are open source? who has achieved DoD MilSpec 5-9's or whatever? etc.

You can't relate to the stopwatch?

I struggled with Berry's runner, but his stopwatch is quite cute.

The later work by Marc Pouzet et al is higher order (Lucid Synchrone) and roughly contemporaneous with/prior to FRP. That the FRP crowd ignored that work is much less defensible as they did not compare how their attempts to tame their resource leaks were better than just assuming synchrony, which has a long and proud history of being compositional and predictable (cf digital circuits).

In any case I find your last para a bit of a furphy: just because the work is obscure doesn't mean you get to ignore it! Are we scientists or what?

I'm just saying out loud

I'm just saying out loud what unfortunately happens quietly, don't shoot the messenger :) Domains create communities, but also erect barriers, given that the community has its own priorities and culture, and eventually can become quite insular.

Bridging domains takes two: producers must communicate and write papers in a way that attempts to transcend their community, and consumers must be open-minded and learn about other communities so they can grok more of their work. There seems to be very little incentive to do this in academia, especially since peer review seems to reinforce insularity rather than promote outreach. And scientists are mostly poor and always looking for the best bang for the buck...

See below for a discussion about SAC and FRP. Even though both mostly attend the same conferences (ICFP, POPL, PLDI), there is very little transfer between them.

We've looked at almost

We've looked at almost everything mentioned here, and I've discussed it with many of the authors. Careful with the broad strokes...

I recall a survey paper from 1-2 years ago (from Martin's group? Googling shows one from RIT). I'd love to see experience reports. We built real apps with it, and I know a couple of companies that have as well -- from our own experience, I found concerns like space leaks or the FrTime-style approach to equivalence not to be too big of a deal; more fascinating were the conventions and practices core to readability. Likewise, redesigning libraries with FRP concepts was liberating.

Perhaps you are referring to

Perhaps you are referring to the Guido Salvaneschi paper done in Mezini's group? Abstract:

Reactive applications are difficult to implement. Traditional solutions based on event systems and the Observer pattern have a number of inconveniences, but programmers bear them in return for the benefits of OO design. On the other hand, reactive approaches based on automatic updates of dependencies - like functional reactive programming and dataflow languages - provide undoubted advantages but do not fit well with mutable objects. In this paper, we provide a research roadmap to overcome the limitations of the current approaches and to support reactive applications in the OO setting. To establish a solid background for our investigation, we propose a conceptual framework to model the design space of reactive applications and we study the flaws of the existing solutions. Then we highlight how reactive languages have the potential to address those issues and we formulate our research plan.

There was another survey paper done in Wolfgang's group:

Reactive programming has recently gained popularity as a paradigm that is well-suited for developing event-driven and interactive applications. It facilitates the development of such applications by providing abstractions to express time-varying values and automatically managing dependencies between such values. A number of approaches have been recently proposed embedded in various languages such as Haskell, Scheme, JavaScript, Java, .NET, etc. This survey describes and provides a taxonomy of existing reactive programming approaches along six axes: representation of time-varying values, evaluation model, lifting operations, multidirectionality, glitch avoidance, and support for distribution. From this taxonomy, we observe that there are still open challenges in the field of reactive programming. For instance, multidirectionality is supported only by a small number of languages, which do not automatically track dependencies between time-varying values. Similarly, glitch avoidance, which is subtle in reactive programs, cannot be ensured in distributed reactive programs using the current techniques.

The topic is broached briefly in the former and the latter, respectively:

In synchronous dataflow languages like Esterel [3] and LUSTRE [7], the program defines a network in which a synchronous signal propagates and triggers the computations in the nodes. These languages are usually compiled to finite state machines and – at the cost of limiting the language's expressiveness – provide guarantees of real-time and memory-bound execution.

and

There have been programming paradigms that have been used to model (real-time) reactive systems. These include synchronous programming, dataflow programming and synchronous dataflow programming. In this section, we give a brief review of those paradigms because there exist surveys [Benveniste et al. 2003]; [Whiting and Pascoe 1994]; [Johnston et al. 2004] that give a full review of the research on the languages in the family of synchronous programming, dataflow programming and synchronous dataflow programming.

It means FRP cannot be

It means FRP cannot be staged or layered compositionally: if you reactively update the FRP program from an earlier stage, you lose important state encapsulated in the FRP program of the later stage.

If FRP is about dependence on time, then how does staging even make sense? Do you have an example?

As a philosophical aside, I don't think that "not compositional" is a very specific complaint. The ways that entities can be composed are domain specific and often concerns are entangled, and "not compositional enough" is only a legitimate complaint so long as there are missing ways to compose things that would actually work. Just trying to make things composable by fiat (cf. aspect-oriented programming) probably isn't a good idea. It's preferable if you can say "FRP doesn't compose in the following way, which is clearly desirable".

Staging

Staging can be spatial or temporal.

In the case of live programming, it's simply more spatial than temporal. The temporal aspect is expressed through latency (ideally low), while the spatial aspect is expressed through different stages in a pipeline processing program source (e.g. parse, compile, link, install).

As a philosophical aside, I don't think that "not compositional" is a very specific complaint.

Not by itself, no. But "cannot be staged or layered compositionally" is a more specific complaint.

I favor a formal definition of compositional - one with three ingredients: algebraic closure, uniform operators, and inductive properties (including invariant properties). So when I speak of a model being compositional or non-compositional, I am speaking in formal terms of lacking operators, useful properties, or algebraic closure.

Bloom handles this better, I

Bloom handles this better, I think. You can still express (discrete-time) FRP-like abstractions but multiple rules can contribute assertions or retractions to one table, allowing open composition. I was recently leaning towards a more timeless, FRP-like model but ran into exactly the same problems that you and dmbarbour describe.

I'm currently in the process of writing an incremental compiler for Aurora using a bloom-like model. I'll be sure to let you know how it goes.

Nice! I've looked at Bloom

Nice! I've looked at Bloom as well as other datalog variants. I wonder how well it handles retraction, say you have:


A if C
B if A
A if B
C

Now if C goes away at some point in time, are A and B stuck being true? Or must Bloom programs be strictly monotonic? This is quite important, because things tend to go away in an incremental compiler, while cycles are common.

Each fact in bloom is true

Each fact in bloom is true at a specific time. So your rules could be written as:

A@T if C@T
B@T if A@T
A@T if B@T
C@0
C@T+1 if C@T and ¬retract(C)@T ;; this rule added implicitly if C is declared as persistent

If you retract C at some time T=t then at the next time-step A and B are no longer true (unless they could be derived some other way).

This naturally leads to using deductive rules (where head and body have the same time T) to express views on state, and inductive rules (where the head has time T+1 and the body has time T) to express changes to state. Deductive rules may not have any cycles with non-monotonic edges (otherwise the fixpoint is ill-defined), but inductive rules are completely unrestrained (because the passage of time imposes a well-defined result at each time-step).
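
To make that concrete, here is a rough sketch of one such timestep over the A/B/C example (plain OCaml over sets; the rule encoding is a hypothetical stand-in, not Bloom's actual API):

module S = Set.Make (String)

(* deductive rules from the example: A if C, B if A, A if B *)
let deduce facts =
  let step fs =
    let fs = if S.mem "C" fs then S.add "A" fs else fs in
    let fs = if S.mem "A" fs then S.add "B" fs else fs in
    let fs = if S.mem "B" fs then S.add "A" fs else fs in
    fs
  in
  let rec fix fs =
    let fs' = step fs in
    if S.equal fs fs' then fs else fix fs'
  in
  fix facts

(* inductive persistence: C@T+1 if C@T and not retract(C)@T *)
let next facts ~retract_c =
  let base = if S.mem "C" facts && not retract_c then S.singleton "C" else S.empty in
  deduce base

let () =
  let t0 = deduce (S.singleton "C") in   (* {A; B; C} *)
  let t1 = next t0 ~retract_c:true in    (* {} -- A and B wash out *)
  assert (S.cardinal t0 = 3 && S.is_empty t1)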

It reminds me strongly of the ideas in functional relational programming which emphasises separating essential state from derived state.

I *think* that the views can be computed incrementally using standard datalog techniques, but I haven't attempted it yet.

I'm a bit confused. Are

I'm a bit confused. Are deductive rules still forward chained? How would the logic engine wash out the previously computed A and B facts if that was the case? If not, then derived state is simply recomputed on each time step, and how would that be incremental?

The DRed algorithm is able

The DRed algorithm is able to handle deletion of source facts in incremental views. I wasn't totally sure that it would extend to handling the temporal recursion but LogicBlox mention offhand that that is what they use. I've only skimmed over the original DRed paper but my initial impression is that you forward chain the deleted fact to find everything that could be derived from it and then somehow account for facts which could be derived in other ways.
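
For what it's worth, the shape of that two-phase idea on the tiny A/B/C example might look like this (a loose illustration of over-delete/re-derive only, not the actual DRed algorithm or its bookkeeping):

module S = Set.Make (String)

(* immediate consequences of one fact, per the example rules
   A if C, B if A, A if B *)
let consequences = function
  | "C" | "B" -> S.singleton "A"
  | "A" -> S.singleton "B"
  | _ -> S.empty

(* phase 1: forward-chain from the deleted facts, marking everything
   that might have depended on them *)
let rec over_delete suspect =
  let grown = S.fold (fun f acc -> S.union (consequences f) acc) suspect suspect in
  if S.equal grown suspect then suspect else over_delete grown

(* phase 2: keep any suspect fact that can still be derived from the
   facts surviving deletion (deduce = rule fixpoint, as sketched above) *)
let rederive deduce all_facts suspect =
  S.inter suspect (deduce (S.diff all_facts suspect))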

Good, I'll look into it.

Good, I'll look into it. I've done something similar with negative passes before (do a pass where you just remove facts, never add them), but they aren't great for doing work in parallel (the negative pass has to be global) and the language has to be more limited than what I need (disjunction is problematic).

If you can write an incremental compiler in Bloom, the experience alone would be something to read about. Good luck!

Bloom is not concurrent :-(

Unfortunately, Bloom has a fundamental limitation on expressiveness because it is not concurrent: a single global state evolves from time T to time T+1.

Take a look at CLF/Celf for

Take a look at CLF/Celf for an alternative approach using linear logic:

http://www.qatar.cmu.edu/iliano/projects/metaCLF/

(admittedly the project webpage is less informative than it could be. i've got some slides on the topic here: http://www.flickr.com/photos/8243148@N08/sets/72157635668011986/)

With a fair bit of hand-waving

The most vexing issue (I won't call it a limitation per se) with FRP for my domain (graphics/games) is that it wants to model "behavior" as something an agent/entity/object is, rather than as something it does. This is consistent with the "F" in FRP, and provides various interesting benefits (or else we wouldn't be so enamored of FRP), but it doesn't appear to be a good match for how many (most?) people think about behavior.

Contrast FRP with something like the script for a play, which is a well proven technology for describing time-varying behavior that coordinates multiple agents. What would an FRP program for Hamlet look like?

One visible symptom of this issue is when complex applications of FRP end up creating types that represent "deltas" or partial states, along with composition operators, and then describe many behaviors in terms of time-varying deltas, rather than time-varying states. This is effectively constructing a "thing an agent does" model of behavior inside of FRP, to make up for its absence.

Good point. In Superglue,

Good point. In Superglue, signals were ports in objects, and there were actually some interesting things we could do with that (listed in the 2006 ECOOP paper). I'm very interested in Rodney Brooks' behavior-oriented programming as used in Kodu, where object behaviors are essentially reflexes based on sensors (conditions) and actuators. Unfortunately, it seems TOO biased to gaming... and of course robotics :)

What about Self-Adjusting Computations (SAC)?

One thing that I think is interesting is how little SAC comes up when discussing FRP. I wrote a blog post that discussed this question recently.

I think SAC is an interesting point in the design space, allowing you a great deal of dynamism and flexibility without having to sacrifice performance. The big difference with FRP is you give up the ability to reason about time directly. You can express how one signal depends on another, but you can't depend on the past history of another signal. When you want history dependence in such a world, you need to push it into the system from the outside. And you can push a little farther by building some limited history operators into an SAC system in a way that gives you some more expressiveness, without giving up what makes SAC systems useful.
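
To illustrate the difference, an SAC-style interface has roughly this shape (a hypothetical module signature written from the description above, not any library's actual API): you build a dependency graph of incrementals, but there is no operator for reaching into a value's past.

module type SAC = sig
  type 'a t                              (* an incrementally-maintained value *)
  val return : 'a -> 'a t
  val map2 : 'a t -> 'b t -> ('a -> 'b -> 'c) -> 'c t

  type 'a var                            (* an input the outside world can set *)
  val var : 'a -> 'a var
  val watch : 'a var -> 'a t
  val set : 'a var -> 'a -> unit         (* history dependence is pushed in
                                            from outside, via set *)
  (* note: no fold-over-time here, unlike FRP's events/behaviors *)
end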

Clearly, SAC is less good as a way of describing dynamic systems, but it's good for describing a calculation built efficiently on top of a dynamically changing world. And I think it can be used for large complex problems for which typically FRP is either too inefficient or insufficiently dynamic.

Nice post, very useful!

Nice post, very useful!

The SAC people have never really attempted to explain their work in the context of FRP (Acar doesn't even mention FRP in his 2005 thesis). This makes me wonder if there is any connection at all, or if it was just a huge oversight. We see them acknowledge the existence of FRP when they later present an ICFP paper in 2008, but they only compare against very old work.

I'm not sure how one would write a compiler in SAC that could adjust to changes in code, even though, by its description, it SHOULD be suitable for this task. I know my benchmark is cliché, but it has worked well in making me design managed-time systems that are more general and not just good for UI.

The omission is on both sides

It's really odd how the FRP and SAC folk have just ignored each other's work. Very odd really.

I think they're quite closely related, as my blog post outlines. Several FRP systems have even been built on SAC cores! Really, I think there is a spectrum of semantics with different tradeoffs. That said, SAC seems more suitable for a compiler than FRP, I think.

Communities

Probably because the different communities are so well separated? Even within FP....

SAC does seem more suitable for a compiler, I'm just not sure how it would be done. Actually, if the compiler is feeding into an editor (which mine always do...), it also becomes a UI problem that should be more FRP.

There is a post on programmers stack exchange that attempts to sort it out.

I never realized that FRP didn't include incremental computation by default, I just always implemented it like it did! I also didn't realize that FRP wasn't very dynamic, but again I never implemented the standard model using switches (just dependency tracing, change propagation); I guess I was doing SAC and not FRP all along with SuperGlue (granted, before Acar's thesis was out as we graduated about the same time) :)

interfaces and purism

Two different sources of confusion are going on.

First, FRP is about semantic interfaces / abstractions. It's a way to extend declarative/functional programming to the temporal domain. The big divide here is on the funny relationship between dynamism / history and what is meant by equational reasoning. What you're considering SAC semantics seems closer to, say, FrTime, and on the opposite end are Elm and most of the other systems in the Jane St. blog. I do not find it obvious that the equational equivalencies argued for by the Haskell family of systems are actually correct, and I definitely disagree with throwing out the programs that they currently preclude.

Second, FRP is used in performance sensitive domains and implementations can automate key optimizations. Original FRP systems were 'pull'-based data flow, but most modern systems are 'push'-based and incremental. The Haskell-style semantics problem rears its head here: by forcing its particular form of reasoning, either histories have to be kept or a lot of programs should be statically rejected. FrTime does an alternate semantics and thus can accept those programs while still optimizing them under its notion of equational reasoning. Systems like Rx become interesting because I can envision controlling the evaluation style. (In Superconductor for example, because it runs on the GPU, incremental evaluation would often be slower.)

Going back to your top-level question, most advances have been on dealing with restricted semantics, and thus likely uninteresting to you. The exceptions to me are:

(a) Om and your / Jonathan's work. MzTake proposed an aspect-like way of monitoring arbitrary program terms. More recent work enables overriding behavior; in terms of MOP, the dual of get is set.

(b) The work on connecting FRP to temporal logic. I can see these bearing expressive fruit because synthesis of temporal programs from logical specs is a classic area. Current work does not go here yet, however.

Edit: (c) Extensible scheduling: Rx supports defining your own scheduler. This opens up a world of flexibility. The core observation is similar to LINQ's, that you can attach your own semantics, except more machinery is now in place for doing event processing.

First, FRP is about semantic

First, FRP is about semantic interfaces / abstractions.

I agree, but I never realized this (having read the Fran paper early on) until I read about FrTime. It seems that the pure FRP domain (to distinguish it from FrTime and Flapjax) is juggling a lot of concerns that might not be interesting to someone outside of a pure mindset, while FrTime just laid out very simple and general takeaways.

Second, FRP is used in performance sensitive domains and implementations can automate key optimizations.

As I understand it, this is the domain of the (often real-time, dataflow) reactive synchronous and asynchronous languages that precede FRP (Esterel, Lustre, Lucid). FRP seemed to go beyond that into more general domains (animation). It seems like it's gone full circle?

MzTake is nice, I wish there was more follow up on scriptable debugging using higher-level abstractions (as opposed to just Bracha mirrors).

I'm not sure what is interesting about Rx. Beyond the simple duality between iterators and observers, it seems like a big step back from data binding. It doesn't seem as high level as FRP or similar systems at all.

SAC is about taking a

SAC is about taking a computation y = f(x), and then efficiently computing y' = f(x') where x' is in some sense close to x. This is about optimization. Semantically it would be perfectly fine to rerun the whole computation for f(x'). This is more or less the same as the signal/behavior subset of FRP: you have a dataflow network, and given some changes in the inputs, efficiently compute the changes in the outputs. You could recompute the whole network, and that would give you the right answer but it would be inefficient. Therefore the implementation of FRP and SAC have many similarities. Both can be implemented by ordering the nodes in the computation so that successors come after predecessors, and then doing a walk in topological order with a priority queue.
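
A sketch of that shared implementation strategy (illustrative types only, not any particular system's internals): nodes are keyed by topological height and recomputed in height order, and only nodes whose value actually changed wake their successors.

type node = {
  id : int;
  height : int;                (* topological rank: successors have greater height *)
  mutable value : int;
  compute : unit -> int;       (* recompute this node from its predecessors *)
  successors : node list;
}

module Q = Set.Make (struct
  type t = int * int           (* (height, id): an ordered queue that dedupes *)
  let compare = compare
end)

(* dirty: externally-changed nodes; find: id -> node lookup *)
let propagate (dirty : node list) (find : int -> node) =
  let q = ref (List.fold_left (fun q n -> Q.add (n.height, n.id) q) Q.empty dirty) in
  while not (Q.is_empty !q) do
    let ((_, id) as elt) = Q.min_elt !q in
    q := Q.remove elt !q;
    let n = find id in
    let v = n.compute () in
    if v <> n.value then begin
      n.value <- v;            (* only a real change wakes the successors *)
      List.iter (fun s -> q := Q.add (s.height, s.id) !q) n.successors
    end
  done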

One of the main open problems in my view is an algorithm to do this incremental computation that (1) executes in time proportional to the number of nodes that change (2) does not introduce any unnecessary ordering between computation in different nodes (this would enable it to run in parallel/distributed). The priority queue algorithm satisfies (1) but not (2). A simple algorithm that marks potentially changing nodes as 'dirty' in a forward pass, and recomputes them in a backward pass satisfies (2) but not (1), because it may mark nodes dirty that do not actually change.

FRP has events, which SAC does not have. FRP programs with events no longer correspond to a stateless computation y = f(x); they can accumulate data over time. I'm not convinced that this is the right way to extend SAC for GUI programming. It's not sufficient for compositional GUI programming (as has been discussed before on LTU at length), and it has a lot of complications as well. For one the semantics aren't exactly as clean as they look at first sight. If you use the clean timeless semantics then you have space leaks, and otherwise the semantics of operations like fold depend on when the fold got initially called. Secondly it does not actually lead to a pleasant programming style the way programming with signals/behaviors does. It is often unnatural to define each data value as a function of all the things that might change that value. It's often more natural to start with the event sources such as a button widget and define what changes in response to that event firing.

So history is not a part of

So history is not a part of the computation? I've discussed this before offline with some colleagues, and I get the impression that they are doing things like re-execution in addition to (or instead of) computing y' = f(x').

It is often unnatural to define each data value as a function of all the things that might change that value. It's often more natural to start with the event sources such as a button widget and define what changes in response to that event firing.

You will probably like my next paper (or maybe not, since we deviate from FRP in many fundamental ways).

As far as I know, history is

As far as I know, history is not part of the computation. The re-execution and memoization is to efficiently compute f(x').

Any idea when that paper

Any idea when that paper will be available?

The Onward! season is upon

The Onward! season is upon me. Thankfully, the deadline is around Wednesday :)

It's funny that you say

It's funny that you say they're separated communities, since they both largely grew out of CMU (Umut Acar and Conal Elliott were both CMU PhDs). Umut does actually have a student now working on bridging the problem domains:

http://www.cs.cmu.edu/~smuller/research.html

Separated at CMU

Funny! I hadn't realized they were both at CMU. That makes it all the more interesting that people who use one tend to not know a lot about the other.

Glad to hear that work is going on to close the gap!

They are separated by more

They are separated by more than 15 years in their PhDs (Conal 1990, Umut 2006). Conal Elliott was also advised by Frank Pfenning, while Umut Acar was advised by Bob Harper and Guy Blelloch. The lineage is quite different even if the school is the same (CMU is a big place).

FRP as Curry-Howard for temporal logic

In my view, the most interesting recent development in FRP has been the realization that you can derive FRP by viewing reactive programs as the constructive content of formulas of linear temporal logic.

This observation was first made independently by Wolfgang Jeltsch and Alan Jeffrey. Subsequently I worked out a type system for it, and showed that this kind of type discipline also lets you statically rule out space and time leaks (see my ICFP 2013 paper, Higher-Order Functional Reactive Programming without Spacetime Leaks). Then, at POPL 2014, Andrew Cave and his coauthors showed (in Fair Reactive Programming) that you could also use this approach to statically encode liveness guarantees as well.

There are two big reasons this is exciting, IMO. First, both the programs you write and their types get a lot simpler with this formulation of FRP. Essentially, you can write the naive recursive stream programs you first thought of, and get a static guarantee about causality and space usage on top of that -- the programs essentially look like Fran programs, or even simpler. Beyond that, we can also program with things other than streams. There are plenty of other useful temporal primitives (such as promises and processes), and it turns out that they all have natural interpretations in temporal logic, too. So this lets us integrate a wide variety of temporal primitives smoothly and without hassle.
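
In code, the flavor is roughly this (a sketch using a thunk as a stand-in for the next-step modality; the real systems keep 'a next abstract and enforce the discipline with types or, in Neel's OCaml library, dynamic checks):

type 'a next = unit -> 'a            (* stand-in for the "next step" operator *)
type 'a stream = Cons of 'a * 'a stream next

(* the naive recursive program you first thought of: causality falls
   out because the tail is only available at the next tick *)
let rec map (f : 'a -> 'b) (Cons (x, xs)) : 'b stream =
  Cons (f x, fun () -> map f (xs ()))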

Just an update, Neel has

Just an update: Neel has published a simple implementation in OCaml of the idea from his papers. Instead of static assurances, there's a dynamic contract check to ensure no leaks exist, since OCaml doesn't have the requisite type system.

It's incredibly simple, such that I easily implemented it in C# with very few changes.

Events

That's cool, but how do you use events? The typical (imperative) example is, e.g., a mouse trigger. With Rx you can do stuff like (Scala notation)

 val mouseDown = Subject[MouseEvent]
 val mouseUp = Subject[MouseEvent]
 val mouseMove = Subject[MouseEvent]
 val mouseLeave = Subject[MouseEvent]
 val mouseEnter = Subject[MouseEvent]     
 val mouseDrag = (mouseDown followedBy mouseMove) takeUntil mouseUp
 val mouseArmed = for {
    press <- mouseDown
    pressOrReenteredBeforeRelease <- observe(press) or (mouseEnter takeUntil mouseUp)
 } yield pressOrReenteredBeforeRelease

 val mouseDisarmed = for {
    arm <- mouseArmed
    nextLeaveOrRelease <- (mouseLeave or mouseUp) take 1
 } yield nextLeaveOrRelease

 val mouseTrigger = for {
    arm <- mouseArmed
    releaseBeforeLeave <- mouseUp takeUntil mouseLeave
 } yield releaseBeforeLeave
 
 // add a callback
 mouseTrigger.subscribe((clickEvent) => { /* do something */ })

 // now fire some events to simulate mouse input
 mouseEnter.onNext(MouseEvent(...))
 mouseDown.onNext(MouseEvent(...))
 mouseUp.onNext(MouseEvent(...)) // my callback is called
 

How would this work with FrpNext?

Nice. Do you have any ideas

Nice. Do you have any ideas about making this more efficient? When one event fires you have to add a Wait to all events that did not fire simultaneously, and all consumers of those events have to process that Wait. If you have a lot of events in the system it is unfortunate that all those have to do no-ops. Do you see any way to avoid this while keeping the system simple?

Special implementation required for performance

I think the idea is that, like monads, this provides a way to define the semantics of a reactive value as a pure computation while preserving the operational characteristics, so that it can be implemented efficiently in a straightforward way by a compiler. But compiler support is probably required for performance.

I don't think relying on a

I don't think relying on a compiler for efficiency usually works out well, especially for asymptotic improvements that will require global data representation changes.

This isn't a "heroic optimization"

One of the big points of structuring things this way is to preserve the encoding of the desired operational behavior. The compiler is what's generating the code, so not "relying on a compiler for efficiency" isn't really an option. What doesn't work out in practice is ignoring operational semantics and relying on the compiler to work backwards to find an efficient implementation, but that's not the approach here.

That is exactly what's

That is exactly what's happening here, though... how do you expect a compiler to optimize out Waits? Of course you rely on the compiler to generate the code, and you can even rely on it to do local optimizations, but you have to make the data representation and algorithms efficient yourself.

Hmmm

Well, I would think the idea would be to replace pattern matching on 'Wait' mechanically with some kind of callback. I haven't really thought through the details for Neel's system, which I'd still like to understand better. I'm doing something somewhat similar in my own language and it's possible I'm reading too much of that into the above comments.

That sounds like the

That sounds like the sufficiently smart compiler (Event with Wait/Now is a user-defined data type, so you need some kind of magic transformation that converts data types with a Wait-like component to callbacks). The issue is that the system is synchronous reactive programming, which is certainly interesting, but what you need for GUIs, networking, etcetera is asynchronous reactive programming. I think this requires a change in the semantics, perhaps by making a type like Event primitive.

Event with Wait/Now is a

Event with Wait/Now is a user defined data type.

Are you sure that's true of Neel's technique and not just this implementation?

Yes.

Yes.

This raw interface is, honestly, not so useful as is, but the slightly miraculous fact is that this is the complete API we need to build all the higher-level abstractions --- like events and streams --- that we need to do real reactive programming.

Not sure

I would expect built-in support for asynchronicity as well, and I got the impression from his previous comments that asynchronous event handlers emerged naturally in his system. Maybe polling works well enough performance-wise for his interests right now.

Do you have a link to those

Do you have a link to those comments? Both the blog post and the paper seem to indicate that the system is completely synchronous, and that asynchrony is built on that with user defined Wait steps.

Well, look here where he's

Well, look here where he's talking about "callback style APIs" (the mathjax is jacked).

Waits

I am queued to read "Clojure Reactive Programming" as soon as I finish O'Reilly's "Clojure Programming" so I can follow it. I picked it because the reviews said it covered a few (three?) reactive programming models. I'm reading my way through pretty quickly.

But at this point I don't understand the model and I don't understand what this talk about processing waits is for.

Another thing I wonder about is tick-based processing. I remember reading that Alan Kay's latest project for shrinking a system is based on Smalltalk, reactive programming, and constraint-based layout...

There was some mention of using tick-based reactive processing and that this solved some problems; I'm curious what problems it solved.

Yes :)

The model in this blog post (and paper) is, as you correctly observe, a completely synchronous model. If you want a gnomic utterance about it, you could say that this style of FRP (based on building temporal data using a next-step type operator and recursive types) is "higher order synchronous dataflow".

The performance cost of this kind of approach is that (a) everything is clocked to the rate of the most quickly-changing datasource, and (b) there can be a lot of "no-change" events (Waits in my example). However, the performance advantage of this approach is that (a) there is generally a lot less buffering you have to do than in more asynchronous strategies, and (b) if the UI is changing very quickly you don't need to waste effort maintaining a DOM/scene-graph/etc. (Graphics programmers sometimes call the style I used "immediate mode GUIs".)

I and my collaborators do have a new draft (which I haven't blogged about yet, but will soon) about how to understand asynchronous callback-y APIs type-theoretically, but the big open question is how to integrate the two styles into a single programming model. (Eg, a webpage might have a subwindow playing a videogame in it, where you want most of the page to be retained mode but the game to be immediate mode.) This is needed because each style is expensive when the other one is cheap.

Appropriateness of Next Next?

Here's one thing I can't figure out about this approach: do we really want to expose stream indexing to programmers? I can make approximate sense of the Next modality as way to discuss updates and events - an event takes us from the current time to the "next" time. So an event splits time into a before and an after.

But when would we ever want programmers writing functions that reference the next next? Hardcoding in the view of "time" as integral steps seems like a bad idea. The ability to write types like 'next next Int' seems like an ugly aspect of this API. Assuming you disagree, can you provide an example of how this can be useful?

Doh

Doh, no answer to this question. I hope it wasn't impertinent. I don't think stream based FRP is common to Neel's approach - can anyone else address this? I'm curious how the stream based FRP approaches think about this.

Streams are not a primitive

I don't know about stream based FRP, but streams are not a primitive construct in his model. The primitive construct is a cell which currently does not have a value but will get one at the next timestep. It's like a future, but all the currently active futures will get their values at the same time at the next timestep. The value that a cell gets could contain yet more cells, which will get their value at the next next timestep, etc.

Yes

Yes, you're right. Poor wording on my part. My question is: why would we want to bake in a notion of discrete timesteps indexed by natural numbers? When would we ever want to build a value of type ••Int?

Synchronous dataflow

Synchronous dataflow languages like Lustre and Esterel have a similar model. According to the wikipedia page they are used in aircraft and nuclear reactor control software.

Trickiness if we don't have time steps, trickiness if we do

Hardcoding in the view of "time" as integral steps seems like a bad idea.

I don't think it's a great idea for most applications, but it is what LTL provides. If you want to use a temporal logic with a more dense timeline than LTL, you might need type-level rational numbers (as in RDP). If you don't think programmers will want to hardcode absolute durations (I don't), you might need hybrid types. These could make the embedding into OCaml trickier.

One place an integer timeline makes sense is in ad hoc game mechanics. A turn-based game has a clear integer timeline, and even players of real-time puzzle games and fighting games are often rewarded for frame-perfect timing. Game mechanics at such a discrete level might not be very fine-tuneable nor very simple in the face of concurrency, but at least they're part of a closed system, so the game might make up for these issues using other ad hoc tricks.

Trickiness if we don't?

How is there trickiness if you don't have integral steps? You can always add them with an explicit tick event. I'm pretty unfamiliar with temporal logic, but is there an interesting fragment that abandons Next and sticks to the Future modality?

Keeping fixed points admissible

I was saying "trickiness if we don't" to refer to the trickiness of simulating type-level timestamps in OCaml, but it looks like you don't have timestamps in mind, so never mind about that. :)

I'm pretty unfamiliar with temporal logic, but is there an interesting fragment that abandons Next and sticks to the Future modality?

SEP's article on temporal logic gets pretty far talking about Tense Logic before it starts talking about "next," so there might be interesting references there.

For neelk's purposes, I think sticking to the Future modality would make fixed points inadmissible. That is, if we wanted to treat ('a t) as the Future modality, we might want to add this...

val join : ('a t) t -> 'a t

...but then calling (fix join) would give us something to wait for that never comes.

Fixing fix

Good point. The future future is just the future, but that would break a progress invariant that makes his 'fix' work if we interpreted next as future.

Another way to look at it is it's important there are only finitely many events between any two events, since we don't want to ever invoke a continuity principle to get past a limit point. That would seem to motivate having Next in there somewhere. It just doesn't seem like it belongs in the top level API exposed to programmers. The ability to construct a value that will be an integer in two time steps smells fishy to me.

Sending Events = Generic Dependency Injection.

Thinking about how far you can go with FRP, it occurs to me that the equivalence between receiving an event and a method call on an object is well known. I was wondering how well known the equivalence between sending an event and dependency-injection in generic programming is?

Receiving:

struct event_type { /* event payload */ };

class object1 {
public:
    void operator() (event_type event) { /* receive event */ }
};

Sending:

template <typename send_event>
class object2 {
    send_event send;
public:
    object2(send_event s) : send(s) {}
    void operator() (event_type event) {
        send(event); // forward the event to the injected sender
    }
};

Route event:

object1 handler1;
object2<object1> handler2(handler1); // handler2 -> handler1
event_type my_event;
handler2(my_event);

Applied Duality

Meijer seems to be doing this these days with his start-up. But he seems to be pretty adamant that it isn't FRP, as I understand it, whereas from a distance I can't see the real difference with the framework he sells.

Pragmatic

Less worries about synchrony, space leaks, etc., more about integration into domains (stream processing, UX, various languages). There is rigor, but at the level of types for collections.

Rx is just event streams

Rx is just event streams with composition over that. It doesn't let you peek at the last value, and it doesn't give you continuous signals. It's not very expressive: you can't implement a working drag adapter with it without writing some custom stateful code.

Frankly, I'm sticking with imperative, it can do everything that FRP can, more than FRP can, with less code and more performance.

re: sticking with imperative

Why do people get lured, lulled, into FRP then? What do we have to avoid in imperative to save it? I mean, I'd like to learn how to get purported benefits of FRP in imperative code, so I have more choices and understand further!

managed time with imperative code

Normally, imperative code is implicitly temporal, i.e. in the sense that (syntactically) code is expressed in a temporal ordering between statements (and often between expressions). Sean's work on managed time (cf. this post and others) pushes real-world time into the logic layer, with concurrent imperative nodes instead being run repeatedly within a logical timestep until a stable state is reached.

This mitigates one of the more problematic aspects of reasoning about imperative expression of behavior in a concurrent system, albeit at the cost of making progress and performance more difficult to control and reason about (hence 'managed' time, in the sense of managed space with GC). A lot of that performance could probably be recovered by performing static and dynamic analysis of dataflow dependencies between those nodes.
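
A rough sketch of that execution model (all names hypothetical; this is the shape of "run until stable", not Glitch's actual implementation):

type state = (string, int) Hashtbl.t

(* run all tasks against shared state, repeatedly, within one logical
   timestep, until a full pass changes nothing ("stable state") *)
let run_to_fixpoint (tasks : (state -> unit) list) (st : state) =
  let snapshot () =
    List.sort compare (Hashtbl.fold (fun k v acc -> (k, v) :: acc) st [])
  in
  let rec loop () =
    let before = snapshot () in
    List.iter (fun task -> task st) tasks;  (* effects must commute: order should not matter *)
    if snapshot () <> before then loop ()   (* re-run until no task changes anything *)
  in
  loop ()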

Because what is out there

Because what is out there for reactive programming is quite bad.

As David mentioned, when I say I'm sticking with imperative, I have my own thing going on that grew out of my own FRP-inspired research path. The FRP people showed me how to abstract over time; it just isn't limited to functional languages.

Simple solutions like React and immediate-mode UIs also help solve parts of the reactive programming problem. But I think Elm, at least, has shown us that there is a mainstream market for real FRP.

I don't know that it's fair

I don't know that it's fair to refer to your managed time approach as imperative. Especially without an asterisk.

I think you're right that imperative style is a good interface for reactivity, though. One idea I've been kicking around for a while is a push-to-pull conversion that semantically gathers up state access and event signals into definitions for those things. So the value of a piece of state at any given time is equal to the sum of the deltas applied up to that time. This gives a definition for state that is an honest-to-goodness function of the time index but with a closed set of updates.

Openness can be recovered when needed by connecting to a process that manipulates state. If you want to dump everything into a big sequential process, then you recover the ordinary imperative model, but if you organize things more carefully into islands of state, then you can get something more reactive and less entangled.
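
For example, the "sum of the deltas" view might be sketched like this (illustrative types only):

type delta = Add of int | Reset

let apply st = function
  | Add n -> st + n
  | Reset -> 0

(* state as an honest function of the time index: fold the closed
   set of timestamped deltas applied up to time t
   (assumes deltas are sorted by time) *)
let state_at (deltas : (int * delta) list) (t : int) : int =
  List.fold_left
    (fun st (t', d) -> if t' <= t then apply st d else st)
    0 deltas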

I also suspect that reactive is going to be increasingly big in mainstream programming.

Imperative is typically

Imperative is typically thought of as being related to the existence of side effects, which is then unfairly coupled to the notion of sequential execution. You can have side effects without sequential execution, which is basically how Glitch/VTM work. But I definitely put it into the imperative basket, as that is the style of programming that manifests (non-sequential imperative).

One problem with FRP that I've never been able to get over: look at Yampa for how collections must be handled. It is really cool, but at some point you might just want to add or remove something from a collection and call it a day. You can do that in some FRP languages, but this is mostly foreign to the FRP interface, whereas in VTM, it is a "native operation" that is maintained continuously (if x: set.insert(foo) works as expected). I wish FRP researchers would spend more time on the collection problem, however; we need abstraction over space as well as time!
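
A toy sketch of the difference (a hypothetical encoding, not VTM's actual API): a "maintained" insert is a rule re-evaluated every logical step, so membership tracks the condition rather than being a one-shot mutation.

module S = Set.Make (String)

(* each rule: while cond () holds, elt is a member; when cond ()
   turns false, the insertion is effectively retracted *)
let step (rules : ((unit -> bool) * string) list) : S.t =
  List.fold_left
    (fun acc (cond, elt) -> if cond () then S.add elt acc else acc)
    S.empty rules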

(Incidentally, anyone know how collections are handled in Elm?)

It sounds like you are trying to discretize state updates. I'm interested in doing something like this in the context of DNN training, but with an emphasis on incremental recomputation of a training run (if that is possible).

ML

ML has side effects; is it therefore imperative? I thought imperative means "a command" (time to re-watch the Monty Python "Romans go home" sketch). Therefore an imperative language is one that consists of a sequence of commands, rather than an expression:
An expression:

A = B + C * D

Commands

Multiply C and D!
Add B to the answer!
Store the answer in A!

ML of course supports

ML of course supports imperative programming, as does Scheme, as does even Haskell with monads.

Imperative programming is a style, most languages support it even if they aren't considered imperative languages otherwise.

In 'C' ';' is bind

In the same way, you can write a functional program in 'C'. You can even consider 'C' to be in the IO monad, with semicolon being bind and unit being return.

Of course I think most people, when referring to a language as supporting feature X, mean it is easy and natural to do X in the language, not merely that it is possible. Hence I would not call 'C' a functional language, nor call ML and Haskell imperative.

For me, imperative is about a sequence of commands; side effects are about non-local state changes. I think they are separate things. I think you could design an imperative language with no side effects, and a functional language with side effects.

For example, consider a functional language with an "exchange" function that has storage for a single value: it returns the value currently in the store and replaces it with the argument. As in:

let f = exchange 3 in ...

I guess you're looking at it

I guess you're looking at it as "imperative" because it consists of a set of "do this" instructions, just not a sequence of such instructions. That seems fair. I guess around here you can expect people to be familiar with your approach, but I'd include an asterisk in polite company.

A Google search for VTM FRP turns up Vampire: The Masquerade, a fantasy role-playing game. I'm guessing that's not what you meant.

It looks like Elm uses first class Identities to track where the elements have gone. See here.

I'm sticking with a discrete update model. Is that all you mean by 'discretize state updates'? You lost me at the jump to DNNs.

VTM is what I'm calling

VTM is what I'm calling Glitch now. It stands for Virtual Time Machine, a nod to a previous project from the 80s by Fujimoto related to Time Warp.

There seems to be some connection between reactive programming with state and neural network training. At least, I find lots of similarities. It's just that "t" in that world is part of a long training process.

Bayesian Networks

Neural networks store information spatially, but you can extract the data in a trained network and express the same information as a Bayesian decision tree. Is an FRP program a Bayesian model? I guess you could imagine an FRP pinball machine might be?