Is Rx FRP?

There is some talk on Hacker News that functional reactive programming should be defined as programming that is reactive and functional. Such "FRP" would then include event-streaming dataflow programming systems like Rx. Is the dilution in terminology worth it, and I'm just being pedantic, or is there some genuine confusion in hacker land?

Paradigm dilution

I would say this is an example of 'genuine confusion in hacker land', though the author in this case did make a later correction (via comment).

FRP in its original formulation is purely functional - i.e. determinism, referential transparency, equational reasoning, and all that. FRP is essentially a deterministic concurrency model, i.e. any two signals or event streams are inherently concurrent, and may be processed concurrently. FRP can model stateful processes via accumulator or integral constructs (similar to 'scan' in the tutorial).
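
For concreteness, a toy sketch of such an accumulator in Haskell, over the classical model of an event stream as a time-ordered list of occurrences (the names here are illustrative, not any particular library's API):

    -- Toy model: an event stream as a time-ordered list of occurrences.
    type Time = Double
    type Event a = [(Time, a)]

    -- Accumulate state across occurrences, 'scan'-style: each occurrence
    -- applies an update function to the running state.
    accum :: s -> (a -> s -> s) -> Event a -> Event s
    accum _ _ [] = []
    accum s f ((t, a) : rest) = let s' = f a s in (t, s') : accum s' f rest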

These hybrid functional-imperative approaches to reactive programming, such as Rx, tend in practice to result in very different programming styles and architectures than pure FRP systems. On one hand, there are easy escape hatches to gather ad-hoc input or emit output. On the other, we lose a lot of easy reasoning about concurrent behavior. It is not clear to me that the dilution of terminology serves anyone. One could make a reasonable case for the reactive functional/imperative hybrid without hijacking the FRP brand.

The author's correction

The author's correction isn't that meaningful if the community has already bought it and doesn't care. We will now have a plethora of functional reactive programming articles on something that isn't FRP. Yeah. The FRP community is probably going to have to become more visible (either by being more vocal, or through a release with mass adoption), or they are going to have to settle for dilution (and/or find a new name).

It took me a few years to understand what Conal was saying about FRP, but I think I get it now. Real continuous time continues to blow my mind; why don't we have real continuous space to go along with it? I still think fake continuous time (i.e. push-based solutions) should qualify as FRP (Flapjax is FRP as far as I'm concerned), but removing continuous behavior as an abstraction is going a bit too far.

continuous space, push-based

Much of Conal's work on Fran (and its non-reactive predecessor, Pan) utilized continuous space, e.g. you'd model an image as (ℝ×ℝ)→Color (or more generally replacing Color by a monoid). The (ℝ×ℝ) domain can be understood as a continuous space. There are a lot of good arguments for continuous space models, i.e. as opposed to working with a raster grid. But most of this is orthogonal to the whole process and concurrency model; the model of time is more relevant to general purpose programming.
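
A rough sketch of that model in Haskell (illustrative, not Fran's or Pan's actual API):

    -- An image as a function over continuous 2D space.
    type Image c = (Double, Double) -> c

    -- A disc of radius r, with Bool standing in for the color monoid.
    disc :: Double -> Image Bool
    disc r (x, y) = x*x + y*y <= r*r

    -- Spatial transforms are just function composition on coordinates.
    translate :: (Double, Double) -> Image c -> Image c
    translate (dx, dy) img (x, y) = img (x - dx, y - dy)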

push-based solutions should qualify as FRP

Pushing at the semantic layer - e.g. pushing an image to the screen, or causing an alert, or registering a service - is not something I'd consider FRP. Rather, the inability to push data might be considered a distinguishing weakness of FRP. Sometimes, especially for non-invasive extension of a system or integration in open systems, pushing is the right answer.

Pushing at the runtime or implementation layer, i.e. for incremental update, is orthogonal to FRP. Much like caching, or parallelism. Granted, if you push update notifications, you might also need to do a little rewinding to accommodate late-arriving updates.

I meant: those languages

I meant: those languages that leverage notifications at discrete time points rather than sampling at discrete time points. Semantically, they are quite similar, with differences in expressiveness that most programmers won't consider until they reach an advanced level of enlightenment. In that sense, many won't be able to tell the difference between real and fake FRP until they try something like integration.

Isn't rasterizing based on sampling? Perhaps one could describe an FRP runtime as a rasterizer for time. Most programs still need support for discrete events though.

FRP events; rasterization of time

Most programs still need support for discrete events though.

Most FRP models have support for events, e.g. a time-ordered stream of (Time,Value) pairs to model flexible asynchronous behavior.
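
In toy Haskell form (an illustrative representation, not a specific library):

    -- An event stream as a time-ordered list of (Time, value) pairs.
    type Time = Double
    type Event a = [(Time, a)]

    -- Merge two streams, preserving time order (left-biased on ties).
    merge :: Event a -> Event a -> Event a
    merge [] ys = ys
    merge xs [] = xs
    merge ((t1, a) : xs) ((t2, b) : ys)
      | t1 <= t2  = (t1, a) : merge xs ((t2, b) : ys)
      | otherwise = (t2, b) : merge ((t1, a) : xs) ys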

I don't believe discrete events are necessary to solve any problems except those we created by using discrete events in the first place.

FRP runtime as a rasterizer for time

I'd consider a display device to be a rasterizer for time, i.e. each 'frame' as a discrete point in logical time. Some audio output models are also rasterizers, e.g. requiring a waveform sampled at 44kHz.

I suppose you could use this notion to implement FRP runtimes.

I believe sampling is

I believe sampling is already used in pull-based FRP implementations, right?

Sampling is certainly used

Sampling is certainly used at the edges, e.g. to pull out video frames or sound or control outputs for a robot.

I don't believe discrete

I don't believe discrete events are necessary to solve any problems except those we created by using discrete events in the first place.

I once thought the same, but I couldn't find a way around users wanting to push buttons. But I prefer to abstract over events if possible, rather than turn EVERYTHING into an event stream as in Rx.

Pushing buttons

Pushing buttons is an excellent place for a state signal abstraction - i.e. at any given time, a button is up or down. And there are approaches to accumulating characters in a text buffer or similar without event-based communication.

Pushing buttons is the most common example of "why we need events", but it isn't a very convincing one. Eventful buttons don't generalize well to joysticks, toggles, knobs, analog buttons, gesture input, and other user input. Signal models generalize very nicely to user input. Including buttons.

Better arguments for events include integration with operating systems and message-based services - which tend to fall in the category of 'problems created by using discrete events in the first place'. If events seem like the best or only effective way to model buttons, that's probably the fault of an API somewhere rather than intrinsic to the nature of buttons.
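
As a toy illustration of the signal view (a sampled model with invented names, not any particular system's API), even a "press count" falls out of the state signal without exposing an event stream:

    -- A signal as one sample per tick (toy discrete model).
    type Signal a = [a]

    -- Derive a running press count from a boolean button signal:
    -- count the rising edges, never materializing an event stream.
    pressCount :: Signal Bool -> Signal Int
    pressCount s = scanl1 (+) (zipWith rising (False : s) s)
      where rising prev now = if not prev && now then 1 else 0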

Most button events that we

Most button events that we care about are discrete and don't represent continuous states (e.g. toggle vs. normal buttons, key pressed vs. key up and down).

I definitely need to handle key pressed events when plugging into my incremental lexer; Glitch doesn't help out very much at all.

the events we care about

Most button events that we care about are discrete and don't represent continuous states

You've been using the word 'need'; in any formal sense, you don't need events. The information you care about is available with continuous states, and not difficult to access with an appropriate state model (e.g. reactive state transition or term rewriting, or various temporal logics). Whether events seem more convenient for your use case is a separate issue.

In a general sense, event models can be highly specialized - i.e. for every state model, there are many eventful update models, and thus more likely one aligned with what you care about. This is both a strength and a weakness of events relative to continuous state.

I disagree: we can abstract

I disagree: we can abstract over events in certain cases (e.g. as input driving state machines), but these abstractions can never completely cover what we want to do with events. At some level of generality, a discrete event abstraction becomes a requirement.

abstracting over events?

Events don't have an inherent existence such that we're fundamentally forced to 'abstract over events'. Rather, events are themselves an abstraction over observation and influence, representing state changes and updates that we presumably care about.

If you don't assume event-based communications in the first place, e.g. in popular APIs or toy problems formulated in terms of eventful models, then you won't need to abstract over them. Apart from integration with unnecessarily event-based systems, I'm not aware of any real world problem that truly requires event-based communications to solve completely. Really. Eventless abstractions are Turing powerful, and work well from edge to edge - sensors, state, actuators, UI, and even distributed systems.

Granted, in some abstract, implicit sense you could still choose to perceive important state transitions as 'events'. But inferring such events isn't necessary. We don't need to express programs, services, and systems in terms of event-based communications, e.g. event streams, messages, event-driven state manipulation.

Though, if you have a solid(*) counter example, I'd love to see it.

(*) not amounting to "I only know eventful idioms" or "I've assumed I need events"

Button pushing

When a button is pushed: open a window, increment a counter, send an email, undo an action, do one step of physical simulation.

Events really are fundamental parts of how we reason about time, even perhaps in how time can be understood at all. If even physicists can't hide them, then I don't think we can either.

State change

To clarify: I define 'event' in a broad but specific way, as an explicit representation of instantaneous change. Thus, message-based communications are eventful by nature, as are event streams. Imperative state manipulation is also very eventful, with each assignment statement qualifying as an event.

But state change is not inherently eventful. We can model state change - e.g. increment counters, open windows, compute physical simulations - using only event-free abstractions and expressions. To manipulate state without events, we need non-imperative state models. Those exist, though they're somewhat esoteric by modern programming conventions.

Sending an e-mail is eventful by nature. If we integrate with an e-mail system, we'll need at some point to model events. OTOH, if we take the underlying problem - asynchronous communication between agents (including humans) in a distributed system - there are event-free ways to solve it.

Events, at least as I define them, aren't essential for reasoning about time.

Our definitions differ: what

Our definitions differ: what you refer to as an event I refer to as an action. My events are just points on the line of time; whatever happens because of the event (the instantaneous change) has to hook into them.

Imperative state manipulation doesn't have to be eventful, actually; I hope I show that in Glitch. You can assign to a cell in either a continuous or a discrete context. The only difference is that in the discrete context, you have access to the past state configuration in defining the future state configuration, and the cell assignment will persist until superseded by an assignment from another event.

Avoiding events in control flow essentially means abstracting over time, exchanging one set of problems for another set of problems. The event still exists; it's just been converted into data in a timeless execution model. Such models are not only esoteric by modern programming conventions, they are also hella hard to deal with (we can only take so much indirection before our heads explode).

Alternatively, you can try to have the computer infer all event processing needed for your goals. E.g. you can say "I need to know how many times this button was pushed" and the computer could set up the logic to ensure that the question can be answered. We are pretty far away from being able to realize such higher level declarative programming models, however, and you have to move up to higher levels to get rid of the eventful language ("how many times").

what you refer to as event I

what you refer to as event I refer to as an action

Events don't imply action; they're just representations of instantaneous change, and might represent observations as easily as commands. Processing events might lead to actions.

Avoiding events in control flow essentially means abstracting over time, exchanging one set of problems for another set of problems.

True. We should pick our problems carefully.

Alternatively, you can try to have the computer infer all event processing needed for your goals.

Well, there are a lot of other alternatives. You're skipping some very fertile middle ground.

Events don't imply action;

Events don't imply action; they're just representations of instantaneous change, and might represent observations as easily as commands. Processing events might lead to actions.

I'm focusing on the fact that events occur at specific points in space time. Doing things at those points is basically "reacting" to the event, and is actually what reactive programming is in general.

Well, there are a lot of other alternatives. You're skipping some very fertile middle ground.

It is a gradient, as declarative programming is in general. Abstracting over events, when possible, represents a huge win in productivity; but sometimes you need to write code that is exposed to them.

avoiding is not 'abstracting over'

I don't suggest abstracting over events. I suggest avoiding them entirely. Instead of computing over representations of state-change (events), compute over representations of state (such as signals). Instead of using events to manipulate state, use state to manipulate state. To the extent you avoid modeling events in the first place, you won't need to abstract over them.

For robustness and extensibility purposes, it can also be useful that events be explicitly represented using state and signals, e.g. using a timeline or event log. But it's still better to avoid them.

events occur at specific points in space time

But not everything that occurs at a specific point in space time is an event, at least not in a manner relevant to the programming experience. To meaningfully have an event, you must additionally represent the occurrence - e.g. as a value, message, or object.

"reacting" to the event [..] is actually what reactive programming is in general

"Reacting to events" is plainly a specialization of reactive programming. "Reacting to (concept)" would be the generalization. Obviously. :)

I think this is the core of

I think this is the core of our disagreement.

In SuperGlue (ECOOP '06) I found a great improvement in productivity when events weren't used. As long as the logic was continuous, everything was great and dandy. But I also found there were too many things that couldn't be expressed. Event handling simply couldn't be avoided without severe expressiveness consequences.

Put another way: Glitch includes support for event handling, but the value of Glitch is in not using it as much as possible (like using signals rather than event streams). Glitch isn't a great system for dealing with events, but it's necessary to make the system "complete." It is also why I think systems like Rx are regressive: they put events back in the spotlight when we really should be hiding them.

But not everything that occurs at a specific point in space time is an event, at least not in a manner relevant to the programming experience. To meaningfully have an event, you must additionally represent the occurrence - e.g. as a value, message, object, thread.

For my definition of events, the point in time is basically the event, in that there is a one-to-one mapping (we can assume that anything that is done at the same time is done because of the same event; we can enforce this with time stamps). Perhaps I should just use "point in time", but they seem synonymous.

"Reacting to events" is plainly a specialization of reactive programming. "Reacting to (concept)" would be the generalization. Obviously. :)

Well, we are actually reacting to the changes that are results of the event. Some event must have initiated the change (unless we are talking about real FRP, where change could be driven by a continuous function like a sine wave).

Indeed

Indeed, we are born by one event, and we die by another. Modeling life and death as the states of a system misses everything that matters about it.

'Poetic' is not a property

'Poetic' is not a property by which we judge the quality of programming systems.

OOP is quite naturalistic,

OOP is quite naturalistic, actually, which seems to be its advantage.

OOP is misleading

OOP is misleading. It does offer some pretense of being natural or intuitive, which makes it attractive. But, even at moderate scales, you'll be needing not-so-natural discipline and design patterns to make effective use of OOP.

In any case, nobody has ever made a strong case that features we describe as 'natural' are inherently 'good for programming'. That just gets assumed.

Natural features (that we

Natural features (that we are already familiar with via language) are not good for programming, just good for programmers. We sometimes forget that a person is involved in the process and focus on just the code.

Poetry is an example.

computer herbalism

You're just asserting this to be true. That isn't very scientific or logical. I don't forget that programming is a human activity. But there are many things good for programmers that don't exactly show up in nature (including computers).

The 'OOP is naturalistic' argument is the Computer Science analog to herbalism and naturopathy.

Human language, thought, and

Human language, thought, and perception is a very human friendly basis for programming. It's not herbalism and naturopathy; OOP isn't so popular because of good marketing, but because people just like it better. That FP meets so much resistance is because most of us just don't have such a great relationship with math. You can argue that humans are just being selfish here, and avoiding something that is better for something that is easier to grok; that is a completely valid argument. But comparing OOP naturalism to pseudoscience isn't useful (if your goal is to influence people).

The reason I can't grok your work well... it is just based on all these concepts that I can't really compare to real things and processes. What is term rewriting in my day to day life?

Human language, thought, and

Human language, thought, and perception is a very human friendly basis for programming.

How would you prove this claim? What do you mean by 'human friendly' in an objective sense? (Does your definition account for effectiveness? for social and teamwork effects?) Why should I give your claim any more credibility than: "Human motion - e.g. running, jumping, eating, and fornicating - is a very human friendly basis for space flight."

Repeating your position isn't convincing to me.

The reason I can't grok your work well... it is just based on all these concepts that I can't really compare to real things and processes.

Associating new knowledge with familiar concepts can be useful for learning or teaching. Unfortunately, you seem to have perverted this, twisting it around to say that familiar concepts are inherently effective bases for new knowledge.

What is term rewriting in my day to day life?

Simple stateful rules, like taking out the trash when the can is full.

In the context of reactive term rewriting, I'd not compute an event like "it's time to take out the trash". But I might temporarily change the rules from "take out the trash when the can is full" to "take out the trash when the can is not empty". And I could potentially view the state of the can as a continuous signal.

As always: Somewhere in the middle

As always, the "correct" answer probably lies somewhere in the middle.

I agree with dmbarbour that everything *can* be modeled as signals, and it's not even that much of a stretch.

However, I also agree with Sean McDirmid that some things are more naturally modeled as events.

The fact is, events have weaknesses that are addressed by signals; however, sometimes those weaknesses are fundamental to the domain, in either the programmer's or the user's mind. I made this same argument in a comment on David's excellent Why Not Events post, but will rehash it here.

Take Photoshop for example. There are many discrete, instantaneous operations that incorporate the existing application state into the user's mental model. For example, there's a "resize image" operation that can be parameterized by an absolute or relative size. There's also a "do that again" command, which re-issues the most recent command on the undo stack. "Resize by 50%", then "do that again", will result in an image 25% of the original size.

Now, imagine you had to use Photoshop by proxy: you're dictating instructions to somebody else who is driving the keyboard and mouse. Barring any visual scan time, it's pretty effective to tell your proxy, "please resize the image by 50%". As a discrete action, it's very easy to reify and transmit the message. However, now consider trying to paint some strokes via proxy. It'd be virtually impossible. There's just too much information to transmit verbally. You'd have to grab ahold of your proxy's elbow and move their wrist around with continuous effort.

What happens if you ask Photoshop to "do that again" for a given paintbrush stroke? Photoshop arbitrarily delimits an individual mouse down/drag/up time period as an "event" for the sake of "do that again". In order to accomplish "undo", you need to discretize the continuous mouse signal.

In short, you need both events and signals, depending on the context. Rather than trumpeting one idea to the exclusion of the other, we should be searching for a way to unify the concepts productively. By identifying where each approach is respectively strong or weak, we will be better equipped to address a wider range of problems.

So you're saying

So you're saying AND is more natural than OR?

Natural language has imperative and declarative moods, conjunctions, disjunctions, continuations, modal logic, etc. All would seem to have their place. What seems unnatural to me is the apparent need to unconditionally reject any or all of these.

I disagree

Admittedly, in most cases what you want from an input stream representing (for example) a mouse button are the points in time at which the physical state of the button changed. However, consider click / drag. This involves a synthetic event, "mouse button has been held down long enough for mouse movement to be considered as dragging", and implementation of those synthetic events is horrible.

Continuous states are just as usable an abstraction for "physical" state changes as events are, and handle synthetic events cleanly. It seems a cleaner and more generic abstraction overall, at least to me.
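
For illustration, a toy sampled-signal sketch of deriving a "dragging" state from continuous button and position signals (invented names and representation):

    -- A signal as one sample per tick (toy discrete model).
    type Signal a = [a]

    -- dragging = button held down and pointer moved beyond a threshold
    -- since the press began; no synthetic event is ever constructed.
    dragging :: Double -> Signal Bool -> Signal (Double, Double) -> Signal Bool
    dragging eps downs positions = go Nothing (zip downs positions)
      where
        go _ [] = []
        go anchor ((down, p) : rest)
          | not down  = False : go Nothing rest
          | otherwise =
              let a = maybe p id anchor            -- anchor at the press point
              in (dist a p > eps) : go (Just a) rest
        dist (x1, y1) (x2, y2) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)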

At a high level, click and

At a high level, click and drag can appear as a continuous abstraction, but the abstraction must be implemented at some level, and those details will inevitably involve events.

and those details will

and those details will inevitably involve events

Those details do not need to involve eventful communication, nor event-driven computation, nor any explicit abstraction of events. They do involve state. If any model for discrete state change is implicitly an 'event' in your mind, so be it, but then you may have a pointlessly broad definition of 'event'.

If any model for discrete

If any model for discrete state change is implicitly an 'event' in your mind, so be it, but then you may have a pointlessly broad definition of 'event'.

Actually, they involve discrete state changes; you can definitely have state in an event-free program. Someone has to program those, even if higher-level abstractions can be provided up the abstraction tower. It's not clear (and I'm pessimistic) whether there is a finite set of higher-level abstractions that can hide all event processing and still allow us to express everything that we want to express.

discrete state change

discrete state changes [..] Someone has to program those

Sure. There are ways to program discrete state changes without using events.

And regarding your pessimism: we can express everything we want to express using a simple non-imperative state model that I call reactive term rewriting. In this model, instead of explicitly updating state, you model a time-varying set of rewrite rules for a specific state resource, and simply converge on a (potentially) new state (i.e. computing a fixpoint) after every change in the rewrite rules set. There are other non-imperative state models with similar expressiveness; e.g. using Dedalus Datalog as an alternative to term rewriting.

If anything, these models are too expressive for casual use. But it's nice to know they're available as fallback resources when developing programs in an event-free language.
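
As a toy sketch of the general shape (the representation here is deliberately simplified; a real implementation would differ):

    -- A rewrite rule: one step, if it applies.
    type Rule t = t -> Maybe t

    -- Converge on a fixpoint of the current rule set
    -- (assuming the rule set is terminating).
    converge :: [Rule t] -> t -> t
    converge rules s = case firstMatch rules of
        Nothing -> s                      -- no rule fires: fixpoint reached
        Just s' -> converge rules s'
      where
        firstMatch []       = Nothing
        firstMatch (r : rs) = case r s of
                                Nothing -> firstMatch rs
                                hit     -> hit

    -- A time-varying rule set induces a time-varying state: after each
    -- change to the rules, re-converge from the current state.
    react :: t -> [[Rule t]] -> [t]
    react s0 ruleSets = scanl (flip converge) s0 ruleSets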

I'd like to know more, I

I'd like to know more; I can't really argue with what I don't understand very well :) You really need to write an Onward paper or do something to get your ideas out there in a focused/polished format. I'm happy to help if you ever want to try.

Rx has event streams, which

Rx has event streams, which FRP has too. That's about where the similarity ends. A core construct of FRP is time-varying values (called behaviors/signals). Rx doesn't have that. Rx also isn't glitch-free, i.e. the operators do not have a deterministic time semantics. That's another core principle of FRP that Rx doesn't have.

IMO, if you can't do A = B + C where the values vary in time, then you aren't FRP.
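
In sketch form (illustrative types, not Rx's or any particular library's API), this is the kind of thing meant by A = B + C over time-varying values:

    -- A behavior: a value at every point in time.
    type Time = Double
    newtype Behavior a = Behavior (Time -> a)

    at :: Behavior a -> Time -> a
    at (Behavior f) = f

    -- Pointwise lifting: a = b + c, with all three varying in time.
    addB :: Num a => Behavior a -> Behavior a -> Behavior a
    addB b c = Behavior (\t -> at b t + at c t)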

For the first point, Rx has

For the first point, Rx provides replay buffers. Different-but-similar.

The glitchy behavior gets confusing, but more concerning for us, the default lazy semantics has been leading to coordination bugs as well as memory and speed bugs.

Event streams are FRP, too

A behavior is a stream:

  B(X) ≜ να. X × •α

This is a constructive version of the "always" modality from temporal logic. It says it can give you a value of (evidence for) type X on the current step, and then give you another behavior you can use on the next timestep, thereby giving you an X on every tick. There's also a second constructive version of always, which I write □X. This says that you can produce a value, which is always a proof of X. Classically, these notions are equivalent, but they separate constructively.

An event is a promise – it's a value that will potentially become available at some point in the future. There are two ways of modelling it:

  E(X) ≜ μα. X + •α
  ◇(X) ≜ □(X → p) → p

The first is a computation that either gives you an X, or tells you to check back on the next time step. This corresponds to a promise which you can test for completion — that is, this is a promise API that lets clients ask if the promise is finished, or still running.

The second type of promise is a callback-style API. It asks for a callback function that can be invoked at any time in the future to return an answer, and then returns an answer. (The answer type is the type of computations which respect the event loop's invariants.) In this case, you can't ask whether a callback has fired yet or not. Both of these correspond to encodings of "eventually" in temporal logic. Classically, they're equivalent, but again constructively they're not.

An event stream is something that periodically gives you values. You can encode it as:

   EventStream(X) ≜ να. ◇(X × •α)

This says you can wait a while, and then you'll get a value, and the rest of the event stream, and so on ad infinitum.
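
Concretely, here's a rough Haskell shadow of these types. The • modality becomes a bare constructor and ◇ a plain CPS callback; plain Haskell can't enforce the temporal discipline, so this captures only the shape, not the semantics:

    {-# LANGUAGE RankNTypes #-}
    newtype Next a = Next a                         -- •a: available next tick
    data    B a    = BCons a (Next (B a))           -- B(X) ≜ να. X × •α
    data    E a    = ENow a | ELater (Next (E a))   -- E(X) ≜ μα. X + •α
    newtype Dia a  = Dia (forall p. (a -> p) -> p)  -- ◇X, with the □ elided
    newtype ES a   = ES (Dia (a, Next (ES a)))      -- EventStream(X) ≜ να. ◇(X × •α)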

Because all of these are very natural constructions in temporal logic, IMO they are all natural parts of FRP. As usual, the point isn't to forbid programmers from doing things, but rather to help structure their doings.

Fewer weasel words

The second type of promise is a callback-style API.

It sounds like you're not hedging as much as the last time you posted about this. Does that mean you've gotten the details sorted out? Any chance they'll appear in a paper soon?

That's very interesting, but

That's very interesting, but also quite different from FRP in the sense of FrTime/Flapjax/Froc/etc. Let's call those 'pragmatist FRP'. It looks much more like synchronous dataflow than pragmatist FRP. One of the main features of pragmatist FRP is that the time it takes to re-evaluate in response to an event is proportional to the number of nodes that need changing. The main practical application is GUI programming. The GUI toolkit calls one of your callbacks, which triggers an event in the dataflow network, then the FRP update algorithm does its work, and that eventually results in a set of updates to the GUI widget tree. Do you have a way to handle that efficiently in this framework?

Yes, you could call this

Yes, you could call this style higher-order synchronous dataflow, if you like. However, I originally used the "pragmatist" implementation strategy (i.e., using a change-propagation framework) in my first paper on FRP (Ultrametric Semantics of Reactive Programs), but I don't bother with it any more.

The first reason is that doing change propagation is really hard on the garbage collector, since each node needs to track both what it reads, and what reads it. This means that either (a) you end up with a big cyclic graph and nothing can get GC'd, or (b) you use a lot of weak references and add a lot of overhead to each collection.

Furthermore, in practice I find limited benefits to doing it, because typically you end up in one of three cases -- (a) the recomputation is trivial, (b) you have a fresh expensive computation, or (c) a simple test in the recursive construction of a stream can detect whether you need to recompute. In any of these cases, change propagation just adds overhead without benefit. It's only when you are doing redundant expensive recomputations which are both statically hard to predict and easy to detect dynamically that full-blown (i.e., anything more than memoization) change propagation helps -- and that doesn't happen much.

IMO the real reason FRP libraries end up deploying this kind of machinery anyway is primarily error handling -- you want to detect glitches, cyclic dependencies, and so on and report sensible errors to the user. But typing can rule those out statically, and so I think none of that machinery is actually necessary.

It would still be interesting to completely decouple the reactive stuff from change propagation (like in self-adjusting computation), but to support both. Then we could let programmers tell the computer precisely the places where they want change propagation to happen, and have it occur nowhere else.

I don't think it's easy to

I don't think it's easy to avoid expensive recomputation with the simple model. For example look at mapping a function over a behavior. How do you avoid recomputation of the function if the behavior's value didn't change? You could do it by changing the model of course (e.g. represent a behavior by a stream of changes rather than a stream of values at any point in time), but with the simple definition of behaviors I don't think it's possible.

However, if I understand the model correctly, there is a more serious and harder to solve source of inefficiency. The thing is, even if recomputation of each individual node was completely free, just inspecting the whole data flow graph would be too expensive. Suppose you have an application with 100 text boxes, and each text value represented as a behavior. If the value of one text box is changed, you need to advance all 100 streams by one element even though only one is changed. Every node will be touched on each event that the application receives. Consider for example that an event coming from a timer (e.g. to control an animation, which may run every few milliseconds) will also cause a time tick. With change propagation, the 99 other behaviors wouldn't even be touched by the change propagation algorithm.

Is that correct? Do you have a plan to make that more efficient?

I don't think it's easy to

I don't think it's easy to avoid expensive recomputation with the simple model. For example look at mapping a function over a behavior. How do you avoid recomputation of the function if the behavior's value didn't change? You could do it by changing the model of course (e.g. represent a behavior by a stream of changes rather than a stream of values at any point in time), but with the simple definition of behaviors I don't think it's possible.

You can define a new version of map, if you like. For simplicity, I'll assume that the elements of streams/behaviors are "temporally portable" and then suppress the annotations you need to make it easy to check.

   map' f (x :: xs) x_old v_old =
      if x = x_old then
        v_old :: (map' f xs x_old v_old)
      else
        let v = f x in
        v :: (map' f xs x v)

   map f (x :: xs) =
     let v = f x in
     v :: map' f xs x v

Now you have a version of map which only calls f when it gets a new value on the next tick. Note that this is the same function you'd write if you wanted to write a map function in ML or Haskell which avoided calling f on repeated runs. Just as in regular FP, it turns out this is only occasionally useful, though you can still define it if you want it.

However, if I understand the model correctly, there is a more serious and harder to solve source of inefficiency. The thing is, even if recomputation of each individual node was completely free, just inspecting the whole data flow graph would be too expensive.

Suppose you have an application with 100 text boxes, and each text value represented as a behavior. If the value of one text box is changed, you need to advance all 100 streams by one element even though only one is changed. [...] Is that correct? Do you have a plan to make that more efficient?

You're entirely right about how synchronous stream programming works. However, for things like this, the idea is that you're not restricted to streams -- you can use events/callbacks instead (what I called the diamond type above), or even asynchronous streams, or whatever you need. You've still got a pretty logical formalism behind it and a sane operational/implementation story.

For that map you need an

For that map you need an equality check, so it's a little different. With pragmatic FRP you know a priori that a value doesn't change simply because nothing caused it to change, so no check is necessary to determine that. You could model that in your system as B(X) ≜ να. Option X × •α where None indicates that the value didn't change, and Some x indicates that it changed to x.

However, for things like this, the idea is that you're not restricted to streams -- you can use events/callbacks instead (what I called the diamond type above), or even asynchronous streams, or whatever you need. You've still got a pretty logical formalism behind it and a sane operational/implementation story.

I have a hard time understanding how it's different from normal callbacks and what exactly the advantages are. Is there a description of how events/callbacks would solve the efficiency problem? How would you build a GUI application on this where the time to react to an input event is proportional to the changes rather than to the entire dataflow network, and what are the advantages that the implementation story gives you? Or maybe you didn't intend it to be applicable to GUI programming, but to some other area?

For that map you need an

For that map you need an equality check, so it's a little different. With pragmatic FRP you know a priori that a value doesn't change simply because nothing caused it to change, so no check is necessary to determine that. You could model that in your system as B(X) ≜ να. Option X × •α where None indicates that the value didn't change, and Some x indicates that it changed to x.

Ah, I misunderstood what you were after. In this case, you probably don't want to pass change signals (i.e., use an option type), because that still involves waking up on every tick. If you want a stream which is completely quiet until something happens, then you want the event stream type να. ◇(X × •α). This gives you a sequence of X's at unpredictable intervals, with a guarantee that you get at most one X on each tick. Since diamond is implemented as a callback, nothing happens until an event actually occurs.

I have a hard time understanding how it's different from normal callbacks and what exactly the advantages are. Is there a description of how events/callbacks would solve the efficiency problem? How would you build a GUI application on this where the time to react to an input event is proportional to the changes rather than to the entire dataflow network, and what are the advantages that the implementation story gives you? Or maybe you didn't intend it to be applicable to GUI programming, but to some other area?

You're having a hard time seeing the difference because there isn't one, at least from a pure implementation view. I really am describing nothing but plain callbacks (i.e., the observer pattern). The difference lies not in how things are implemented, but in how the type system controls their usage. The type system I have in mind is structured so that (a) you can do a lot of equational reasoning on programs, (b) the temporal behavior of your program can be read off the types, and (c) callbacks (i.e., an asynchronous style of programming) integrate well with streams (a synchronous style).

I'm not proposing any new implementation strategies at all. Instead, I'm suggesting that if we codify engineering lore about how to build interactive programs, we end up with a functional language whose type system is temporal logic, but whose implementation looks like the sort of programs we have already been writing. It's sort of the opposite of Molière's Monsieur Jourdain, who was surprised to learn he had spoken in prose without realizing it. Here, it's more like we've been speaking in rhymed trochaic tetrameter without realizing it --- a rather more startling coincidence...!

Are you sure?

For graphics, regardless of FRP, it's ultimately required to detect changes to vertex data and shading attributes in order to avoid unnecessary drawing - not to mention frustum culling, occlusion culling, state sorting, etc. Regenerating the frame buffer on every tick will quickly heat up your machine and drain your battery.

Change propagation frameworks have been the norm in this area, e.g. the Maya Dependency Graph.

For graphics regardless of

For graphics regardless of FRP it's required [..] to avoid unnecessary drawing

Depends on the graphics application. If your graphics involve video, time-series sensor inputs, or otherwise animated elements, you'll be redrawing most frames regardless. For a typical graphical UI, which sits still unless manipulated by the user, avoiding redraw is more useful.

I think there's a good case to be made for decoupling change propagation from the reactive programming model.

All those applications

All those applications exhibit frame coherence, and so there is still a lot of optimization that can be, and is, applied.

FRP can be implemented on a DOM to reap these optimizations, or it can stutter along without.

I guess you're right, but

Such apps would seem to have limited viability on the most prevalent platforms, i.e. mobile. E.g. if you make the mistake of allowing the iPhone Maps app to draw to your screen, your phone will likely be dead before you reach your destination.

time series

At least for our tool for visually exploring ~1MM metrics, there is a *lot* of recomputation :)

Same for computational video, though that's a lot harder to detect. "This region of the screen had no cats, and still has no cats.."

Amen about the decoupling... until you get into significant optimization. For interesting optimizations (asymptotic, magnitudinal), we want to make strong assumptions. E.g., the current wave of web frameworks assume pure views in the sense of Sean's recent work to get animateframe goodness. My work, e.g., Superconductor, makes much stronger ones.

Natura non facit saltus

The proper term for this property is frame coherence, which is a form of temporal coherence. Here is a nice overview of all the coherences that we can and should exploit!

wonderful find

Thanks!

practice

Furthermore, in practice I find limited benefits to doing it,

Neel, out of curiosity, what applications are you considering practice? I've followed the examples in your papers and am wondering what is guiding you beyond those.

Edit: For context, my coding has been clientside GUI, clientside application, distributed applications, and vicariously, OS-level.

GUIs, mainly...

I'm fascinated by the fact that the go-to design for GUIs is still model-view-controller, even though the design is older than I am. You could sort of understand that for something at the bottom of the software stack, since changing the base of the stack breaks everything above it, but GUIs are the very top and are therefore the APIs most exposed to the cruel winds of software fashion. So the survival of MVC means that it must be getting something really right, and I'd like to know what. I especially want to know why certain critical pieces of GUI architecture violate all conventional wisdom and get away with it (e.g., the DOM or scene graph is a concurrently-modified globally-visible shared mutable data structure, which sounds like it ought to be hell on Earth, honestly).

For reasons of research focus, I'm less interested in things where no-fooling true concurrency is needed (like distributed systems), because I'd like to see how far you can push a deterministic programming model. However, I am interested in "logically" concurrent programs (like event loops) and concurrent systems that are constrained to be deterministic (like pi-calculus with linear or session types).

Model view controller is

Model view controller is useful sometimes, but I wouldn't call it the "go to design" for GUIs. Most UIs are fairly standard form-based affairs, especially in retained systems, where the overhead of MVC would be unwelcome. In databinding systems (which are close in spirit to continuous FRP), you don't even bother separating out your model state; just bind directly to that slider value!

It's all FRP

Both callback-based and synchronous tick-based implementations of reactive programming are FRP; they amount to proof terms for slightly different fragments of LTL. It also doesn't matter whether time is a real number (continuous) or a natural number (discrete), since Kamp's theorem says that those are exactly the two correct[*] models of time for temporal logic.

[*] It actually says LTL is expressively complete with respect to the first-order monadic logic of order if the model of time is either the naturals or the reals. But it's punchier to say that the naturals and the reals are the two correct models of time. :)

exactly two?

I understand that Kamp's theorem was demonstrated for two canonical models of time, but I'd be surprised to find proof that it is exclusive to just those two models. Is there an argument somewhere that Kamp's theorem is suddenly invalid for, say, a rational model of continuous time?

It seems there is some work extending Kamp's theorem to layered temporal models (Montanari, 2002). I suspect it can be extended in other directions.

Expressive completeness fails for rationals

The rationals are dense, but they aren't Dedekind complete. (E.g., the limit of the sequence of decimal approximations of sqrt(2) is the square root of 2, which isn't a rational.)

Another way of stating Kamp's theorem is that for any first-order monadic formula with one free variable, there is a temporal logic formula (using since and until) which is equivalent over Dedekind complete chains. So Kamp's theorem fails for rational time.

Kamp's theorem

Thanks. This helped clarify it for me.