## A functional-programming view of time

See fig. 3.38 in SICP, where the authors show a model of a joint bank account shared by two people, Paul and Peter. The bank account can be modeled as a process that operates on a stream of transactions and generates a stream of responses. But the difficulty is merging Paul's and Peter's individual transaction streams in a purely functional style; the authors argue that one can't get away from explicit synchronization. They end the chapter by pointing out the essential difference between the two views of modeling the world: as a set of interacting stateful objects vs. a single timeless, stateless entity. They say "A grand unification has yet to emerge". IIRC this part has not changed between the first and second editions of SICP. [This is probably confusing, so you might wish to read this last section of chapter 3 in the online copy of SICP, or view the excellent video lecture 6B from MIT OpenCourseWare at about 52 minutes. The Q&A at the end is also relevant.]

I am wondering if things have changed since then. I doubt FPLs can allow modeling such a merge, but how far have we come? Can we express synchronization in some way in FPLs? Is there a grand unification of something like CSP and FPL? Are the SICP authors looking at this the wrong way?

### Functional reactive

Functional reactive programming?

### Functional Reactive Programming

FRP achieves a great deal of concurrency, synchronization, and coordination of behaviors within a pure functional system. This is achieved, essentially, by fusing explicit temporal logic with the functions: instead of (d→r) you essentially use ((t→d)→(t→r)). Time is explicit and monotonic, and thus allows other behavior to be expressed in non-monotonic, concurrent, continuous, incremental, reactive and interactive manners. If you want real-time and deterministic programming, you can precisely control logical latencies in order to mask the physical latencies and compute costs. FRP is very elegant, and has been developed considerably since about 1998. For more on the subject, I'll direct you to an explanation by Conal Elliott on Stack Overflow, which has its own links. Google will also return plenty.
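To make the ((t→d)→(t→r)) idea concrete, here is a minimal Python sketch of behaviors as functions of time; the `lift` and `delay` helpers are invented for illustration and are not the API of any real FRP library:

```python
# Sketch of the FRP signature shift: instead of plain functions (d -> r),
# we lift them to behavior transformers ((t -> d) -> (t -> r)).
# This is an illustrative toy, not the API of any real FRP library.

def lift(f):
    """Lift an ordinary function d -> r to a behavior transformer."""
    def transform(behavior):             # behavior :: t -> d
        return lambda t: f(behavior(t))  # result :: t -> r
    return transform

def delay(dt):
    """Shift a behavior later in time: the output at t is the input at t - dt."""
    def transform(behavior):
        return lambda t: behavior(t - dt)
    return transform

# A time-varying input: position grows linearly with time.
position = lambda t: 2.0 * t

doubled = lift(lambda x: x * 2)(position)
lagged = delay(5.0)(position)

print(doubled(3.0))  # 12.0
print(lagged(8.0))   # 6.0  (the position at t = 3.0)
```

Because time is an explicit parameter, transformations like `delay` can shift, stretch, or inspect it, which is exactly what the plain (d→r) view cannot express.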

Now, I pulled out my copy of SICP to take a look at the problem you describe. SICP describes in Chapter 3.5 the use of atemporal streams, which aren't quite what is necessary for concurrency. With today's hindsight from FRP, I find it a little cute that chapter 3.4 discusses how "Time is of the Essence" but never once introduces it explicitly! In particular time is kept 'implicit' to stateful operations. (State has a before & after, therefore state implicitly has time.) Chapter 3.5 ends with your example, but still keeps time 'implicit', even using the words "hidden-time".

FRP makes time explicit, unhidden. It provides a "grand unification", at least for modeling the world, as mentioned at the end of SICP.

FRP is a lot weaker for actually interacting with the world in an open, modular way. The trouble is that open (pluggable) modularity needs another property associated with state: identity. FRP handles time excellently, but doesn't even bother with identity.

Explicit temporal properties have enhanced many other paradigms: logic programming, concurrent constraint programming, etc. UTCC basically adds identity and time to concurrent constraint programming. My RDP adds identity and open modularity to FRP. (And I'm still figuring out whether I can also add constraint-programming to FRP, e.g. via generative grammars.)

[Seems Sean provided a pithy answer while I wrote this. :-]

### Surely, researching FRP

Surely researching FRP should be the next stop for the poster; I'm not sure it was useful to add anything more than that (other than to talk about my opinions on FRP again :) ). It's amazing how great a read SICP is, considering its age (~1984).

I'm not sure that it's accurate to say that FRP makes time explicit...you don't exactly get access to a reified clock as you might in a temporal or synchronous programming language. Rather, the main value to me is the way that FRP abstracts over time; e.g., a property 'a' has the same meaning over time regardless of what value it is assigned at a particular point in time, so we can talk about 'a + b' as an expression whose evaluation can also change over time. Of course, FRP is more than that (see Conal's explanation), and "abstracting over time" can be shared by other paradigms...like constraint systems (again, as mentioned by David).

FRP is also less than satisfying in complete programs: as an abstraction it works very well in some places but breaks down completely in other places. David mentioned identity, I would mention computations that evolve state over time in a non-functional way, like physics. This is why we keep looking for the best way to deal with state and time.

### Explicit delay, Integration

You have explicit time if you can describe a behavior or event in terms of specific times (t=20, t=40) or in terms of relative times (t+20). Many FRP designs support this. If you have integration, you get time as well, because the integral of 1 over time is time.
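A quick numeric illustration of that last point, in Python (a toy left-Riemann integrator, not Reactive's `integral`): integrating the constant behavior 1 from 0 to t recovers t itself.

```python
# Toy numeric check that integrating the constant behavior 1 recovers time.
# 'integral' here is a simple left-Riemann approximation for illustration only.

def integral(behavior, t, dt=0.001):
    """Approximate the integral of 'behavior' from 0 to t."""
    total = 0.0
    steps = int(t / dt)
    for i in range(steps):
        total += behavior(i * dt) * dt
    return total

one = lambda t: 1.0
# The integral of 1 from 0 to 20 is (approximately) 20: time reconstructed.
print(round(integral(one, 20.0), 2))  # ~20.0
```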

You also have explicit time if you're free to peek at the past or future of a value. Not all FRP implementations support that, since they are purely functional and one will usually need to wait on a future to see what happens. My temporal RDP does allow peeking at the 'anticipated' future, which offers some interesting properties for intelligent reaction.

Explicit time doesn't need to mean an explicit clock. :-)

I would mention computations that evolve state over time in a non-functional way, like physics.

FRP as supported in Haskell is good at evolving state over time. Numeric integration evolves state over continuous time, and "Switchers" evolve state over discrete events.

My bet is that you could do an excellent physics simulation in Haskell's FRP, and even include user input.

### I'm not aware of any

I'm not aware of any specific FRP implementations that support explicit time like that. If they do (Fran or Yampa), then I've never seen that feature used. FrTime and other impure FRP systems will support similar behavior using a clock signal...but this is just wrapping a timer in a signal abstraction, and I wouldn't consider it explicit time.

My feeling is that Haskell FRP would work for numeric integration, but most physical simulations and all games don't fit into that category. Rather, you have to approximate and integrate at each step with respect to the various forces that apply at that step. Things like collision make that much messier.

### Reactive allows it easily

Reactive allows it easily enough. You can even turn a list of [(20,'a'),(40,'b'),...] into an event or behavior.
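That list-to-behavior conversion can be sketched in a few lines of Python; `stepper` here is a hypothetical helper in the spirit of what Reactive offers, not its actual API:

```python
# Sketch of turning a timestamped event list such as [(20, 'a'), (40, 'b')]
# into a step-function behavior: at time t, the value is that of the latest
# event at or before t. An invented helper, not Reactive's actual 'stepper'.

def stepper(initial, events):
    """events: list of (time, value) pairs, sorted by time."""
    def behavior(t):
        value = initial
        for time, v in events:
            if time <= t:
                value = v
            else:
                break
        return value
    return behavior

b = stepper('_', [(20, 'a'), (40, 'b')])
print(b(10), b(25), b(50))  # _ a b
```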

Yampa also supports explicit time, hidden inside SF. In the original Frob/AFRP papers (predecessor to Yampa), SF a b was pretty much an arrow-compatible shorthand for (t->a)->(t->b). Modern Yampa adds a lot of specializations to this.

I can't recall the details of Fran, but I suspect it also allowed access to time - if not directly, then at least indirectly as an integral on a constant.

### It took me 5 years to

It took me 5 years to understand the point of FRP (from when I first read the Fran paper), so I definitely admit I could be missing something. What would a simple integrator such as Euler or Verlet look like in Reactive? How would physical constraints (say springs, gravity, pivots) be composed?

### Physical Constraints

I expect you have more experience than do I with regards to physical constraints, given your development of Bling.

But you could presumably describe forces on a body over time, compose these (additively), divide by mass, then integrate twice to get position as a function of time. Use fixpoints to add friction as a function of velocity and position. For pivots... do something similar with torques.

Reactive provides you a piecewise Euler integrator in its common library (FRP.Reactive.Behavior.integral). You are, of course, free to provide your own; the integrator doesn't use any details inaccessible to you.
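As a rough illustration of the recipe above (sum forces, divide by mass, integrate twice), here is a toy Python sketch using explicit Euler steps; all names and constants are invented for illustration and stand in for what a behavior-level program would express declaratively:

```python
# Sketch of the recipe: sum time-varying forces, divide by mass,
# integrate twice (Euler steps) to get velocity and then position.
# Names and constants are illustrative, not from any physics library.

def simulate(forces, mass, t_end, dt=0.01):
    """forces: list of functions t -> force. Returns position at t_end."""
    velocity, position, t = 0.0, 0.0, 0.0
    while t < t_end:
        total_force = sum(f(t) for f in forces)  # compose forces additively
        acceleration = total_force / mass
        velocity += acceleration * dt            # first integration
        position += velocity * dt                # second integration
        t += dt
    return position

gravity = lambda t: -9.8 * 2.0  # constant gravitational force on a 2 kg body
print(simulate([gravity], mass=2.0, t_end=1.0))  # roughly -4.9 after one second
```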

### The real challenge is in

The real challenge is in describing or composing your forces to work with your integrator, and in how you represent collision reactions. With Verlet you just do displacement, but most mainstream physics engines these days are based on impulse accumulation, where each step you simply build and dispatch lists of applied forces.

I'm still not sure if the integral in Reactive means the same thing that it does in a physics engine.

### Integral in Reactive means

Integral in Reactive means the same thing that it does in Calculus.

If there's a difference, it is because Reactive uses continuous time. There are no discrete 'steps'. You aren't building and dispatching a list of forces; rather, you'd use a fixpoint or Arrow loop and continuously apply forces - at least logically; the actual computation is nice and lazy, computed for the moment of observation. (Continuous FRP only works because of laziness.)

Collisions would probably be represented as events. An 'event' in Reactive is actually a (potentially infinite) list of discrete instants of interest (which may recursively be events) - such as an infinite list of collisions along with impulse, vector, elasticity. Most of the Reactive library is about the interplay between behaviors and events.

### Ah, I was getting confused

Ah, I was getting confused with Euler's method, which is discrete. All integrators in physics engines are discrete approximations. An impulse is a discrete approximation of a collision reaction, which really isn't instantaneous (a high amount of force applied for a short amount of time). I like the idea that at least on the surface the user applies forces continuously (or at least declaratively), but on the inside there should be some efficient discrete approximations going on.

This is very much above my head. I'm neither the physics geek nor the Haskell geek to work out anything effective here. But I would definitely love to see what someone else could come up with.

### on the inside, their should

on the inside, there should be some efficient discrete approximations going on

I don't know. I'd certainly appreciate it if the language supported true integrals based upon abstract analysis of the functions involved, and only resorted to discrete approximations when it doesn't know how to produce the integral function. For performance, precision, and accuracy, producing the integral function and then computing it is likely superior to discrete approximation for the vast majority of functions. Please, sprinkle a little Mathematica into my Haskell...

For now, however, Haskell uses discrete approximations just like everything else. (This doesn't change the meaning of integral. ;-) Reactive does provide a common-case optimization: a behavior may vary between discrete, constant values (e.g. holding the value 3 between times 1.0 and 10.0) and continuous time-varying functions. While this optimization is provided primarily for reactive efficiency (only recompute what might have changed), it also optimizes the integral.

### Computing the integral

Computing the integral symbolically is not necessarily cheaper in animations. For example, suppose you have a quantity x(t) = integrate(f(...)) that is the position coordinate of a ball on the screen. Now you want to know x(t=0), x(t=0.1), x(t=0.2), .... At time t=0.2 you don't have to numerically approximate x from scratch, because you already computed x(t=0.1); the numerical algorithm uses this value to compute the approximation at t=0.2. This could well be cheaper than evaluating the symbolic integral (whose formula is presumably computed at compile time, or otherwise beforehand). Not to mention that most functions don't have a symbolic integral, especially if the integrand involves user input (which happens in most programs).
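The incremental-sampling argument can be sketched in Python; `make_sampler` is an invented helper that carries the accumulated integral between successive samples, so each new frame only pays for the new slice of time:

```python
# Sketch of incremental sampling: successive samples of x(t) reuse the
# previous numeric result instead of re-integrating from scratch.
# 'make_sampler' is a hypothetical helper for illustration.

def make_sampler(integrand, dt=0.001):
    """Return a function sampling the integral of 'integrand' at
    increasing times, carrying the accumulated value between calls."""
    state = {'t': 0.0, 'total': 0.0}
    def sample(t):
        assert t >= state['t'], "samples must be taken at increasing times"
        while state['t'] < t:
            state['total'] += integrand(state['t']) * dt  # only the new slice
            state['t'] += dt
        return state['total']
    return sample

x = make_sampler(lambda t: 2.0 * t)  # the integral of 2t is t^2
print(round(x(0.1), 3), round(x(0.2), 3))  # roughly 0.01, then 0.04
```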

### Relative Efforts for Relative Benefits

You make valid points.

I'm assuming that the symbolic integral is computed at runtime (lazily), and would only be used for many common-case expressions that are relatively easy to integrate (especially, polynomials and trigonometry). Further, I'm assuming that one might do so 'partially', i.e. to the extent that an expression is a sum of other expressions one has a lot of flexibility to only perform symbolic integrals on some elements. This is, effectively, just an extension to what Reactive already does with its 'constant' behaviors.

I've considered a few approaches for this in Haskell, i.e. via use of GADTs or the DeepApplicative used by Conal for tangible values. Mind you, I've never got past ruminating on the subject, and couldn't speak much about performance beyond the theoretical. But it might also be worth the effort for purposes of edge-detection and min-max analysis.

That said, FRP is already incremental in the temporal dimension. I'd like to see better support for making it also incremental in the spatial dimension - e.g. when working with large, time-varying maps or data sets.

### Thank you but....

Thank you both for your detailed responses.

FRP is quite elegant but, as far as I understand it, I don't think it achieves a grand unification of the kind the SICP authors talked about. As per Conal Elliott's response on Stack Overflow, FRP's concurrency is fine-grained, determinate, and continuous. Maybe I am on a different frequency, but right there I have a problem. In real life, events occur in an indeterminate or random order. For example, an OS for a real machine has to deal with many events which may occur in any order or simultaneously (while core-1 is processing an input packet, core-2 may have to service a disk interrupt, and so on). This is straightforward to model with Dijkstra's guarded commands (and in CSP-based languages) and, while not easy, one can still reason about it. How would I do this in FRP?

And by "model" I do not mean just simulation. In order to interact with reality one has to have a model of it (you may not know when the next packet will arrive but you know the protocol it will follow etc). And often we connect these simulators to real world (Qemu for example).

To expose my bias, I don't see how you can do without a proper treatment of state in any grand unification.

### To expose my bias, I don't

To expose my bias, I don't see how you can do without a proper treatment of state in any grand unification.

I don't think anyone here is claiming that you can. FRP is a step toward declaratively dealing with and manipulating time-varying values, which could be backed by state. The trick to dealing with state is to squirrel it off somewhere, encapsulate it, and provide an elegant abstraction for manipulating and reasoning about it.

### You've pointed it out

You've pointed it out yourself. In trying to tackle "real life events occur[ring] in an indeterminate or random order", we have to introduce notions such as non-determinism, undecidability, oracles, etc.

The problem is that the thing one wants to model and implement at the end of the day is (presumably) a software system with computational capabilities in finite time, be it deterministic or not.

But one also has to pay attention to which "universe" the (non-)determinism spoken of belongs to, and what its true nature is, in the ontology eventually made up for consideration purposes, pre-modeling.

Are those clearly defined as being within the bounds of the software/computational models discussed, or are they of the kind that actually (mischievously) escapes them, belonging instead to some so-called "real-world", non-formally-modelled (anywhere, and/or not enough) human factors?

I suspect this confusion is quite easy for us "software/language folks" to fall into when we borrow vocabularies from other domains and ontologies. There are many examples where, while we're careful enough to define the words denoting the concepts we're interested in, we may also forget the true nature of their originating context (hence the need for long, thorough research over decades), and I suspect some just haven't been studied long enough yet. We've barely started to understand the relations between computation and formalisms; tell me about understanding how Mr./Mrs./Ms. Smith decides what to do next, with or without some form of "computer" within direct reach...

My assumption is that, anyway, as appealing and full of potential as FRP can be, it'll have, just like any other paradigm, to acknowledge one of the fundamental "implementation limitation factors" of scientific investigation on some of its "borderline" objects, such as the observer effect (whose ramifications into different interpretations of our world are still subject to recurrent debate in physics; e.g., the Copenhagen interpretation, and the like).

[Edit: note I didn't mean to say above that, e.g., non-determinism is out of reach for FRP; the point I was trying to make is just that FRP, like any other paradigm, has a scope, and the clear recognition and definition of that scope, as well as the ease with which implementations resolve issues for its chosen purposes and objectives, will be, as usual, among the determining factors for its acceptance/success.]

### This being said... I must

This being said... I must confess I find FRP really "cool", and the only new (right?) programming paradigm that makes sense to me as probably the "next big thing". My only regret is that I haven't found enough time yet to understand better how its set of ideas/principles actually works through implementations, and even less about the full extent of the potential we can hope for it in the long run.

I suspect, though, the core principles are of that kind of "pure"/optimum set able to embrace the union of several other paradigms at the same time (and maybe others yet to be discovered) very elegantly, and for the long term.

Definitely the kind of thing I'm willing to watch in the domain of PLs (not my primary interest today; being more into modeling tools... before runtime... and until FRP takes over the world and we don't even need dehydrated-intermediary-modeling anymore ;)

### In real life events occur in

In real life events occur in an indeterminate or random order.

That can be handled in FRP. UIs have been written in FRP, for example. One updates the behavior model for the keyboard event-stream based upon the arrival of new keyboard events. (Reactive cheats a little with unsafeIO to implement 'future' values.)

In order to interact with reality one has to have a model of it (you may not know when the next packet will arrive but you know the protocol it will follow etc).

That isn't necessary, neither having a model of reality nor knowing the protocol upon which a packet arrives. And, more relevantly, any model of the outside world does not need to be part of your program.

To expose my bias, I don't see how you can do without a proper treatment of state in any grand unification.

There are many ways to have 'state' be implicit. SICP provided an example for stream processing: state being embedded in a fold over an input stream. FRP supports implicit state in its behavior definitions, and does a better job because one can merge event streams based upon 'when' each event takes place.
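The SICP-style fold and the merge-by-'when' can both be sketched in a few lines of Python; the transaction data (Peter's and Paul's deposits and withdrawals) is invented for illustration:

```python
# Sketch: two timestamped transaction streams (Peter's and Paul's) merged
# deterministically by 'when' each event occurs, with the account state
# embedded in a fold over the merged stream. Amounts are invented.

import heapq

def merge_by_time(*streams):
    """Merge sorted (time, amount) streams into one, ordered by timestamp."""
    return list(heapq.merge(*streams))

def balance(initial, transactions):
    """State as a fold: the balance is just an accumulation over events."""
    total = initial
    for _time, amount in transactions:
        total += amount
    return total

peter = [(1, +100), (5, -30)]
paul = [(3, -20), (4, +50)]

merged = merge_by_time(peter, paul)
print(merged)              # [(1, 100), (3, -20), (4, 50), (5, -30)]
print(balance(0, merged))  # 100
```

Note that the merge is only deterministic because every event already carries a timestamp; SICP's difficulty is precisely that, without 'when', a purely functional merge has nothing to order by.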

### Functional programming and non-determinism

As per Conal Elliott's response on Stack Overflow, FRP's concurrency is fine-grained, determinate, and continuous. Maybe I am on a different frequency, but right there I have a problem. In real life, events occur in an indeterminate or random order. For example, an OS for a real machine has to deal with many events which may occur in any order or simultaneously (while core-1 is processing an input packet, core-2 may have to service a disk interrupt, and so on).

I think the work of Peter Van Roy is very on-topic here. In his must-read book CTM, he describes the programming language Oz as a stack/tree of layers, each embedding a particular paradigm. He goes to great lengths to show how one may progressively slide from pure, deterministic functional programming to pure concurrent programming with no observable nondeterminism, and then in a final step add observable nondeterminism.

A wise piece of advice given in this book is that observable nondeterminism can usually be factored out of the main system. In your example, one may consider that there are two pure sequential event streams (one for each core) that get mixed in an observably nondeterministic way into a single event stream that is the input of the OS (say, a stream of interleaved system calls). Only the interleaving of the two streams is nondeterministic: the main OS loop can still be described without observable nondeterminism. You have a system where the nondeterminism is carefully confined.
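That factoring can be sketched in Python: the interleaving of the two per-core streams is the only nondeterministic part, while the 'OS loop' consuming the merged stream is a deterministic fold. All names here are invented for illustration, and the random shuffle stands in for arrival-order nondeterminism:

```python
# Sketch of Van Roy's factoring: nondeterminism is confined to the merge;
# the consumer of the merged stream is deterministic.

import random

def nondeterministic_merge(stream_a, stream_b, rng):
    """Interleave two streams in an arbitrary order, preserving each
    stream's internal order (a fair but unordered merge)."""
    a, b, merged = list(stream_a), list(stream_b), []
    while a or b:
        source = a if not b else (b if not a else rng.choice([a, b]))
        merged.append(source.pop(0))
    return merged

def os_loop(syscalls):
    """Deterministic consumer: counts events per core, whatever the order."""
    counts = {}
    for core, _call in syscalls:
        counts[core] = counts.get(core, 0) + 1
    return counts

core1 = [(1, 'net_rx'), (1, 'net_rx')]
core2 = [(2, 'disk_irq')]
merged = nondeterministic_merge(core1, core2, random.Random())
print(os_loop(merged))  # two events from core 1, one from core 2, every run
```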

In terms of programming languages, this means that it is interesting to support programming constructs with no observable nondeterminism, while still allowing one to gracefully degrade into non-deterministic behaviour in specific, encapsulated parts of the program.

I don't know if the CTM is accessible online, and it's certainly a long book to read. The shorter paper Programming Paradigms for Dummies: What Every Programmer Should Know is discussed on LtU, and could be considered a partial, condensed summary of the book. Section 6.1 (Avoiding nondeterminism in a concurrent language) is particularly on-topic, with a (short) discussion/overview/comparison of Declarative Concurrency, Constraint Programming, Functional Reactive Programming, Discrete Synchronous Programming and Message-Passing Concurrency.

### Thank you

for the references to CTM and the shorter paper. Much to digest and think about before I can properly respond. But one comment: I am not sure that we can avoid problems like deadlock or race conditions in the way suggested by Peter Van Roy. In response to my example you commented that "the main OS loop can be described without observable nondeterminism", but extending this further, two or three such OSes can interact in such a way that they deadlock (or livelock, where no progress gets made). So while factoring out "observable nondeterminism", or "squirreling off state somewhere" (as Sean McDirmid put it), is worthwhile, IMHO, as long as the language doesn't handle state, it can't detect such problems.

I find it interesting that in this comment on the Smalltalk design principles thread, Shap complains about languages not handling temporal requirements, which to me is part of the same puzzle.

### It sounds like your problem

It sounds like your problem is with how OSs are designed, not PLs. OSs are introducing additional semantics and throwing out any reactivity guarantees being provided by the reactive code in your domain. Most applications are not sensitive to this, but if they are, it seems like you can view the OS as additional application code (e.g., supporting library) that does not respect reasonable language guarantees, at which point it doesn't seem far off from a buggy C component that randomly modifies memory.

### your problem is with how OSs

your problem is with how OSs are designed, not PLs. OSs are introducing additional semantics and throwing out any reactivity guarantees

Well, the other side of that coin is: if your PL isn't adequate for capturing semantics in an OS, perhaps it is the PL that should be judged at fault.

A comment from Thomas Lord asks: does your language offer a good "explanation" for asynchronous I/O? (Are the abstractions good?)

That seems a critical point. The physical properties of your hardware need to have an adequate explanation in your language - something that will fit without too much complexity or overheads; otherwise, you'll end up hacking your language to make things fit, and you'll lose many nice properties of your PL.

A PL that is based around processing asynchronous signals and streams (such as FRP) might well have advantages over most, since most CPU interaction with hardware is asynchronous these days. FRP wouldn't have a problem interacting with DMA (or at least the vast majority of its applications). But I imagine there would be issues with MMIO - in particular, when old state is overwritten without approval from a CPU. FRP - at least as espoused by Conal Elliott and Paul Hudak - lacks an adequate 'explanation' for that sort of data loss. If we wanted to fit MMIO into the semantics, we might need to explicitly maintain a series of 'snapshots' of input memory - which would be overhead and undesirable complexity.