## Oracles

Doctoral thesis for the degree of Doctor of Philosophy

Fudgets --

Purely Functional Processes with applications to Graphical User Interfaces

Magnus Carlsson
Thomas Hallgren

Carlsson and Hallgren's 1998 thesis on functional window gadgets is of course not new. However, I want to call attention back to it because it discusses the idea of an Oracle. I submit that the notion of a system that supplies oracles (as many as needed) from the runtime environment provides the best way to account for nondeterminism/indeterminism in a notation or a framework of reasoning that embraces Referential Transparency. I submit that Oracles should be more on the minds of those who write about designs for pure functional programming languages and systems than they seem to be today.

(Unfortunately, the term "Oracle" in this meaning is not in practice Googlable, because anything you'd turn up would be swamped by the returns about the Oracle brand database management system.)

When I say indeterminism, I mean the kind where information comes from various sources over rough roads, and the right thing to do is to act differently, affecting output, depending on which information happens to arrive first. The right thing to do for the sake of the humans the computer is interacting with may also depend on the time required for various calculations. So there needs to be a way to model the fact that a routine needs to ask for a calculation that actually takes up time. A referentially transparent language can talk about what has to be calculated, but it has trouble dealing with the fact that calculations can cost something, and that one calculation may finish before or after another, or before or after some system input (e.g., a command from a human to go get some other information, to query the status of the calculation, to change the resources devoted to it or the priority of their use, to suspend the calculation, or to cancel it).

Given an oracle o from the environment, and two data sources a and b, we can have a function decide, where decide(o, a, b) returns something like (Left, a) if a is available before b, or (Right, b) if b is available first (assume Left and Right are just symbols, distinct from one another). This operation consumes the oracle o, and no attempt may be made to use it again. The Referentially Transparent explanation for why this function application is able to determine whether the computation of a or b finished first is that o, with its magic knowledge from the gods, knew all along which calculation was destined to finish first; hence the metaphor of an oracle as the term for this concept. Of course, an oracle can't be printed. It's just used once, at the point where the reality of the outcome is going to be determined, and of course the prediction never fails, since it can't in practice be tested as a prediction.
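To make the idea concrete, here is a small operational sketch in Python (not the pure notation itself, just the runtime behavior it accounts for): the two data sources are zero-argument computations run in threads, and the oracle is a one-shot token. All names here are mine, for illustration only.

```python
import threading
import queue

def fresh_oracle():
    """A one-shot token, as supplied by the runtime environment."""
    return {"used": False}

def decide(oracle, a, b):
    """Race the zero-argument computations a and b.
    Returns ("Left", a()) if a is available first, ("Right", b())
    otherwise.  The oracle is consumed; a second use is an error."""
    if oracle["used"]:
        raise RuntimeError("oracle already consumed")
    oracle["used"] = True
    results = queue.Queue()
    for tag, comp in (("Left", a), ("Right", b)):
        threading.Thread(target=lambda t=tag, c=comp: results.put((t, c())),
                         daemon=True).start()
    return results.get()  # whichever computation finished first
```

In the referentially transparent reading, of course, the choice was never "made" at run time at all: the oracle knew it all along, and this machinery merely reveals it.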

An alternative way to model indeterminism in the framework of referential transparency is via the sets of Hughes and O'Donnell. I thank Fergus Henderson for his easy-to-understand explanation of the Hughes/O'Donnell approach. The outputs of indeterminate function applications are understood to be sets of all possible results. The implementation only outputs one of those results, however, at the exit from the system. I used to think I could articulate an argument for why this approach would not be as satisfactory as the oracles approach. In the Hughes/O'Donnell sets account, intermediate results are not understood to contain any decision about which result was computed (although of course in the implementation, the decision would be made and only its result stored). Since the result in concept contains all the possible results, I'm not sure there is a semantically sound way to think about storing it persistently so as to avoid having to repeat long calculations in case of a crash of the volatile memory. Maybe by and by I'll think of an argument again on the comparison of these approaches. Or maybe someone else will.

Does anyone propose a third answer?

It seems to me that in the design of a programming system demonstrating orthogonal persistence (transparent persistence), the consumption of an oracle may be a significant event. Or maybe not; maybe in such a design, all intermediate results are as significant as the time it took to calculate them. The really significant event is system output, because that embodies the decisions taken at all the indeterminate choice points that contributed to the output. Subsequent outputs have to honor those choices.

## Comment viewing options

### In the olden days

LML had a fully fledged implementation of oracles running on a multi-processor (a Sequent Symmetry) back in ca. 1989. The description in the Fudgets thesis refers to this implementation. It was fairly pleasant to work with and quite practical.

Since the constraint that an oracle can only be used once could not be statically guaranteed, we instead allowed oracles to be used multiple times. Normally you would not do this, but if you did, the right thing happened, i.e., an oracle makes the same choice every time you use it.
A benefit of this is that you could have a non-deterministic program, but if you happened to save the tree of oracles you could actually run it again with the same choices made.
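The multi-use relaxation described above can be sketched the same way (again in Python, with thread races standing in for the real machinery; the class and function names are mine): the oracle memoizes its first answer, and an answer recorded from an earlier run can be re-seeded to replay that run with the same choices made.

```python
import threading
import queue

class Oracle:
    """Multi-use oracle: it makes one choice and then repeats that
    choice on every later consultation.  Passing forced="Left" or
    forced="Right" replays a choice recorded from an earlier run."""
    def __init__(self, forced=None):
        self._choice = forced
        self._lock = threading.Lock()
    def commit(self, first_finisher):
        with self._lock:
            if self._choice is None:
                self._choice = first_finisher
            return self._choice

def decide(oracle, a, b):
    """Race a and b, but defer to the oracle's (possibly pre-recorded)
    choice, waiting for the chosen side even if the other finished first."""
    results_q = queue.Queue()
    for tag, comp in (("Left", a), ("Right", b)):
        threading.Thread(target=lambda t=tag, c=comp: results_q.put((t, c())),
                         daemon=True).start()
    results = dict([results_q.get()])
    chosen = oracle.commit(next(iter(results)))
    while chosen not in results:
        results.update([results_q.get()])
    return (chosen, results[chosen])
```

Saving each oracle's committed choice and re-seeding it on the next run reproduces the same schedule, which is the replay property described above.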

These days everything is done with monads so other solutions are sometimes forgotten.

### Re: In the olden days

Thanks, lennart, for the enlightening comment. I had never thought of the idea of allowing multiple uses of the oracle, where the implementation would just make sure that all uses reflected the same choice.

". . . you could actually run it again with the same choices made." -- Sounds like a boon for regression testing.

Your recollection of oracles as pleasant to use and practical suggests that I may be barking up the correct tree.

Also, it's good that we now have some reference to (or closer to?) where the oracle idea first came up. I remembered that Carlsson and Hallgren didn't claim to have originated it. But their paper certainly informed me about the idea.

As for monads, I don't think they're a solution to this problem. They're just a layer of notation over the pure functional model. Unless maybe the "state" carried along inside the monad is a stream of oracles, or maybe a Hughes/O'Donnell set? How do the Haskell mavens write non-deterministic programs? Do they at all?

(By the way, this thread should probably mention "committed-choice" to attract searches on that term. Henderson, my Hughes/O'Donnell explainer, says committed-choice nondeterminism is what my question really was about.)

### Oracle inventor

I don't know who first came up with the idea, but both Warren Burton and I had the idea. Warren published about it first, so I never wrote anything much about it.

In Haskell of today you would do it by forking a new thread using forkIO and use, e.g., an MVar. This pushes the non-determinism into the kitchen sink, the IO monad.
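The forkIO/MVar recipe can be sketched operationally. Below is a rough Python analogue (threads standing in for forkIO, a Queue standing in for the MVar); it illustrates the idea rather than the Haskell API itself.

```python
import threading
import queue

def race(*actions):
    """Fork each zero-argument action in its own thread (the forkIO
    analogue); the shared queue plays the MVar's role.  The first
    result put into it wins.  The non-determinism lives entirely in
    this runtime machinery -- the "kitchen sink" -- not in any pure
    code that consumes the result."""
    box = queue.Queue()
    for act in actions:
        threading.Thread(target=lambda f=act: box.put(f()), daemon=True).start()
    return box.get()  # first value put wins
```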

### Alan Turing

The concept of oracle was created by Alan Turing.
http://plato.stanford.edu/entries/turing/

### Turing Oracles Different

That's an interesting reference. I'm not quite sure of all the implications of Turing's "oracle"; however, I'm quite sure it isn't the same as the oracle under discussion here ("lennart/Burton oracles"?). In the article about Turing, it says he said that no machine could do the oracle thing. But our oracles here just need to "know" which computation finishes first or which data arrive first or whether a computation finishes before an input event, that sort of thing, which the real machine does decide or determine or discern. After all, many of these programs have been written to handle events based on the order in which they happen, and they run on real computers and execute correctly. It's easy to see how to write them in assembly language. The machine knows. The point here is to be able to express those programs with referential transparency.

### I forgot about the original Turing "not a machine" requirement

Formal oracle machines treat the oracle as a black box that could be a machine.

### LIM

Apropos of nothing, your description of oracles reminded me of a very cute language: LIM, the Language of the Included Miracle. (That is the only currently available reference I can find; lim has been around for quite a while.)

So what are the components of the argument to and result from applying the main routine in Haskell that concern indeterminism (the temporal kind)? For that matter, what are those that concern I/O? Describe without reference to the monad. The monad is, after all, only notation.

In other words, exactly what is being buried in the kitchen sink? Dig it up for me, please. I want to sniff it.

### RT Explanation of ToonTalk's Nests, Birds, and Tooly

Up till a few days ago, I had thought ToonTalk's bird/nest communication system was not referentially transparent when you applied Maggie to a bird.

Maggie is ToonTalk's magic wand that copies objects. For example, applying Maggie to a nest produces two nests, copies of the original.

For those not familiar, here's how the fundamental communication mechanism in the programming language ToonTalk works. ToonTalk offers a "toolbox", named Tooly, containing an unlimited supply of eggs. The programmer or a robot can extract an egg from Tooly. A bird hatches out of the egg and presents her nest. The bird and nest can be passed around the program and can become separated from each other in terms of who has control of them. However, they remain linked in that whatever is given to the bird, she carries to the nest.

The communication system of an associated bird/nest pair remains relatively simple until we apply Maggie to the bird. The result is two birds that have the habit of flying to the same nest. One bird could be carried off somewhere and given, say, a series of keyboard events. Perhaps the other bird is sent to an agency that calculates successive prime numbers or digits of pi or something and gives them to the bird. What can we observe at the nest? Digits of pi and keyboard events will arrive there interleaved in some order depending on asynchronous processes: the pounding of the keyboard, and the calculation of pi.

For purposes of taking a viewpoint about the values and meanings of the ToonTalk objects, I had decided to view the sequence of objects arriving at a nest as the value of that nest. Let us return for a moment to the simple case of one bird/nest pair before we decided to split the bird using Maggie. In that case, I used to conceive the value of the nest as the sequence of objects to arrive on it over its lifetime. The value of the bird, I suggested, is the sequence of objects to be given to the bird. In the simple case of no bird-splitting, the value of the nest exactly equals the value of its corresponding bird.

But what happens after we split a bird? We have a communication mechanism that is implementing an indeterministic choice. The resulting sequence of objects arriving on the nest cannot be predicted based on the "values" I used to assign to the two splits of the bird. Knowing those "values" (the sequences of objects to be given to the respective birds) tells us which objects to expect to arrive on the nest, but does not tell us their order. Applying Maggie to the birds has had the effect of building for us an indeterministic merge of two streams of objects. So, given my original value interpretation of nests and birds, the application of Maggie to a bird did not appear to be a Referentially Transparent operation.

Now I will tell you a new story about the values of the birds and nests, and in this story, everything is Referentially Transparent.

According to my new story, each of the eggs waiting in Tooly comes informed with an unlimited supply of oracles. When we extract an egg and put it down, the resulting bird/nest pair contains these oracles. They constitute part of its value. This supply of oracles remains undisturbed so long as we don't split the bird.

When we split the bird with Maggie, we create a merge point in the data paths represented by the birds and the nest. The merge point consumes oracles from the oracle supply that had inhered in the bird/nest system from its origin in Tooly. Each oracle consumed[1] tells the merge point which of the two tributary streams to accept the next object from, for delivery to the nest.
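On this story, the merge point becomes an ordinary pure function of the oracle supply and the two tributaries. A minimal sketch (the notation is mine; oracles are modeled as a plain list of Left/Right tokens):

```python
def oracle_merge(oracles, left, right):
    """Deliver objects from two tributary streams to the nest in the
    order dictated by the oracle supply.  Each consumed oracle names
    the tributary whose next object arrives; once one tributary is
    exhausted, the other is drained.  Given a fixed oracle list, this
    is a deterministic, referentially transparent function."""
    out, i, j = [], 0, 0
    for choice in oracles:
        if i == len(left) and j == len(right):
            break
        if (choice == "Left" and i < len(left)) or j == len(right):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out
```

With keyboard events as one tributary and digits of pi as the other, the interleaving observed at the nest is simply whatever the environment's oracle list dictates.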

That's my new story. There's hand-waving in it, but I think it points the way to a rigorous RT interpretation.

Now, I have a question.

In Janus, a predecessor language to ToonTalk, the "birds" did not preserve the order of the messages (they were said to have bag semantics). In ToonTalk, the birds do preserve order; they carry ordered streams of messages. Is there a good reason for a new language to imitate Janus in this regard rather than ToonTalk?

I can speculate on some of the possible reasons the Janus inventors, Kahn and Saraswat, might have had in mind for using bag semantics. For one thing, a purpose of the article in which they introduced Janus was to say something about how it was possible to generalize from the semantics of the Actor model associated with Hewitt and Agha. So bag semantics, in practice if not in name, was already there in the Actor model. But even if we look back to the time of the conception and exposure of the Actor model, we can speculate about why the bag semantics was there, and can ask whether the reasons then justify using that semantics in a new language, one that could do everything the ToonTalk way instead, now that we understand that way.

If the purpose of the bag semantics is to provide an indeterministic merge of event signals from two or more sources, the new language does not need it for that purpose, because it can just imitate Tooly instead, which, as I claim to have pointed out above, supplies enough oracles to give us all the merges we may need.

But what about modeling the economics of network communication? In network protocols, datagram communication is considered simpler than communication of an ordered stream of messages. UDP is considered cheaper than TCP in cases where it suffices. A characteristic of datagram communication is that it does not guarantee to preserve the order of the messages. Maybe, to avoid having the programming language constrain programmers to overspecify the communication needed, the language should treat as fundamental a communication that does not preserve order.

However, preservation of order is not the only difference between UDP and TCP. In a datagram protocol, it is possible for messages to be dropped entirely. This is not the case in the communication channels of the Actor model and Janus. They still need the service, provided by TCP, of making sure no messages get dropped. So they don't model datagram communication, but something more expensive than that.

So, is there a good reason to use bag semantics in a new language?

-----
[1] word "consumed" added on edit.

### Message Ordering Semantics

While I can see several parallelism and partial-evaluation advantages of oracular semantics within a pure functional sub-system (where there are no side-effects for processing a message in advance of receiving it) ToonTalk and Janus do not seem to be such systems.

Also, semantics are only useful if their specification allows for invariants to be maintained across a flexible range of compositions. For example, with 'message-stream' semantics - as you seem to be pursuing with this concept of oracles and nests - ideally down-stream (indirect) observers would see the same message ordering as up-stream observers. But, in practice, there is a lot of intermediate computational machinery... more actors, more nests, birds, robots and boxes. I.e., if messages M0 and M1 arrive at nest/actor A, then get processed, causing (somewhere down the line) messages N0 and N1 to arrive at nest/actor B as a result, what good does asserting the order M0 < M1 do if you cannot also assert N0 < N1? And if you can assert N0 < N1, what restrictions are there on the machinery between A and B that allow you to assert it?

One reason for 'bag' semantics in Actors model is that those semantics - which promise very little - at least remain consistent across a wide variety of compositions - including continuation-passing styles. Similarly, stateless actors (those that never change their future behavior in response to a message) may process all messages concurrently, and still be consistent with bag semantics.

### Message Ordering Semantics

I do not understand what you mean by mentioning continuation-passing styles as optional in the context of the actor model (Hewitt/Agha). Isn't all actor programming continuation passing? I believe the model doesn't include subroutine call and return.

As for your question about propagation of order through a network of state machines in ToonTalk, I can think of example programs where the order would be preserved, and examples where it would not. I suppose programs could be written where proving that order would or would not be preserved would be an NP-complete problem.

A straightforward example in ToonTalk where order would be preserved would have nest A in a box [ #A | ^B ], where #A is nest A and ^B is bird B, the only bird connected to nest B. A team of two robots attends this box. The first robot recognizes [ M0 | ^B ], discards M0, constructs N0, hands N0 to ^B, and terminates without bombing. The second robot similarly recognizes [ M1 | ^B ], discards M1, constructs N1, hands N1 to ^B, and terminates (it doesn't matter how). In this case, we can prove the order of N0, N1. We know that the robot team doesn't process M1 before having processed M0. Robot firing is atomic. The system commits N0 into the ^B --> #B channel in the same transaction as it consumes M0 from #A. Until that happens, the prerequisites for generation of N1 are not present.

Going back to your first paragraph, why would you not regard a ToonTalk (minus "sensors") program as a purely functional subsystem? Is that because it has logical variables? I suggest that they are still RT. Does purely functional mean something more than RT?

Now, I want to go on to remark about input devices. For example, let us say that code in the new language will be translated into Javascript that will interact with the document object model of a browser page. There can be buttons on the screen, and other input widgets. How are the input widgets going to communicate the system inputs to the program (in terms of the new language)? In general, the behavior of the programmed system, when it meets the requirements, may depend on the timing of the input events. We could have the input widgets communicate the input events into a bag channel and timestamp them. That would provide enough information that the program could behave correctly, assuming it got enough CPU time to react in time to meet any real-time requirements. However, it strikes me that for a large and useful subset of applications, the crux of the timing of the input events is simply their order. The straightforward thing seems to be for the language to support a stream of messages for expressing these inputs, even if the language also supports bag channels. Does anyone offer any counterargument?

### Isn't all actor programming

Isn't all actor programming continuation passing?

No. While CPS is often used, so are flow programming and pipes and filters styles, among others.

I suppose programs could be written where proving that order would or would not be preserved would make an NP-complete problem.

In general, I suspect it would be undecidable.

Why would you not regard a ToonTalk (minus "sensors") program as a purely functional subsystem?

ToonTalk is certainly a stateful language. If you attempted to dump a message-stream into ToonTalk, and supported the concurrency implied by the whole robots/birds/etc. mess, you (in general) would get some strange ordering effects where message M0 would potentially see effects of messages that were supposedly sequenced behind it in the stream.

One could write some 'purely functional' programs even in a stateful language. However, since TT is a stateful language, you won't in general be able to use FP compositions, optimizations, abstractions, etc.

The straightforward thing seems for the language to support a stream of messages for expressing these inputs, even if the language also supports bag channels.

The 'straightforward' approach is to serialize the whole program. You could do that for C just as easily as for Haskell.

The infinite 'message-stream' is suitable for describing inputs to a purely functional system, and properly serializes everything (while allowing for much data-parallelism). But the message-stream is not particularly straightforward when dealing with stateful systems, emission of side-effects, concurrency semantics, distribution, that sort of thing.

Other approaches to deal with widgets are also quite useful, and should be mentioned since you keep talking about 'new languages' in general while assuming a particular application architecture. Functional-reactive widgets and forms (data-binding and transformation), with objects or procedures to receive and process user inputs, are of greater interest to me than this oracular/functional approach. I'm not at all fond of MVC or DOM.

### Stateful?

Suppose we have a system that is programmed in a purely functional, lazy language. This system has exactly one input device, a keyboard, and one output device, a typewriter. The streams of input and output are presented as lists. To program the system, one defines a function "main" that the runtime system applies to the list of input characters. The runtime system drives the typewriter from the list returned by "main".

Let the function definition syntax be

<name> ":=" "\" <parameter list> ":" <expression>

I'll write function application as the function name followed by the arguments, and use parentheses for grouping.

Assume that the value of an argument can represent a character that can be typed, a list of values, or a truth value true or false.

Assume character literals written in single quotes, so for example 'x' is a literal for "x".

And assume built-in functions are available:

(equal a b) returns true if a has the same value as b, else false.
(car a) returns the first element of list a.
(cdr a) returns the remainder of list a after removal of the first element.
(cons a b) returns a list whose first element is a and the rest of the list is b.
(if a b c) returns b if a is true, c if a is false, melts the computer if a is not true or false.

Suppose we program the system so:
    main := \ in : (if (equal (car in) 'a')
                       (foo (cdr in))
                       (cons (car in) (main (cdr in))))

    foo := \ in : (cons '1' (foo (cdr in)))

If the input is "holiday", the output should be "holid11". Is the system then in a state? Is this programming language stateful??

### Re: Stateful.

It looks like the output for the program above would be 'holid1'. The 'a' would not produce an output.

Is the system then in a state?

Certainly, yes. After the input of "holiday" the 'system' described by this program would be in a state of indefinitely waiting upon the stream 'in' from the keyboard to produce another value, which shall then be processed by 'foo'. The program internally describes a state-machine for processing a stream of inputs.

Is this programming language stateful?

No, it is not. Language support for stateful programming implies the ability to directly express and abstract state.

That said, one does not require language support for stateful programming in order to achieve stateful programs, especially not with a Turing complete language like the one you describe above. Technically, you could write a C language runtime in the above language, though it would be quite a hassle to compose new input and output devices.

### Toontalk Stateful?

Then why do you tar ToonTalk with the "stateful" brush? I say it is no more stateful than the purely functional language I sketched above. When a robot puts a robot team and a box on a truck (on the floor), this operates analogously to my example above where "main" calls "foo" to provide the rest of the output. The placement of the "foo" call in a syntactic slot that takes the expression for the result to be returned by "main" corresponds, in a translation of the example into ToonTalk, to the placement of a bird for system output in the box to be placed on the truck.

### ToonTalk is stateful.

Being 'stateful' [for a language] is a description of what you can express. That you could, with some discipline, avoid expressing it is (and must be) irrelevant... otherwise you could say that even C language shouldn't be "tarred with the stateful brush" simply because you have the ability to write programs that apply the language in a state-free manner.

Boxes in ToonTalk are mutable, and may serve as a shared mutable reference via duplicate birds sending messages to a common nest. That makes ToonTalk a stateful programming language. You can create new shared cells by distributing a nest along with a box and a team of robots. You can perform state queries (in continuation-passing style) by distributing a bird inside a box carried by a bird. Your ability to write programs that do not take advantage of these features is (and must be) irrelevant.

ToonTalk certainly is stateful. With regards to concurrency, it effectively matches the Actors model, with any given box manipulated by only one robot at a time. In a concurrency scenario, actors model can still suffer race conditions between three or more actors (two actors with a shared reference to a third)... i.e. one can still describe and fail the traditional 'banking' example, with one team of robots serving as a 'bank' and the others asking for certain actions on a shared account (e.g. each attempting to withdraw half the account with simple 'get' and 'set' operations).

### Searching for Boundary of Statefulness Concept

Suppose we have a language with one-shot logical variables, concurrent "procedures" that can fix the values of the logical variables via "teller" tokens, and a runtime system that supplies oracles. Then is that a stateful language? Might it be referentially transparent?

### Single Assignment Variables

Oz/Mozart is a language such as you describe. New variables are typically single assignment (via unification). In Oz, if you attempt to observe an unassigned variable, your process pauses until the variable is assigned. Further, if you attempt to unify a new value with an already assigned variable - and the values happen to not be identical - then the latter process exhibits 'failure'. Peter Van Roy terms this 'declarative concurrency', which exhibits no observable non-determinism in a success case. (Partial failures aren't handled nearly so cleanly.)

Whether a language with single assignment logic variables is 'stateful' will depend on the semantics for observing and assigning those variables. Consider the following:

• Can a process observing a single-assignment variable distinguish between an unassigned variable and an assigned one? If so, then the language will exhibit statefulness via manipulations to whether the variables are assigned.
• Can multiple processes race to assign the single-assignment variable to different values? If so, what are the semantics, and will they allow for stateful communications?
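For concreteness, here is a minimal sketch of an Oz-style single-assignment variable, written in Python with a condition variable (the class and method names are mine, not Oz's):

```python
import threading

class SingleAssignment:
    """Dataflow variable in the Oz style: reading blocks until the
    variable is bound; binding it twice to different values is a
    unification failure, while re-binding the same value is a no-op.
    Deliberately omitted: any way to *test* whether it is bound yet,
    since exactly that test reintroduces observable statefulness."""
    def __init__(self):
        self._cv = threading.Condition()
        self._bound = False
        self._value = None
    def bind(self, value):
        with self._cv:
            if self._bound:
                if self._value != value:
                    raise ValueError("unification failure")
                return
            self._value = value
            self._bound = True
            self._cv.notify_all()
    def read(self):
        with self._cv:
            self._cv.wait_for(lambda: self._bound)  # observing pauses until assignment
            return self._value
```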

### Degrees of state

Whether a language with single assignment logic variables is 'stateful' will depend on the semantics for observing and assigning those variables.

I think we should get over the idea of "stateless" once and for all and start to think instead in terms of the complexity of state along a continuum. Simpler state spaces are easier to manage and predict, and are to be preferred, but striving for "pure statelessness" is to live in willful denial of the reality that computation is conceptually stateful.

The single assignment variable is definitely stateful, but at the very low end of the state spectrum. It has a state space with two options: unassigned and assigned. Any language that can be said to support this concept has no choice but to respond differently to an operation when a variable is in one or the other state, and responding differently in some situations rather than others is pretty much the definition of state.

Even pure functional programs have state: different inputs have different results, except that the state space is fixed at runtime, itself a stateful concept, if you think about it.

Let's spend our time instead striving to constrain our state so that it doesn't cause so many problems; then we'll all be happier.

### I do...

Ha! Yes, I do... I do totally concur with this. :)

+1 !

Otherwise, on the discussion about those "Oracles" per se, I don't think I got all the subtleties of your exchanges above and I don't think I could comment usefully, there.

Except for, maybe... just mentioning that the type of "Oracle" that Turing was thinking of, was likely (well, in my view) to be a form of acceptance of AC (the axiom of choice) laid underneath the whole computability theory itself (thus, likely not the type of "Oracle" you all are talking about here above, I suppose).

### Burton/lennart Oracles

I think the Burton/lennart oracles, which is what I brought up here, differ from the "oracle" concept used by Turing. The point of the former is that if you have a fresh oracle passed in from the environment, it will tell you which calculation finishes first, or on which channel input arrives first, or whether an input arrives before a calculation finishes.

In Haskell, if you strip away the syntactic sugaring that makes operations involving monads briefer to express than they would otherwise be, what remains as the explanation of how a program can satisfy requirements to react? What is it that the runtime system supplies to the main function, and how does the runtime system interpret the result returned by the main function, to make reactive programming possible? For example, if the program is supposed to play chess with a human, then while the program is waiting for the human to decide her move, the program should be thinking ahead about the game. But as soon as the human's move is communicated to the computer, the course of the program's deliberations should change significantly. How is this managed in Haskell?

### Recognizing Success

How shall we know when we have constrained our state sufficiently?

### From the other side

How shall we know when we have constrained our state sufficiently?

I think that is the wrong way to put it. The goal is to use the least amount of state (the most constrained state space) that you need to solve your problem elegantly.

As with any other skill, you move closer to this goal through a deep understanding of your problem and by developing good judgment through experience and continued learning.

### Programming Languages Weblog

The goal is to use the least amount of state (the most constrained state space) that you need to solve your problem [emphasis mine] elegantly.

I am asking from the viewpoint of the design of a programming language to be applicable to a wide variety of problems.

### Mutatis Mutandis

I am asking from the viewpoint of the design of a programming language to be applicable to a wide variety of problems.

Then I guess you want a language that allows you to express the level of state that you want to solve the problem at hand.

An example would be the kernel language approach described in CTM.

### Kernel Language, RT, and Committed-Choice Nondeterminism

Marc Hamann, you say

Then I guess you want a language that allows you to express the level of state that you want to solve the problem at hand.

An example would be the kernel language approach described in CTM.

If I understand correctly, you are recommending an approach to programming-language design in which the designer offers a kernel language, and application writers add what layers on top of it are appropriate to their needs. Well, I think that is a good idea.

I see the purpose of the kernel language, however, as being more than just a technique to drive the computer, reflecting the typical structure of today's computer. I think the designers should choose the kernel language to support reasoning about programs. That way, tools and techniques that work for reasoning about programs in the kernel language will be effectively applicable about programs written in all the higher level languages that are defined in terms of the kernel language.

In thinking about the question of what fundamental ideas the kernel language should be based on, I see at "That way madness lies" et seq, comments by Chris Rathman, Shae Erisson, and Daniel Yokomizo about how good RT is. So, I'm convinced that the kernel language should be RT.

Since an RT language is deterministic, there has to be a source of nondeterminism "from outside the language". What better than an oracle pool being passed into the main routine to express this?

### The Hierarchy of Statefulness

If I understand correctly, you are recommending an approach to programming-language design in which the designer offers a kernel language, and application writers add what layers on top of it are appropriate to their needs. Well, I think that is a good idea.

The feature of the Oz kernel language that I'm recommending here is that it really provides a hierarchy of languages. You start with a referentially transparent core of primitives, and then can selectively add more statefulness by using new primitives.

This makes you think about state as a continuum rather than as a binary distinction, and makes explicit how much power you are using along this continuum.

So for example, you can have a pure declaratively-concurrent program and add a mild form of monotonic state to it by using the primitive IsDet, which allows you to test whether a variable is assigned yet or not.
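
For a concrete (if rough) model of that mild, monotonic state, here is a single-assignment variable with an IsDet-style test, sketched in Python; the class and method names are mine, not Oz's:

```python
import threading

class DataflowVar:
    """A single-assignment (dataflow) variable with an IsDet-style test.

    is_det() is monotonic state: once it reports True it reports True
    forever, since the variable can never become unbound again.
    """
    _UNBOUND = object()

    def __init__(self):
        self._value = DataflowVar._UNBOUND
        self._bound = threading.Event()

    def bind(self, value):
        if self._bound.is_set():
            raise RuntimeError("single-assignment variable already bound")
        self._value = value
        self._bound.set()

    def is_det(self) -> bool:   # analogue of Oz's {IsDet X}
        return self._bound.is_set()

    def wait(self):             # reading suspends until the variable is bound
        self._bound.wait()
        return self._value

x = DataflowVar()
before = x.is_det()   # False: not yet determined
x.bind(42)
after = x.is_det()    # True, and it stays True
```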

Since an RT language is deterministic, there has to be a source of nondeterminism "from outside the language".

I think there is an inherent tension between ease of reasoning about computation and faithfulness in modeling it.

Referential transparency is a conceptual simplification that makes it much easier to reason about a program, and for this reason it is very desirable to have as much of it as you can manage.

But the truth is that the world in general and computation in particular is stateful, and sometimes the simplest way to deal with that is to accept this and deal with it directly.

To take a blazingly obvious example, think about the function now() that returns the current time. Obviously it is conceptually not RT. I could try to make it RT by parameterizing all the functions in my application by some kind of time increment value, and then it becomes the RT now(tick) or whatever.

But that would be awfully heavy for most applications. So why not just make all the rest of the app RT, and admit the non-RT primitive now() exactly where you need it?
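
The trade-off can be seen in a few lines. A hypothetical Python sketch (illustrative names): the ambient version is not RT, while the parameterized version is RT but forces every caller to thread the time value through:

```python
import time

# Not RT: two calls at different moments give different results.
def banner_ambient() -> str:
    return "Report generated at %.0f" % time.time()

# RT: the clock reading is threaded in as a parameter, so equal
# arguments always give equal results -- at the cost of every function
# in the call tree having to pass `now` along.
def banner_rt(now: float) -> str:
    return "Report generated at %.0f" % now

t = 1000000.0
a = banner_rt(t)
b = banner_rt(t)   # same argument, same result
```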

You start with a referentially transparent core of primitives, and then can selectively add more statefulness by using new primitives.

I also appreciate this approach.

In addition to various statefulness and side-effects concept, it is useful to understand and control where certain forms of delay (especially arbitrary or infinite delays and non-termination) are introduced, where non-determinism is introduced, and where 'partial failure' is introduced. (For security, we also want fine-grained control over side-effects, access to state, and confinement.)

But I posit that the language needs to do more than allow you to selectively add concepts. It also needs to support you in achieving this discipline... at least if it is ever to scale beyond whatever toy projects can be developed by just a few programmers. After all, outside of the toys we write during our education, most programs must integrate components written by 'others'. And this compromises the ability of 'you' to 'selectively add more statefulness', since it becomes difficult to observe the level of power being introduced when using code written by others.

Oz fails to provide this support.

You cannot "make the rest of the app RT and just introduce 'now()' where you need it" if you happen to be using someone else's library; after all, you can't know they aren't also using 'now()', nor can you know it won't suddenly be introduced in a future version.

Effect typing is one mechanism to provide language support for layering 'power' into the languages. Another approach is to develop the kernel language itself in layers with syntactic restrictions on how they invoke one another (i.e. a function cannot invoke a procedure even if a function can parametrically abstract said procedure). I favor the layered kernel language design. Object capability discipline is also useful, but helps to control only the highest layer powers.

What Oz does manage to do is provide fine-grained primitives that help an individual developer identify the level of a few concepts being exercised locally (state and non-determinism far more so than delay and partial failure). This ability to reason about local code is useful in education or for an academic language where the projects are small, but I don't believe it to be particularly useful for reasoning about code as a project grows up and escapes the playpen.

### Straw, wood and brick

since it becomes difficult to observe the level of power being introduced when using code written by others

I think this is half strawman and half substantive.

To the extent that I think it is a strawman, it is because part of understanding the interface for someone else's code is understanding how stateful its behaviour is. This is true whether your language is imperative or functional, and whether or not it provides typing support to identify this.

If you can't tell whether a particular function is deterministic or not, you've got a bigger problem than the quality of your PL, and that probably indicates a human problem, not a technical one.

I think on the substantive side there are PL techniques that have been and can be developed further that might help with this. The "blame" concept of Wadler et al. is a promising approach, among other type regimes.

### Huff and Puff

part of understanding the interface for someone else's code is understanding how stateful its behaviour is [...] whether it supports some kind of language typing support to identify this or not.

I agree. The interface to code should make these properties clear.

If you can't tell whether a particular function is deterministic or not, you've got a bigger problem than the quality of your PL, and probably indicates a human problem, not a technical one.

I disagree with the philosophy you express here. One purpose of a quality PL is to help humans solve human problems.

Don't blame people for the failings of their tools. The need for dead (inactive) documentation of any sort - whether it be in providing examples, tests, interface, design, or implementation - should be considered a PL failure.

We will find that some failures of PL technology can't be helped, but we should avoid a bureaucratic solution where a technical one is feasible.

We will find that some failures of PL technology can't be helped, but we should avoid a bureaucratic solution where a technical one is feasible.

I'm afraid this is another strawman. The missing ingredient is communication, which can be as simple as a conversation or well-chosen names for functions and parameters. In less direct situations, such as when the producer of the library is not in proximity to the consumer, yes, (horrors!) documentation may even be called for.

On the other hand, if you don't trust someone else's code, should you really be using it?

Don't blame people for the failings of their tools.

On the other hand, the old saying is "It is a poor craftsman who blames his tools."

Just because tools can and should be made better doesn't mean that all problems can be solved by tools.

### Old Fables

On the other hand, the old saying is "It is a poor craftsman who blames his tools."

And often for good reason: a poor craftsman can rarely afford high-quality tools.

It is true that we need to use the tools we possess, but progress in a field is defined largely by the development of better tools.

The missing ingredient is communication

PL is communication. If communication is the missing ingredient, that's a problem.

if you don't trust someone else's code, should you really be using it?

That depends.

Do you have the spare resources and expertise to reinvent it? If so, are there other priorities to which those resources and expertise might be better applied?

People use code they don't trust. All the time. Every day.

### Mother Goose

PL is communication. If communication is the missing ingredient, that's a problem.

Funny, I remember saying something like that recently...

Are you seriously saying that software teams or library developers and consumers shouldn't have to talk to each other at all except through their code and/or interfaces?

People use code they don't trust. All the time. Every day.

And so having an annotation that tells you it's side-effect free is going to suddenly make it trustworthy?

### Mr. Roboto

Are you seriously saying that software teams or library developers and consumers shouldn't have to talk to each other at all except through their code and/or interfaces?

I'm saying that's the ideal case, though perhaps you should flip that statement around and say: "communicating and discussing ideas via code and interfaces should be easy enough that we aren't forced to fall back on a non-functional 'natural' language".

It would certainly be nice if natural language were precise enough (or interpreters advanced enough) that we could utilize it as code. Imagine developing, sharing, and tweaking prototypes as easily as we currently develop and share PowerPoint slides or sketch out a concept on a whiteboard in the office.

Humans already augment their speech with such functional aspects as addresses and phone numbers; I suspect our language will grow ever more functional as we augment our lives with computation devices, especially as voice-based interfaces become popular. And to see people attempting to do such things as express preferences and tastes as code seems to me like a potential for progress.

The ability to express such things as financial and service contracts and secure trade, determinism or transparency requirements on user interfaces, hardware interfaces, failure conditions, mechanical constraints, architectural blueprints, and so on - in code (or appropriate DSLs) - is a measure of progress.

And so having an annotation that tells you it's side-effect free is going to suddenly make it trustworthy?

Nope. That only tells you it's side-effect free. (I know of no clear relationship between ability to cause side-effects and trustworthiness.)

More usefully, one may also verify - or, optionally, enforce (by, for example, removing an implementation-layer permission for output actions) - that it's side-effect free.

It would just be dead documentation if you make an annotation you cannot utilize.

### non-rt infects the rt?

In my limited experience and understanding, I have found it hard to see how to really separate the mutable from the immutable with great success. Sort of like how const in C++ is a vine that grows ever outward, I feel that as soon as you need something non-RT you are then forced, e.g., to be in the IO monad everywhere, unless you invert the dependencies by only using 'now()' at the top level and passing it in, which seems to be back to the bad design. I assume I am completely missing the point somehow, and would very much like to learn the ah-hah about all of this.

non-rt infects the rt?

I think this is the common misconception. From a "mathematical" point of view, "one drop" of state "pollutes" the whole system, but from an engineering point of view this is a vast over-simplification.

An analogy I like to rely on is radioactive materials. Sure, if you aren't careful with radioactive materials, they will contaminate and poison everything they come in contact with, but if you treat them carefully, have robust practices and procedures for dealing with them, and are conscious of when and where you use them, they are an indispensable tool in energy generation, medical research and treatment, physics research, etc.

State just needs to be treated in a similar way.

"Dependency inversion" (at least in some of its incarnations) is one very useful technique for handling state carefully in this way, but if you have to thread state as a parameter through everything, you would have to wonder if maybe you had gone too far, or if perhaps your analysis of the problem has split its dependencies up the wrong way.

### Schedulability analysis

...is a technique used in formal real-time systems.

### Single-Assignment, More Specifically

Can a process observing a single-assignment variable distinguish between an unassigned variable and an assigned one?

If the process has an oracle, it can use that to distinguish which of two (or more) variables gets assigned first. Otherwise, it cannot do so.

Any oracle used in the program will be traceable to the arguments passed to the main procedure by the runtime system.
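
As a dynamic model of this (Python for concreteness; in the RT story the verdict is fixed by the environment before the program runs, whereas this model discovers it by watching the race), an oracle can be pictured as a one-shot token whose consumption reveals which of two events completed first:

```python
import threading

class Oracle:
    """One-shot decision token. In the RT story, the oracle 'knew all
    along' which event was destined to finish first; operationally, this
    sketch discovers the verdict by racing two watcher threads. Either
    way, the token must be consumed exactly once."""

    def __init__(self):
        self._consumed = False

    def decide(self, a: threading.Event, b: threading.Event) -> str:
        if self._consumed:
            raise RuntimeError("oracle already consumed")  # dynamic linearity check
        self._consumed = True
        verdict = {}
        done = threading.Event()

        def watch(tag, ev):
            ev.wait()
            verdict.setdefault("tag", tag)   # the first arrival wins
            done.set()

        threading.Thread(target=watch, args=("Left", a), daemon=True).start()
        threading.Thread(target=watch, args=("Right", b), daemon=True).start()
        done.wait()
        return verdict["tag"]

a, b = threading.Event(), threading.Event()
o = Oracle()          # in the full design, traceable to main's arguments
b.set()               # b becomes available first
first = o.decide(a, b)          # "Right"

second_use_failed = False
try:
    o.decide(a, b)              # re-use is an error: the oracle is linear
except RuntimeError:
    second_use_failed = True
```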

Can multiple processes race to assign the single-assignment variable to different values?

No.

[edit:] Is this a stateful language?

### Stateful Language

Can multiple processes race to assign the single-assignment variable to different values?

No.

Easier said than done, actually.

To guarantee that a single-assignment variable is assigned at most once (and, ideally, at least once), whilst allowing the assignment to be separate from the variable's declaration, effectively requires linear-typing on the variable's 'teller token'.

But this can be done, and probably should be if you're aiming for a safe language.
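
A dynamically checked version of that 'teller token' discipline can be sketched in a few lines (Python; all names are mine): the teller is consumed by its single use, so a second assignment attempt is caught at runtime rather than by a linear type system:

```python
class Teller:
    """One-shot capability to assign a single-assignment variable.

    Dynamic checking stands in for the linear typing discussed above:
    the capability destroys itself on use."""
    def __init__(self, cell):
        self._cell = cell

    def tell(self, value):
        if self._cell is None:
            raise RuntimeError("teller already used (linearity violation)")
        cell, self._cell = self._cell, None   # consume the capability
        cell.append(value)

def new_var():
    """Declare a variable; returns (asker, teller). The asker may be
    copied freely; the teller must be used at most once."""
    cell = []
    def ask():
        return cell[0] if cell else None      # None: not yet assigned
    return ask, Teller(cell)

ask, tell = new_var()
unassigned = ask()        # None
tell.tell("hello")
assigned = ask()          # "hello"

reuse_failed = False
try:
    tell.tell("again")
except RuntimeError:
    reuse_failed = True
```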

Any oracle used in the program will be traceable to the arguments passed to the main procedure by the runtime system.

I'll need a little clarification here.

Single-assignment variables are presumably introduced at arbitrary points in the program. Are you assuming the main procedure being passed some sort of 'oracle-generator' from the language runtime that, given two single-assignment variables, can produce an oracle that decides which of the two is assigned first?

I.e. something like: {{MyOracleConstructor A B} Left Right} returns Left if A is assigned before B is assigned, otherwise returns Right.

[edit:] If so, then how do you account for the functional semantics in certain cases: ({MyOracleConstructor A B} != {MyOracleConstructor C D}), but A=C and B=D.? Are you going to thread the 'world' as an extra, linear parameter?

Is this a stateful language?

The specifications thus far do not imply a stateful language. But your specification does not cleanly exclude the possibility of having stateful features that you did not list: "we have a language with one-shot logical variables, concurrent "procedures" that can fix the values of the logical variables via "teller" tokens, and a runtime system that supplies oracles".

I make a distinction between "stateful language" vs. "stateful program". A stateful program is one where establishing the program state requires the program's history or some other form of save-state. Haskell is not a stateful language, but one may easily develop stateful programs in it via integration with the Haskell IO monad [edit: or even a pure fold function over a list of user inputs]. Similarly, one can write state-free programs in a stateful language.

### Single-Assignment with Linear-typed (Oracles and Tellers)

Assume a linear type of oracle and a linear type of oracle collection. The environment invokes the main routine with an oracle collection as one of the arguments (I'm not sure whether such a collection conforms to the notion you are invoking when you use the term "generator"). The holder of an oracle collection can split it into a single oracle and another oracle collection. Or it can split it into two oracle collections. The holder of an oracle can consume it to decide which of two logical variables (to which it holds "askers") shall "first" receive assignment (the value assigned might be only partially known, e. g., (cons 2 _something_)) (the notion of "first" is local; in another context mentioning askers to the same variables, the answer might be different, and would require the consumption of a distinct oracle).
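
The splitting protocol for the pool can itself be modeled with dynamic linearity checks (Python sketch; a seeded generator stands in for the environment's verdicts, since the only point illustrated here is the linear plumbing, not where the nondeterminism really comes from):

```python
import random

class OraclePool:
    """Linear collection of oracles. A pool may be split into
    (oracle verdict, pool) or (pool, pool); every split consumes the
    original, enforced dynamically here."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
        self._live = True

    def _consume(self):
        if not self._live:
            raise RuntimeError("oracle pool already split (linearity violation)")
        self._live = False

    def split_oracle(self):
        self._consume()
        verdict = "Left" if self._rng.random() < 0.5 else "Right"
        return verdict, OraclePool(self._rng.random())

    def split_pools(self):
        self._consume()
        return OraclePool(self._rng.random()), OraclePool(self._rng.random())

# The runtime environment passes one pool to the main procedure:
env = OraclePool(seed=42)
verdict1, rest = env.split_oracle()
pool_a, pool_b = rest.split_pools()
verdict2, _ = pool_a.split_oracle()

reuse_failed = False
try:
    env.split_oracle()            # the original pool was consumed
except RuntimeError:
    reuse_failed = True
```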

Features I did not list: number literals, arithmetic, magnitude comparison on numbers, character string literals, concatenation, substring, symbol literals, cons/car/cdr, empty list, all shallow equality tests (but not on tellers), type tests (not consuming linear things tested), procedure definition that can close on variables. The body of a procedure is an unordered collection of commands. A command can call a procedure or unify a teller to an expression or do any of the operations I describe for an oracle or any operation I describe as a "primitive".

As you surmised, single-assignment variables can be introduced at arbitrary points in the program. This gives rise to an asker and a teller. The asker can be copied or destroyed arbitrarily.

As you point out, some of the data paths in the language I'm describing need to be linear typed. I think this can be enforced statically or dynamically. I think that dynamic enforcement is easier to describe. I can just say that attempting to copy a linear argument or variable (a variable whose assignment turns out to be linear) by simple replication of the identifier for it is a runtime error. For the time being, let's say that a teller can be dropped on the floor, in which case its variable will never receive assignment.

A closure or a cons that has anything inside it (found at whatever depth of nesting) that is linear has to be treated as linear. I am thinking of including an "imperfect copy" primitive that, given any argument, generates two "copies", one of which exactly reflects the original, and the other of which copies all the original's copyable aspects but substitutes askers for tellers (treating oracles as tellers). I think the imperfect copy might make a better language, but it isn't essential to the main purpose for which I'm putting up this description in this conversation. If dmbarbour says this language is not stateful, I will argue that neither is ToonTalk.

I think I haven't left anything out now.

Your linear types (oracle pools, etc.) are borderline since they're ultimately not constructed 'within' the language. But I'd say the language itself isn't stateful. Any stateful program will be stateful by interpretation: one that threads state with IO in an outer loop. Your oracles don't contribute to statefulness one way or the other, but they do allow non-determinism to be introduced (each oracle introduces a single bit of non-determinism).

But to argue that therefore neither is ToonTalk stateful? I am skeptical. ToonTalk's boxes have no analog in the language you present.

### Translation from ToonTalk

Where I said ToonTalk, I meant, "ToonTalk minus 'sensors'.". I will continue to refer to ToonTalk without sensors as simply "ToonTalk". What I think is [edit: thought was] interesting about ToonTalk for computer science is the bird/nest communication scheme, and the fact that ToonTalk is one of the few concurrent constraint languages that is easy for any of us to play with (although it does require Microsoft Windows). Van Roy mentions that one of the chapters of CTM describes a concurrent constraint language with single assignment, and I suppose it is easy enough to download Oz and play with it, sticking to the subset described in that chapter. [withdrawn: "However, I think ToonTalk is more interesting than that subset of Oz in that ToonTalk separates askers from tellers and Oz does not." As Van Roy mentions below, Oz now offers this separation.]

For analyzing ToonTalk (TT), I promote a viewpoint from which we can see all TT objects as linear-typed. The animation may make it look like when we drop an object in a hole in a box (for example), we end up with the "same" box only mutated by having had the object dropped into it. And that is the way it would be described to a child programmer. However, as computer scientists we can instead take the viewpoint that the interaction consumed the box we had and left us with a new box.

TT boxes are essentially indexable tuples. Each slot contains either Emptiness or a TT passable object (Tooly, for example, cannot be picked up and dropped into a box, but most objects you see can be). In the stateless concurrent constraint language I described above (which I claim is also RT), there are two structures that could be used to represent TT objects: lists and procedures (in fact, I could have left lists out and you would be able to construct them with procedure closures). Using procedures (constructed with closure) to represent TT objects would make for a more OO style; however, I think the scheme of using lists is easier to describe for this discussion.

In describing the stateless single-assignment concurrent-constraint language, I did not include any indexable structure. However, we know we can make those out of recursive algorithms on lists.

I think the rest of the Box story should now be obvious.

If I had introduced a functional array type, I don't think that would have influenced your determination that the language is stateless.

The functionality of Tooly has to be replicated for all contexts. However the initial state of a TT interpretation is set up in my language, the ultimate source of the oracle pool in any appearance of Tooly must be the oracle pool passed into the main procedure by the runtime environment surrounding an execution of code in the language I described. A Truck must contain an oracle pool so as to extend this Tooly functionality to the new process. This is easily achieved if we think of Tooly's instances in terms of a linear type. Extracting a Truck from Tooly consumes that instance of Tooly and creates a new one. We split the oracle pool from the Tooly we started with, and include one of the new oracle pools in the new Tooly and the other in the Truck.

I represent a bird as a list ['Bird', teller for tail of message stream, oracle pool]. Applying Maggie to a bird:

  MaggiefyBird
    original_bird: ob first_new_bird: nb1! second_new_bird: nb2! ->
      nb1! := [Bird tail1! op1]
      nb2! := [Bird tail2! op2]
      [_ orig_tail! orig_pool!] := ob
      orig_pool splitIntoPools: intermediate_pool! and: merging_pool!
      intermediate_pool splitIntoPools: op1! and: op2!

      Merge
        usingPool: a_pool
        source1: source1
        source2: source2
        destination: destination -> (
          a_pool splitIntoPool: new_pool! andOracle: decider!
          verdict! := decider who_is_first?: source1 or: source2
          if verdict = Left
            datum! := source1 car
            new_source1! := source1 cdr
            new_source2! := source2
          else
            datum! := source2 car
            new_source1! := source2 cdr
            new_source2! := source1
          fi
          Cons cons: destination car: datum cdr: new_destination
          Merge
            usingPool: new_pool
            source1: new_source1
            source2: new_source2
            destination: new_destination!
        )

      Merge
        usingPool: merging_pool
        source1: tail1
        source2: tail2
        destination: orig_tail
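
A runnable counterpart of the Merge above can be sketched in Python, using queues for the message streams. Here the 'oracle' consumed at each step is played, operationally, by whichever source actually delivers an item first; the merged stream preserves each source's internal order:

```python
import queue

def merge(source1: queue.Queue, source2: queue.Queue,
          destination: queue.Queue, n: int) -> None:
    """Merge n items from two streams into one. Which source supplies
    the next item is decided by arrival order -- the operational face of
    consuming one oracle per merged element."""
    for _ in range(n):
        while True:
            got = None
            for src in (source1, source2):
                try:
                    got = src.get_nowait()
                    break
                except queue.Empty:
                    continue
            if got is not None:
                destination.put(got)
                break

s1, s2, out = queue.Queue(), queue.Queue(), queue.Queue()
for x in ("a1", "a2"):
    s1.put(x)
s2.put("b1")
merge(s1, s2, out, n=3)
merged = [out.get() for _ in range(3)]
```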

### Turing Equivalent != Stateful

Keep in mind that you could do this same sort of thing - linear typing of the heap and stacks and such, producing a logical copy of these plus some delta after each step, in order to describe C in your language. C "without sensors" (input) would be especially easy to describe this way, would it not? Even C's IO streams or Tooly with sensors could be supported, given some linear representation of 'infinite IO streams' or 'world' on the main loop.

What you argue is Turing equivalence. Turing equivalence is just a minor point in a discussion on expressiveness. Any Turing complete language can represent a state abstraction. Stateful languages manage to do so in a local, modular fashion.

However, I think ToonTalk is more interesting than that subset of Oz in that ToonTalk separates askers from tellers and Oz does not.

I vaguely recall reading that Oz has since added support for distinguishing the two in order to help catch mistakes. But I don't use the language, so I haven't bothered learning the syntax for this feature. I had the impression it was an op to split the variable after declaration, but it might be a distinct operation for unify (i.e. a right-to-left-only variation of '=').

### Arguing from "Direct" Mappings Between Programming Languages

Well, yes, you can do that, but there is a difference between pointing out that any Turing-equivalent language can interpret any other, and pointing to an interpretive rule that is very direct. In the Lucy/Janus paper, Kahn and Saraswat showed a simple translation from the Actor model of Hewitt and Agha to the language they introduced, Lucy. The Actor model had been described in terms of its effects, but casting it into Lucy exposed a way of seeing a meaning of the operations (bag semantics). Kahn and Saraswat then went on to show that Lucy was a crippled subset of Janus. Therefore, they reasoned, the Actor model is a crippled subset of a more expressive language, Janus. Are you saying that their showing of a direct equivalence between the Actor model and their Lucy language had no more significance for understanding what the Actor model means, what it is, and what its fundamental characteristics are, than I would be signifying about C by pointing out that it is possible to interpret C in Haskell?

In regard to ToonTalk's Sensors: The reason I avoid talking about them is not that they bring input. A system could use birds and nests to communicate with the I/O devices. My problem with sensors is that they seem to break the idea that the firing of a robot (a robot is simply a rule) can be seen as an atomic transaction.

### Direct Interpretive Rule

You can actually make interpretation of C in Haskell look quite nice by abundant use of monads, you know. Haskell even makes it easy to hide the fact that you're logically passing the entire heap and multi-thread state from step to step...

You're achieving a similar benefit by abundant use of oracles to ensure progress in the face of random tellers being dropped to the floor. It would be even nicer if you made it easy to hide the fact that you're splitting and passing those oracle pools around and applying them to every single stream access...

This isn't a bad thing! Limiting expressiveness of the host language often buys you some useful features at a relatively minor cost. Further, that loss of expressiveness may be recovered in a higher language layer or greatly mitigated by support for EDSLs or syntactic extension, coding disciplines, non-local 'software design patterns', etc.

But once you involve a non-local coding discipline, design pattern, or transform, you're effectively writing code in a new language - albeit potentially one with an awfully obtuse, non-local, repetitive syntax. EDSLs tend to make it much more obvious that you're writing in another language, and they reduce repetition and the associated risk of error.

By chaining oracles to main, you don't need to thread state through the main loop. This offers the illusion that the 'state' abstraction is local. But this illusion was only achieved by use of a non-local discipline and transform. It is not a "direct interpretive rule".

Oz has since added support for distinguishing the two

In Oz there is a function that lets you create a read-only view of a variable:

  declare X Y in
  Y = !!X

Here, X is an unbound variable and Y is a read-only view of this variable. Attempting to bind Y will suspend until another thread binds it. This can be understood in terms of capabilities: standard unbound variables have two capabilities (read and bind), read-only variables have only a read capability.
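
The capability split can be modeled directly (a Python sketch using concurrent.futures.Future; this is not Oz's actual implementation): the underlying variable carries both read and bind capabilities, while the view carries only read:

```python
from concurrent.futures import Future

def declare():
    """Model of `declare X Y in Y=!!X`: returns X (read + bind) and a
    read-only view Y of the same variable."""
    x = Future()

    class ReadOnlyView:
        def is_bound(self):
            return x.done()
        def wait(self, timeout=None):   # suspends until X is bound
            return x.result(timeout)

    return x, ReadOnlyView()

x, y = declare()
bound_before = y.is_bound()    # False: X not yet bound
x.set_result(7)                # binding X...
value = y.wait()               # ...is visible through Y, which cannot bind
```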

Thank you.

### Functional-reactive widgets

Functional-reactive widgets and forms (data-binding and transformation), with objects or procedures to receive and process user inputs, are of greater interest to me than this oracular/functional approach.

Can this work in a framework or notation that honors Referential Transparency (RT)?

Does this work by the program informing the widget something to "do" when someone operates it? If so, what kinds of arguments get passed toward the widget in specifying this to it?

### Reactive Widgets Overview

Can this work in a framework or notation that honors Referential Transparency (RT)?

There is no state internal to the notation, if that's what you're asking. The functional reactive approach does not require a document object model or retained mode elements, and I'm of the opinion it would be severely harmed by such stateful features, were they included.

That said, a functional reactive design isn't worth much if the 'environment' in which it appears is stateless. This isn't a problem because real environments (sensors, actuators, services) are far from entirely stateless.

Does this work by the program informing the widget something to "do" when someone operates it? If so, what kinds of arguments get passed toward the widget in specifying this to it?

Not quite. Saying that the program "informs the widget" suggests that "the widget" exists as a stateful entity within or separate from "the program". This is not the case.

Instead, the widget is 'described' by a functional expression modified with flexible bindings to external data resources. For example, a text widget might be bound to the current stock prices reported by an external service. As the external resource changes, so does the widget. A web-browser or similar application is responsible for gathering and keeping up-to-date the external data resources, computing the widget descriptions, and displaying everything to the user.

Part of each widget's description is what to do when a user interacts with the widget: a variety of 'onClick', 'onDragAndDrop', etc. specifications are provided alongside the visual description. Ideally, these are also described reactively - i.e. a procedure whose behavior can vary based on external resources. Which procedures exist, and how they are parameterized, depends heavily upon the browser specification and framework. For example, 'onClick' might be parameterized by the precise mouse coordinates within the window, or might not be.

There is no specification within these procedures of how the actions taken by the procedures will influence the displayed widgets. Instead, any such influence must occur through an implicit loopback: the procedures influence external systems, which affects their state, which in turn affects the widgets bound to the external systems. That is, at least, the ideal case; in practice, the browser may need to guess or be informed of 'anticipated' changes to external resources to help keep the UI snappy in the event of network delays.
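
That loopback is easy to miniaturize (Python sketch, hypothetical names throughout): the widget description is a pure function of an external resource, and the handler only pokes that resource; the 'browser' then recomputes the description:

```python
external = {"count": 0}     # the external, stateful resource (e.g. a service)

def widget(state):
    """Pure description: same resource state in, same description out.
    Note the handler says nothing about how the display will change."""
    return {
        "type": "button",
        "label": "clicked %d times" % state["count"],
        "onClick": lambda: external.update(count=external["count"] + 1),
    }

def browser_render():
    # The browser recomputes the widget from the current resource state.
    return widget(dict(external))["label"]

first = browser_render()              # "clicked 0 times"
widget(dict(external))["onClick"]()   # a user click changes only the resource
second = browser_render()             # "clicked 1 times": the implicit loopback
```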

Special support may be included for navigation, view transforms, and like behavior that interact with the browser. (Ideally, a view transform is identical to navigation, such that views may be bookmarked. Ideally, one can navigate to arbitrary functional-reactive expressions, as opposed to navigating only to URIs that may return such expressions.)

Usefully, this design is composable and decomposable and works well with RESTful and publish-subscribe architectures (and under temporary disruption). Mashups are natural: you can extract (functionally or by hand) individual widgets from external UI composites, then put them together into a new composite (the mashup), and interactions with them will be identical as if they were still embedded within their original composite UI. One may (if the browser allows) also embed other UI applications or link to them, akin to <iframe> and <a>.

Concurrent and distributed sharing of applications works easily since there is no 'client-side' state. For performance and disruption tolerance, though, client-side state is often useful. Ideally, this is performed by distributing or mirroring part of the server upon the client host. This would allow a new form or whatnot to guide a user through decision trees and such without continuous network events. My interest in automatic distribution and security is in part for this purpose.

By reactively computing UI composites (i.e. a form with multiple buttons, checkboxes, and sliders), the set of widgets may be dynamic, based on real-time changes in real systems. For these cases, the browser must adhere to certain principles and be careful not to change which 'widget' is under the mouse or has focus. But this is a very useful feature when dealing with systems of systems.

The fudgets approach assumes a very different architecture and environment than does the functional reactive approach. Fudgets emulate a retained-mode application architecture, making the Fudget UI 'stateful' in the sense that the display cannot be regenerated without a complete history of interactions (i.e. the stream of oracles up to that point).
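The 'stateful' claim above can be pictured as a fold: the current display is a function of the entire interaction history, so replaying the same stream reconstructs the same display. A minimal Python sketch, with hypothetical event names:

```python
from functools import reduce

# Sketch: a fudget-style display is a fold over the complete interaction
# history (the 'stream of oracles'); replaying the same event stream
# reconstructs the same display. Event names are hypothetical.

def step(display, event):
    """One interaction updates the retained display state."""
    if event[0] == "type":
        return display + event[1]
    if event[0] == "backspace":
        return display[:-1]
    return display

def replay(history, initial=""):
    return reduce(step, history, initial)

history = [("type", "a"), ("type", "b"), ("backspace",), ("type", "c")]
print(replay(history))   # "ac" -- the display is a function of the whole history
```

Without the history there is no way to regenerate the display, which is the sense in which the UI is stateful.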

### Fudgets emulate a

Fudgets emulate a retained-mode application architecture, making the Fudget UI 'stateful' in the sense that the display cannot be regenerated without a complete history of interactions

Isn't any functional reactive structure also a retained mode architecture? It certainly seems to fit that definition.

### Many popular retained-mode

Many popular retained-mode architectures are intrinsically imperative (say WPF): we mutate scene graphs and object attributes to control rendering indirectly.

Functional reactive programming is declarative retained-mode: the scene graph and object attributes are specified through some kind of declarative binding construct; e.g., transforming values or using combinators like arrows. SuperGlue is another example, as it uses declarative rules, while databinding in WPF is a semi-example: a databinding is declarative but installing a databinding is imperative and the property can always be assigned imperatively.

### Functional reactive

Functional reactive programming is declarative retained-mode: the scene graph and object attributes are specified through some kind of declarative binding construct

That's what I thought. It just sounded like David was implying that the difference between Fudgets and FRP was a retained-mode UI; they're both retained-mode architectures, just with different approaches to declaring dependencies and managing changes.

### Retained Mode is One Difference.

I wouldn't use the words 'the difference' by any means, as though there were only one.

Sean and I seem to emphasize different aspects of what it means to be 'retained mode' - for example, I emphasize where state is being maintained. If you review the Wikipedia def, you'll see: "client calls do not directly cause actual rendering, but instead update an internal model (typically a list of objects) which is maintained within the [graphics] library's data space".

I suspect Sean emphasizes the phase-separation of describing a scene-graph then displaying it, i.e. that "client calls do not directly cause actual rendering". Or, perhaps, he emphasizes that the graphics system is informed by a 'model' of what to render.

Sean has loosely described a 'declarative' variation of 'retained mode'. I'm somewhat curious how Sean defines a corresponding 'declarative' variation of 'immediate mode' such that functional reactive is excluded. I'm not apt to agree at the moment with Sean's understanding of these definitions, but I accept that a quibble over definitions isn't a big deal. He's certainly correct to point out the various semi-examples like WPF.

To me, pure functional reactive is 'immediate mode' because the graphics API maintains no meaningful state from frame to frame. An implementation might cache a little (or even a lot), but that's optimization and is not required of the semantics. Property for property, under such conditions as disruption and distribution and partial failure, functional reactive is more similar to immediate mode than to retained mode.
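The 'immediate mode' reading of pure FRP can be shown in miniature: each frame is recomputed from the current observed state alone, so no frame-to-frame state exists in the semantics. A hypothetical sketch (`render_frame` and the state shape are invented):

```python
# Sketch of the 'immediate mode' reading of pure FRP: each frame's output
# is a function of the current observed state only; nothing survives
# between frames in the semantics. All names are hypothetical.

def render_frame(state):
    """Recomputed from scratch every frame; depends only on `state`."""
    return [("text", f"price: {state['price']}"),
            ("button", "refresh")]

# Two frames with equal state render identically -- there is nothing
# retained that could make them differ.
frame1 = render_frame({"price": 42})
frame2 = render_frame({"price": 42})
assert frame1 == frame2
print(frame1[0])   # ('text', 'price: 42')
```

An implementation may cache `frame1` and diff against it, but that caching is invisible at this level, which is the point of the paragraph above.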

### First, I don't think wiki's

First, I don't think wiki's def is very accurate or descriptive; it makes the mistake of assigning incidental properties (state-free) to what it means to be retained or immediate.

Immediate mode rendering API as in DirectX 10: you get called to draw things; there may or may not be state involved in that on your part. Actually, since the renderer isn't doing much for you, you probably have to maintain a lot of imperative state on your own. The main advantage of immediate mode rendering is that you control when to redraw and what to redraw, and you access the rendering context directly (almost complete control). Perhaps you are referring to the rendering system as not having any state, which is only kind of true, as DirectX 10 will remember your configuration data between frames.

Retained mode rendering API as in WPF: the UI and rendering threads are completely separated. You don't get called during rendering, and can only influence by manipulating the rendering graph. Because the rendering API is maintaining a lot of state, you can get away with writing a complete UI without explicitly manipulating any state at all (e.g., FRP or through databinding). You could also get away with this in an immediate mode API, although the result would necessarily be just a static picture.

When I talk to teams at Microsoft, this is how they distinguish between the two APIs. Direct2D, for example, is an immediate mode API (like Direct3D) for rendering in 2D, a high performance alternative to WPF. Immediate has the reputation of being hard and fast, while retained has a reputation for being easy and slow. FRP is the antithesis of immediate mode rendering, as it abstracts away the details of the rendering process; it is very much a retained API if we compare its similarities to WPF and DirectX 10.

Of course, terminology is culture specific. It's completely reasonable to say immediate = state free and retained = stateful, and this kind of makes sense given the words themselves. But it's not how the terms have evolved to be used in the graphics field by any means.

### Opacity, Grouping, Transforms, etc.

I somewhat like your explanation and agree David is relying too much on the Quantum Encyclopedia as an authoritative source.

In retained mode, the graphics subsystem is responsible for sending rendering instructions to the device layer. It is also responsible for persisting the rendering of all objects in the system. This leads to the need for lazy evaluation mechanisms such as "UI Virtualization" and "Data Virtualization", where objects are only loaded into the graphics subsystem's rendering core on demand or in anticipation of being demanded. Some loading is required regardless, such as loading an object when the object in front of it is not 100% opaque (this might not be the actual "if" clause logic, but intuitively it is how you should reason about the retained mode model). Data Virtualization is a finer-grained optimization over UI Virtualization, where the scene graph's object graph management is smart enough not to load its entire resource stream. Instead, it dynamically determines the amount and section of resources to pull based on retained mode constraints.

Furthermore, you can do other fancy object graph management with UI Virtualization itself. For example, you can pool a collection of composite containers at a particular depth in the scene graph hierarchy. WPF refers to this as UI Container "Recycling".

Finally, getting dynamic grouping right in object graph scene management, as well as streaming the right data into memory for the object graph's rendering constraints, is non-trivial and completely busted in WPF, and just totally hackish. "Controls" check for the presence of certain attached properties to provide virtualization helpers, but this does not solve real performance problems, only covers them up. In order to collaborate with the imperative Binding / dynamic Grouping feature in WPF, the retained mode system has to use some information available to it. The best information it has is to render by "data item" from your application's object model. For grouping, this does not make much sense or provide enough information. As a result, WPF is forced to eagerly stream resources and not use UI virtualization when you do grouping; instead it uses pixel-by-pixel scrolling. This is bad because it makes reasoning about resource consumption in your App difficult, and tracking down resource leaks is a PAIN IN THE NECK.

One last nit, that I am sure you know but others might not understand:

Retained mode rendering API as in WPF: the UI and rendering threads are completely separated. You don't get called during rendering, and can only influence by manipulating the rendering graph.

The UI thread indirectly influences the rendering process through a complicated system of callbacks that get triggered as changes propagate through the UI graph. These changes send messages to the rendering thread that an invalidation occurred. This is not much different from GDI/GDI+ sending Invalidate() messages, except that the flow of information is in the opposite direction. This allows the UI to maintain a serialized tree of drawing primitives and high-level abstractions like "Visual"-typed objects.

Edit: One consequence of retained mode systems is that the kernel driving the retained mode process cannot possibly anticipate certain needs, such as the scene graph management of a parallax scroller. However, parallax scrollers also happen to be a problem domain well-suited to "easy and slow" applications... unless of course you are trying to build a game in Silverlight without hardware acceleration.

Edit #2: In summary, in WPF, the retained mode system does not retain the visual rendering of objects, but rather the visual objects themselves. Things like UI Virtualization sit in front of this process to provide a sort of "wait-based performance tuning", where you create more visual objects (cached in a buffer) so that they are displayed as demanded. With Immediate Mode, there are no such built-in optimizations, as the programmer is free to decide on his own policies for performance tuning a scene graph. This means that in Retained Mode, the primitives are serialized objects, whereas in Immediate Mode, the primitives are rendering instructions.

### Re: 'Quantum Encyclopedia' dig

I somewhat like your explanation and agree David is relying too much on the Quantum Encyclopedia as an authoritative source.

I am not doing so. I used Wikipedia's def only because Sandro referenced it in his earlier statement. I felt some need to point out that his "this looks a lot like retained mode" was far from clear even by the definition he linked.

My own understanding of 'immediate mode' vs. 'retained mode' comes from outside the Microsoft community. I maintain that the semantics of a (pure) FRP UI are equivalent to Immediate Mode. That is, you express in each frame what is to be displayed, without reference to older state. Similarly, like immediate mode, there is no maintained object identity for 'widgets' and 'menu items' (and, ideally, not even 'windows'); semantically, those are reproduced after each frame, and any maintained state is an optimization, not part of the semantics.

I don't accept Sean's objection about control over rendering details... one would have the same concerns for control over detail offered by any immediate-mode API. Most rendering APIs "abstract away the details of the rendering process". So do most models of things to render. You're just as free to support pixel-accurate models of things to render as you are to support pixel-accurate imperative rendering.

There are, of course, many impure variations on FRP. Bling and WPF are among them. I've seen models where you can (within the FRP notation) describe step-functions that are influenced by both prior states and live inputs. These might be much more 'retained mode' like. Most of Sean's objection points to WPF, which I don't really consider relevant to my conclusion since WPF is not a pure FRP UI.

But all this is a minor nit on definitions, anyway. The difference between fudget UI and FRP UI is much wider than retained mode (widgets with object identity) vs. immediate mode, despite both being based in functional programming.

### Sorry

My main point was to illustrate accidental complexities that can lead to why people have a hard time understanding retained mode programming.

I forgot to mention that I see FRP more as a general technique than as an example of retained mode or immediate mode. I think that is a false alignment not worth arguing over. I didn't even bring up FRP in my previous post. [Edit: I don't think you should align FRP to either side, for reasons hidden in my previous post.]

I don't always like how FRP papers describe how their model works. In my mind, cause and effect are inverted by description. Bottom line.

### Neither Side

I don't think you should align FRP to either side, for reasons hidden in my previous post.

I'd accept that position, too.

Nonetheless, fudgets are pretty clearly 'retained mode' in the sense that developers express widget futures as functions of their histories.

### Mostly, we are arguing about

Mostly, we are arguing about terminology vs. content. My main point is that if you use "immediate-mode rendering" in an explanation of FRP to any graphics guy (not just Microsoft...), they'll get really confused because you are using terminology in a way that they aren't used to. Immediate mode rendering to them...abstracts away as few rendering details as possible, it's a pretty minimal wrapper around the graphics card.

I'm happy not to use the terms at all, since they don't really help to distinguish what FRP is.

### I felt some need to point

I felt some need to point out that his "this looks a lot like retained mode" was far from clear even by the definition he linked.

My understanding was very close to what Sean articulated, re: one has only indirect control over rendering by manipulating higher-level objects. FRP does have access to history via delay combinators, so assuming I understand your characterization, I'm not sure it's 100% faithful.

### RE: Delay Combinators

'Delay' and delay combinators are not part of a pure FRP notation any more than explicit mutation is part of a pure functional notation. In pure FRP, the only way to obtain access to a history is if an external data resource, such as a database, just happens to be keeping a history.

Of course, similar to how the vast majority of functional languages are impure, the same can be said of FRP languages. Many impure FRP languages do support delay combinators. Many also allow explicit mutation and other side effects to be emitted by functions.

re: one has only indirect control over rendering by manipulating higher-level objects

Indeed. In retained mode, you and the renderer share access to a set of objects, and you manipulate rendering by manipulating this set of objects. As a classic example, there might be a 'button' object, and if you change the state of that button it will change what is rendered. I do insist that the whole "high level" idea is not relevant; the shared set of objects may also be very low level... it's just a matter of abstraction.

But in (pure) FRP, you are not manipulating objects. You are reflecting objects. They aren't even shared objects, and in general have nothing to do with abstractions the renderer understands. That is certainly not 'retained mode'. Sean and John insist this is also not 'immediate mode', and I'm willing to accept that.

By comparison, fudgets are a classic example of retained mode. Developers manipulate state abstractions like the text of a 'button' object, and that affects rendering. It just happens that these state abstractions are implemented functionally.

### Think in terms of representations of objects

Indeed. In retained mode, you and the renderer share access to a set of objects, and you manipulate rendering by manipulating this set of objects.

While it is certainly possible to tightly couple the rendering and UI in the way you suggest, it can be more beneficial to separate these two loci of control. The UI, from the application programmer's perspective, manipulates a shadow of what is on the screen. Thus, they may not share anything. Sharing memory is an implementation detail. [Edit: I could argue earlier versions of DirectX made this much more obvious, but don't want to crawl through that muck, as that technology is ancient for a reason.]

### Shared Objects

It is unclear to me why you assume sharing access to a set of objects means sharing memory.

### A variant of 'collect' is in

A variant of 'collect' is in the core of any real FRP system that I've heard of and is explicitly for collecting state over time, akin to folding over a list.

I believe you knew this, so I'm lost on the nuance in this conversation that excludes it from consideration.

[[Edit: I'm not sure how any of this is relevant; we can play a game of distinguishing between program state and rendering state, at which point retaining state is a question of APIs or who is implementing them, and they might not even agree]]

### Collect

In FRP, behaviors are continuous, time-varying values. The semantics of each expression is yet another continuous, time-varying value. In a 'pure' FRP system, the semantics are no more than this (no side-effects) and are independent of the origin of the FRP expression (referential transparency).

A 'collect' primitive, first of all, simply doesn't fit into FRP. Any attempt to discretize a continuous, time-varying value (i.e. mouse position) into a 'history' that may be folded over will be incomplete. Worse, the semantics of each expression would suddenly depend upon how it has been actively maintained within the runtime... i.e. the same FRP expression collecting some arbitrary computation over 'mouse position', could provide very different behavior if one of them began collecting two seconds ago and the other began collecting two minutes ago. Thus, collect violates referential transparency.

Any implementation of a 'collect' primitive requires treating FRP expressions as objects with internal state that changes over the course of computing its current behavior. It, simply put, cannot exist in a 'pure' FRP system.
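The objection can be made concrete with a toy stateful 'collect': two instances built from the same expression, differing only in when the runtime began maintaining them, disagree even while observing the same current input. A hypothetical Python sketch:

```python
# Toy illustration of the objection: 'collect' gives an expression internal
# state, so its current value depends on when the runtime began maintaining
# it -- not only on what it currently observes. All names are hypothetical.

class Collect:
    def __init__(self):
        self.acc = []                 # hidden state inside the 'expression'
    def observe(self, sample):
        self.acc.append(sample)
        return list(self.acc)

mouse_positions = [(0, 0), (1, 1), (2, 4), (3, 9)]

early = Collect()                     # 'began collecting two minutes ago'
late = Collect()                      # 'began collecting two seconds ago'

for p in mouse_positions[:2]:
    early.observe(p)

# Both now observe exactly the same current samples...
a = [early.observe(p) for p in mouse_positions[2:]]
b = [late.observe(p) for p in mouse_positions[2:]]

print(a[-1] == b[-1])   # ...yet their behaviors differ: False
```

Same expression, same current input, different results: the difference lives in runtime maintenance history, which is exactly what referential transparency forbids.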

OTOH, if 'mouse position' itself maintained some concept of history (i.e. a three second buffer), that'd be okay - it's not part of the FRP system. Similarly, if your runtime supports 'mouse position three seconds ago' as a data resource, that's still from the 'outside' so its fair game. But such a feature generally cannot be implemented across arbitrary (especially distributed or persistent) data resources.

Pointing out the "real FRP systems that you've heard of" isn't all that useful for judgements about pure FRP. Chances are, none of them are pure. We also had functional languages for decades before someone bothered developing Haskell - a 'real' functional system that happened also to be 'pure'.

There are cousin forms to FRP. It may also be you're thinking of one of those, such as stream processing.

### A 'collect' primitive,

A 'collect' primitive, first of all, simply doesn't fit into FRP. Any attempt to discretize a continuous, time-varying value (i.e. mouse position) into a 'history' that may be folded over will be incomplete.

First, FRP systems often also have values defined at only discrete points in time. Second, the continuous analogue of 'collect' is 'integral'. Typically, 'changes o collect (f) o hold' is the way it is really thought about.

I see no grounds for claiming it is not FRP and that you are thinking about something else (I am making no judgment call about whether that something else is good or bad, but, will say, in practice, collect or something like it is essential).

Worse, the semantics of each expression would suddenly depend upon how it has been actively maintained within the runtime... i.e. the same FRP expression collecting some arbitrary computation over 'mouse position', could provide very different behavior if one of them began collecting two seconds ago and the other began collecting two minutes ago. Thus, collect violates referential transparency.

No. It just means the history of a value can be computed over. Pass in a different value and of course you'll get a different output. Pass in the same value (over time or some other infinite dimension), get the same output (over time or some other infinite dimension). Referential transparency is preserved.

Some people make a big deal about time being 'handled' in FRP and try to 'remove' it, but my perspective is that it enables you to model computations over time by enabling you to monotonically compute over values continuous in one dimension (and/or discretized in a synchronous way). Baking in the notion of time is a little different and matters more when you want optimizations like sampling. This is a big divide between Conal Elliott's work and that of almost everyone else. However, both have useful traits (the former leaning towards interoperable stuff like Rx and the latter for domain-specific optimizations you want to do), and both are still RT. The nice thing, which enables RT, is that the dependency on time is encapsulated within values.

Semantic problems, including potentially for RT, occur once there are cycles / recursive forms, but typical FRP systems skirt around this problem by disallowing most if not all of them.

### FRP without binding?

It just means the history of a value can be computed over.

To the degree that a history is provided to you, of course you can compute over it. You can even wrap it up into a little function of the form lambda(Time).MouseHistory, at which point you are free to lift it into compositions, integrations, collections, etc. with an implicit time parameter.
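The point about wrapping a history as a function of time and then lifting can be sketched directly. `mouse_history` and `lift2` are hypothetical names standing in for a bound history and a lifting combinator:

```python
# Sketch: once a history is packaged as a function of time, ordinary
# lifting gives composition over it with an implicit time parameter.
# All names (mouse_history, lift2) are hypothetical.

def mouse_history(t):
    """A behavior: time -> value. Here, a fake recorded mouse x-position."""
    return 2 * t

def lift2(f, beh_a, beh_b):
    """Lift a binary function over two behaviors, threading time implicitly."""
    return lambda t: f(beh_a(t), beh_b(t))

# Compose behaviors without ever mentioning 't' at the use site:
offset_mouse = lift2(lambda a, b: a + b, mouse_history, lambda t: 10)
print(offset_mouse(5))   # mouse_history(5) + 10 = 20
```

Time appears only at the final sampling point; everything upstream is ordinary function composition.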

If that's all you were talking about, then we don't have a disagreement.

But, to me, it looked like you were saying expressions could 'collect' over the history of a data binding (as opposed to binding of a history). And to 'collect' over the history of a data binding itself would be a severe violation of referential transparency, right up there with emitting side-effects whenever the bound behavior 'changes'.

Your references to "real FRP systems" weren't clarifying anything, because many popular FRP implementations do allow you to violate transparency, emit side-effects on behavior changes, and so on.

Some people make a big deal about time being 'handled' in FRP and try to 'remove' it, but my perspective is that it enables you to model computations over time by enabling you to monotonically compute over values continuous in one dimension (and/or discretized in a synchronous way).

I exclude 'time' because it's a poorly defined concept (even in real life), integrates poorly with computation systems (multi-version concurrency control, disruption tolerance, distributed systems, systems without clocks, etc.), and is irrelevant to the essence of FRP.

FRP, at its essence, is simply continuous observation over the state of a system.

To the degree that one keeps a history of the system available one may, of course, also compute over older states of systems. But, inherently, said history as a data resource must be part of the current observed state of the system. In some systems, one could totally rewrite the history books from one moment to the next.

I have no objection to having functions as behaviors. If functions are first-class, it follows naturally that lambda(Time).Whatever is a perfectly acceptable 'continuous, time-varying value' for FRP binding. And, as noted above, one could easily attempt 'integrals' and 'collection' over such behaviors.

But, to be clear, I don't consider working with lambda(Time), without binding, to be meaningfully distinct from functional programming. Conal agrees; he states, "FRP is an application (instance) of functional programming." That is, he considers FP and FRP to be indistinct, barring a small library. This is not a view I have seen from others, nor is it one I share.

To me, FRP is distinguished by modular support for data-binding. Data binding itself is distinguished from 'fetch' behavior by cause and effect being inverted: a change to the source causes a change in the effect, no need for polling or fetching. That is, to have FRP, you need to support subscriptions to data resources.
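The 'inverted cause and effect' of data binding can be shown with a minimal subscription cell: a change at the source pushes to subscribers, and nothing ever polls. This `Cell` is a hypothetical sketch, not any particular FRP library:

```python
# Sketch of 'cause and effect inverted': a change to the source causes a
# change at the subscriber; no polling or fetching. Hypothetical names.

class Cell:
    def __init__(self, value):
        self._value = value
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._value)            # deliver the current value on bind
    def update(self, value):
        self._value = value
        for cb in self._subscribers:     # the change propagates outward
            cb(value)

seen = []
source = Cell("fresh")
source.subscribe(seen.append)            # data binding: subscribe, don't fetch
source.update("answered")
print(seen)                              # ['fresh', 'answered']
```

The consumer never asks for the value after binding; the producer's update is the cause, the consumer's change the effect.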

Baking in the notion of time is a little different and matters more when you want optimizations like sampling.

I agree. Indeed, for something like an FRP UI, I'd still suggest providing the render-result as a function of time. This is greatly useful for animations and keeps costs down quite a bit.

But I consider the final lambda(Time).SceneGraph (or similar) to simply be a 'current behavior', not FRP but rather produced as the result of potential FRP bindings.

### .

[deleted] opinion is more clearly expressed in later post.

### what to do when a user interacts

Part of each widget's description is what to do when a user interacts with the widget

That sounds imperative.

I looked at "Modeling Interactive 3D and Multimedia Animation with an Embedded Language" by Conal Elliott; that seems to be one of the early papers on the functional-reactive approach. But it seems to just talk about animation; I still don't see how input devices are supposed to interact with the (rest of the) program.

Perhaps a simple code example using only a keyboard and a typewriter would help explain the approach. Suppose we want a program that asks the human for their name. If I type "Jack Waugh" and hit return, the typewriter should print "GALLED to meet you, Jack Waugh" and throw a carriage return and line feed. However, if I don't respond within 10 minutes, it should say "Hey, I don't have all day!".

### Loose UI <-> Service Coupling in Reactive UI

You use the words "the (rest of the) program", which implies that you assume a single monolithic program - a tight coupling between the UI and the service definition.

Functional Reactive UI assumes a loose coupling between the service and UI subsystems. The above 'Hello, [Your Name]' program would be broken into client and server components.

I'll offer a little code, though the rendering side of things is a bit much at this point.

Assume a simple 'console service' is described as exposing a list of prompts and lines. A prompt can be answered by a string and provides a destination to which to send the answer (a 'Name').

 define ConsoleState = List of (prompt(id:Name q:String) | line(String)).

Unlike a Unix console, more than one prompt may be active at a time, though the HelloWorld example doesn't require it. One might wish also to support more complex structures than strings in order to support colors, drill-down detail, filtering, priority, structured queries, decision trees, etc. But this is a 'simple' console service, differing from the Unix variation to better support concurrency.

Now, our HelloWorld program is broken into two components. First, we have our server... a rather trivial service exposing only a read-only view of a mutable ConsoleState.

 define HelloWorldService =
   {configure State Update:
     State  = {cell fresh}           ;; State is fresh | answered(String)
     Update = {actor {fn: Message =>
                {proc {send State.update \_ => answered(Message)}}}}
     ;; Need to convert from fresh|answered to Console's prompt|line
     define ToConsoleElt =
       {fn: fresh       => prompt(id:Update q:"What is your name, human?")
          | answered(S) => line("GALLED to meet you, " ++ S) }
     export {bind {fn: X => [X]}     ;; wrap element into list.
             {bind ToConsoleElt State.access}}
   } ;; end HelloWorldService

Ultimately, all the above stuff is described in a fairly imperative manner. Despite that, neither the abstract simple console service nor the concrete HelloWorld console service involve themselves with 'widgets' or 'rendering' of any sort. These services are distinct from the UI, quite possibly not even hosted on the same machine or under the same administration as the UI.
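Stripped of the actor and cell plumbing, the service above is a small state machine plus a pure mapping into the console type. A hypothetical Python rendering (tuples stand in for the variant types):

```python
# The HelloWorldService, reduced to its state machine. The actor/cell
# plumbing is replaced by plain function calls; all names are hypothetical.

FRESH = ("fresh",)

def update(state, message):
    """The Update actor: any message transitions the cell to answered."""
    return ("answered", message)

def to_console_elt(state):
    """ToConsoleElt: map service state to the console's prompt|line type."""
    if state == FRESH:
        return ("prompt", "What is your name, human?")
    tag, name = state
    return ("line", "GALLED to meet you, " + name)

def console_state(state):
    """The exported binding: a one-element list, recomputed from the state."""
    return [to_console_elt(state)]

state = FRESH
print(console_state(state))            # the prompt is shown
state = update(state, "Jack Waugh")
print(console_state(state))            # prompt gone, the line appears
```

The UI only ever sees `console_state`; the prompt disappearing and the line appearing are both just consequences of recomputing that binding.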

In a functional reactive UI, the 'functional reactive' aspect is responsible for mapping the ConsoleState to something that can be rendered. Exactly how this is mapped depends upon the renderer, of course, but a possible example is mapping to a simple variation of HTML. For one given render transform, it might look something like:

<!-- after you click [Answer], a floating frame pops up: -->
~A:(big text area) [Submit]~

In this case, the 'Submit' button would need to map back to the original 'id:Name' (a callback actor) in the prompt. This isn't easy to do with HTML (since it involves all sorts of server and URI mapping magic, which is why Waterken exists), but a 'variation' on HTML could simply represent it as an in-language procedure that takes the string from the text area and sends it to the associated Name.

Upon submitting, the associated 'Update' actor receives a message "Jack Waugh". It promptly updates its internal cell from 'fresh' to 'answered("Jack Waugh")', and this in turn binds the exported ConsoleState to a list of a single element - [line("GALLED to meet you, Jack Waugh")]. This line would be appropriately rendered to the screen in realtime. The "Q: What is your name, human?" prompt would disappear, since it is no longer part of the exposed ConsoleState.

Your problem also includes timeouts. To hook in a timeout, you'd need to abstract the new HelloWorldService definition as taking a Timer service as input upon construction (unless timers are an ambient authority in your language). Alternatively, you might export a record with the necessary elements to inject a timeout. An example of the former design:

 define HelloWorldService = \ TimeoutService ->
   {configure State Update OnTimeout:
     State  = {cell fresh}           ;; State is fresh | answered(String) | timeout
     Update = {actor {fn: Message =>
                {proc {send State.update \_ => answered(Message)}}}}
     ;; Add the timeout operation. This maintains the state if
     ;; it is already answered, otherwise transitions it to timeout.
     OnTimeout = {actor {fn: _ =>
                   {proc {send State.update
                     {fn: answered(X) => answered(X)
                        | _           => timeout}}}}}
     {init {proc {send TimeoutService once(in:10minutes tgt:OnTimeout)}}}
     ;; Need to convert from fresh|answered to Console's prompt|line
     define ToConsoleElt =
       {fn: fresh       => prompt(id:Update q:"What is your name, human?")
          | answered(S) => line("GALLED to meet you, " ++ S)
          | timeout     => line("Hey, I don't have all day!") }
     export {bind {fn: X => [X]}     ;; wrap element into list.
             {bind ToConsoleElt State.access}}
   } ;; end HelloWorldService

Again, this timeout becomes part of the service definition, fully separated from the UI. If you meant for the timeout to still keep the prompt available, you could modify the above accordingly.
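In the same toy Python reduction used earlier, the timeout is just one more transition that preserves an existing answer and times out anything else. All names remain hypothetical:

```python
# The timeout extension as a transition on the toy state machine:
# 'answered' is preserved, anything else becomes 'timeout'. Hypothetical names.

def on_timeout(state):
    """OnTimeout: keep an answer if one arrived, else transition to timeout."""
    if state[0] == "answered":
        return state
    return ("timeout",)

def to_console_elt(state):
    """ToConsoleElt, extended with the timeout case."""
    if state == ("fresh",):
        return ("prompt", "What is your name, human?")
    if state == ("timeout",):
        return ("line", "Hey, I don't have all day!")
    return ("line", "GALLED to meet you, " + state[1])

print(to_console_elt(on_timeout(("fresh",))))            # timed out
print(to_console_elt(on_timeout(("answered", "Jack"))))  # answer preserved
```

Because the timeout is a transition on service state, every renderer bound to the service observes it at (roughly) the same moment, which is what makes the multi-user behavior described below fall out for free.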

For a toy service such as HelloWorld, it's hard to see what the functional reactive UI buys you. It makes a lot more sense when integrating real services, sensors, and actuators, especially in a concurrency scenario (where multiple users may interact or coordinate interactions with these services), composition scenarios (where you wish to develop a new UI from elements of other ones), etc.

The above FRP HelloWorldService would still work in a multi-user concurrency scenario. If the UI is rendered concurrently to different renderers, they'll also timeout simultaneously (modulo communications latency), or update simultaneously (so your console might update to "GALLED to meet you, dmbarbour" even if you don't submit anything, because I do submit something). Of course, there is no coordination between users of the simple ConsoleService at the moment (i.e. no ability to indicate to other users that you're going to answer a particular query)... and, for HelloWorldService, the last to Submit would 'win'.

By comparison, the typical 'fudgets' design would force concurrent users to each instantiate their own service, rather than sharing it. This is bad, because most useful (non-toy) services cannot be readily 'duplicated', and hooking up to them is non-trivial.

"Modeling Interactive 3D and Multimedia Animation with an Embedded Language" by Conal Elliott

Conal does a lot of interesting work, but he does tend to focus on output rather than input.

### Part of each widget's

Part of each widget's description is what to do when a user interacts with the widget

That sounds imperative.

No, it could be a decision engine supplied with a set of constraints. It could be an object-oriented constraint system like ConstraintLisp, or it could use object-oriented Asynchronous Constraint Solving, and not functional reactive programming.

Is constraint programming imperative to you?

### Is constraint programming imperative to you?

No, you're right.

### Functional-reactive

Your examples and discussion of this approach speak of applying it to human-computer interfaces. The gist of it seems to be that we:
* conceive the UI as reflecting states from time to time,
* specify formally a type of all possible states for the given UI,
* choose a renderer that will show the human a concrete representation that in fact only reflects what state the UI is in from time to time,
* and finally, write a backend state machine giving the rules for the transitions between states.

Do you see the approach as useful only for computer-human interaction, or would you see it as more generally applicable? For example, would you think it suitable for writing a disk driver? Would it be suitable for all aspects of a system that are easy to conceive as going through states? For example, a bank account? Does the functional-reactive approach and viewpoint take the place of all other I/O models? On Unix, if I say something like "cat a b | sort", there is a data path crossing the "|" that the cat command thinks is output to the external world and the sort command thinks is input from its external world. Would it make good sense to rewrite cat and sort using a functional-reactive approach to what they see as I/O?

### A few years ago, I wrote a

A few years ago, I wrote a functional reactive persistent store for AJAX apps, so yes, it's not just about UIs. UIs just happen to be one of the most common 'reactive' and highly concurrent systems where the benefits of structured programming are most clearly felt. The Yale group has experimented with robots, the synthesized and interactive music community uses similar notions, and there is even a stage lighting system using the stuff.

In common usage, an easy way to think about FRP is as a higher-order data flow language (e.g., Scheme/ML/JavaScript where the values you pass may include nodes, and nodes may contain other nodes). Shell scripting, for the most part, fits well into this model. Instead of using Bash to write programs over the pipe computations, you get a language that more easily preserves the 'piping' nature.
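To make the 'higher-order data flow' reading concrete, here is a minimal push-based sketch in Python (Cell and lift are invented names for this illustration, not from any particular FRP library):

```python
class Cell:
    """A minimal push-based dataflow node: a value plus its dependents."""
    def __init__(self, value=None):
        self.value = value
        self._dependents = []

    def set(self, value):
        # Change the value and push the change through the graph.
        self.value = value
        for recompute in self._dependents:
            recompute()

def lift(fn, *sources):
    """Build a derived cell whose value is fn applied to the source cells,
    kept current whenever any source changes."""
    out = Cell()
    def recompute():
        out.value = fn(*(s.value for s in sources))
    for s in sources:
        s._dependents.append(recompute)
    recompute()   # establish the initial value
    return out
```

Under this reading, `cat a b | sort` becomes a live expression such as `lift(lambda x, y: sorted(x + y), a, b)`, whose value stays current as a or b changes, rather than a one-shot pipe.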

### This was one of the blockers

This was one of the blockers on my thesis: I kept trying to extend the FRP/dataflow paradigm to things unrelated to real-world state and change. I came to the conclusion that signals are not a general construct: they are great at modeling real-world state and change (including UI), but you couldn't, or wouldn't want to, subvert them to implement a compiler, or even a presentation compiler that has to deal with a continuously changing code buffer.

FRP is great, but it is important to understand its limited scope.

### I think an underlying

I think an underlying stumbling block is algorithmic. If there are dependencies over rich data structures, current ways of automating incremental computation don't match the problem well. Once you care about that -- e.g., in a compiler -- it's hard. So if you have an optimized runtime (e.g., Flapjax has the DOM), we can do well, but if incremental computation over rich structures is the core of the problem, automated approaches are still very much an open problem.

### FRP is widely useful.

Do you see the approach as useful only for computer-human interaction, or would you see it as more generally applicable?

Functional reactive programming as a program subset is very widely useful, and serves as a fine basis for first-class 'plumbing' within a language, serving important roles when integrating between services. Its ability to support implicit caching, significant data parallelism, demand-driven resources, asynchronous resources, and scalable multi-cast (sharing fetches) allows it to be high-performance. FRP is extremely resilient in the face of temporary disruptions and reduces temporal coupling between services compared to message-passing and channel-based communications (two services are temporally coupled if they both must be available for communications at the same time). FRP also serves as a primitive basis for rules-based programming, complex event processing, and so on.

The language design I'm laboring on includes FRP as one of the primary language layers. Below it is 'pure total functional' code. Above it are simple event processing*, procedures, actors, and configuration. Services are developed by many different organizations, and the vast majority of integration and implementation occurs in the FRP layer or in the pure functional layer. First-class support also makes it much easier to support disruption tolerance, persistence, and migration or automated distribution.

According to Peter Van Roy, FRP has expressivity similar to concurrent logic languages. Despite favoring a syntax based on Oz, I decided against single-assignment logic variables due to security and disruption-tolerance concerns. Instead, I have FRP, plus first-class support for sets (which serve as databases) and relational operations (akin to Datalog). That is, you can perform 'live queries' that access multiple databases concurrently.

Does the functional-reactive approach and viewpoint take the place of all other I/O models?

No.

Pure FRP is essentially a functional 'glue' language. It can be very high performance, and can usefully express mission goals and live 'how-tos' (procedures) and such. But 'pure' FRP has no side-effects. It does nothing on its own. Something else must perform the behavior, call the necessary services into action, etc.

At the very least, you have some actor subscribed to an FRP expression that takes action after a change.

But pure FRP - due to its limited expressivity - has advantages over most IO models. Most models, including message passing and concurrent constraint programming with its askers and tellers, don't hold up nearly so well under pressures of disruption, node failure, malicious denial of service, and requirements for secure composition and temporal decoupling. Most models also don't support the degree of data parallelism, automated caching, scalable multi-cast, etc. available to a decent implementation of FRP.

FRP doesn't replace all I/O models, but under the Principle of Least Expressivity, if it's supported in your language or system then you should favor it wherever it is reasonably suitable. Doing so offers potential to take advantage of these many features, and helps keep the system predictable.

On Unix, if I say something like "cat a b | sort", there is a data path crossing the "|" that the cat command thinks is output to the external world and the sort command thinks is input from its external world. Would it make good sense to rewrite cat and sort using a functional-reactive approach to what they see as I/O?

Yes, doing so would make a lot of sense.

Though making the change fit into Unix might be troublesome (Unix provides no widely accepted mechanism for a 'reset' of an input or output stream to delimit discrete values, for example, and certainly doesn't support FRP on existing consoles...). I'll assume you're rewriting these in a system that supports FRP.

Among other things, this rewrite would decouple the observation of the 'sort' result from the exact time you decided to execute the operation. Even if you do need the 'current' value, you can still express it as an FRP expression then ask for its current value, at which point you've lost nothing.

The sort of flow programming expressed by 'cat a b | sort' is far more expressive than desirable, since it implies a potentially infinite data stream to 'sort' (which sort cannot handle), and it allows for side-effects in 'cat' or 'sort' (which means the calling language cannot perform functional optimizations, partial evaluation, caching, sharing of results, lazy evaluation, etc.).

### Indeterminism in ToonTalk

In the end I have to admit that indeterministic committed choice in ToonTalk is an "ambient authority"! I guess my claim that trucks bring this authority isn't very useful if it applies to all trucks. The tooth fairy might as well bring it, I suppose.

Moreover, split birds aren't the only place where indeterminism happens. It is possible to program a robot team to expect inputs on more than one nest in its "My Box". Since pattern matching goes from left to right, the programmer has to take care with the order of the robots and has to put pattern nests in the right places in the robots' thought bubbles. But the transitions taken by the team can depend on the outcome of a race between the two (or more) sources of objects to be deposited on the nests.

### FRP

Interesting replies, everyone; thanks. It appears this is an area one should understand before betting on tellers and askers as the key abstraction to running orthogonal persistent memory (which is the main motivation for everything I look into about programming languages).

dmbarbour (or anyone), what states should appear at the output of "cat a b"?

And any FRP enthusiasts, what if the output of the computer really is a typewriter? Maybe you don't think things should be done that way in the first place, but I'm trying to understand the applicability of FRP. Say there's a hard requirement to produce on a typewriter the output of "cat a b | grep -v '^foo$'". Is FRP, with say Haskell as the source language for writing the state machines (server side), a pretty good answer? If so, what are the states to be defined for driving the typewriter? I guess the states conform to "B | C char", where C char means char should be printed, and B means Between printing characters. So I drive the typewriter by pushing its state alternately to B and to C some-char that I want to print, and I depend on the renderer to observe each transition to C char and interpret it by printing char. Is that it?

### Cats and Carrots

In unix, "cat" expresses the (immediate) state of one or more files into an octet/text flow through a pipe. File states can easily be treated as continuous, time-varying values. Thus, the FRP equivalent to "cat a b" is to produce a continuous, time-varying value equal to the concatenation of a and b. Exactly how you express this depends on the syntax... it could be as easy as 'a ++ b'. My own language doesn't automatically lift the FRP bindings, so I'd need to write something closer to {bind append@list a b} or perhaps even {bind {const append@list} a b} (to represent a constant as a data resource). My primitive syntax isn't yet concrete. The reason to deny auto-lifting is to allow one to discuss data resources independently of observing their values (which is rather nice when identifying external data references in a database).

I drive the typewriter by pushing its state alternately to B and to C some-char that I want to print, and I depend on the renderer to observe each transition to C char and interpret it by printing char.

You describe use of FRP to drive a typewriter by a sort of 'carrot dangling' approach.
That is, the 'goal' of the typewriter - to print a character or to return to a rest state - is being dangled like a carrot in front of a pack animal. This can be done. FRP certainly can be used to describe current goal state as a function of system state (including time). But, for this case, it strikes me as odd and unlikely. To make it work reliably, the 'system state' must be accurate - i.e. a history of what has been typed thus far, or perhaps just page, line, and character numbers. Of course, one could blindly drive the typewriter as a function of time, but I doubt that would be reliable, especially if a or b also changes in the course of driving the typewriter.

FRP is all about reflecting systems in the 'now'. The extra discipline to carefully maintain a history of system state starts to move you outside of common FRP territory. FRP as a complex function above a carefully maintained 'history' seems to defeat the purpose of using FRP. The 'carrot dangling' approach certainly is usable, and not even so unusual. I.e. one might use it to drive an AI through an environment based on its circumstances and personal memory. It's really the discipline of creating an extra service just to explicitly maintain history of other systems that is problematic in the typewriter case.

Anyhow, it may be that you're attempting to apply FRP universally when it is only applicable widely. Certainly, even if the only output device of a computer was a typewriter, FRP could still be useful. It would, however, depend upon how many input devices the computer has, and whether the computer is expected to identify and print relevant updates in realtime. FRP is a better fit for output devices only when the output has a 'continuous' nature in time, such as a monitor, LED array, animation, or speaker. Perhaps consider a computer attached to a table with those endless, slow-moving reels of paper and a swinging needle (can't remember what these are called).
FRP would fit much better for output in those cases.

### Haskell and Functional-Reactive Techniques

From this discussion I get the impression that the Functional-Reactive technique requires its practitioner to supply code for subsystems fulfilling two roles in the technique: the front-end renderer and the back-end state machine. Do people sometimes use Haskell for both purposes? If not, would it be suitable? Even possible?

### No

Cause and effect are inverted by description. You describe the effect you want, and as the cause changes, the value of the effect changes. This is very different from other synchronous reactive programming paradigms, such as those that use event-condition-action.

And, again, it doesn't make much sense to say you "require" the programmer to supply code for anything. The hallmark is how you write your code and how things are wired together. Any requirements on how you have to write your code are therefore FRP design patterns / Haskell FRP idioms. (ex: How do you implement a hierarchical state machine using FRP?)

Leo Meyerovich mentioned what I would call an "impedance mismatch" possibility for FRP with his mention of Flapjax relying on the DOM for performance. A similar mismatch is possible if you wanted to build an FRP layer on top of WPF's retained mode rendering system: if you're moving a visual object around, you are simply moving the object around inside the renderer's composition system. If you are writing your code with the expectation that movement corresponds to "immediate mode" drawing on the screen, then your code will be wrong. (Incidentally, WPF supports a hybrid of Immediate Mode and Direct Mode called a VisualBrush, where you use a Direct Mode "Visual"-typed object as you would a brush in Photoshop.)

Bottom line: Requirements placed on the programmer by an FRP library are about integration of concerns, rather than fundamental theoretical limitations.
### Leo Meyerovich mentioned

Leo Meyerovich mentioned what I would call an "impedance mismatch" possibility for FRP with his mention of Flapjax relying on the DOM for performance.

Well, partially. If you want to do efficient DOM stuff in Flapjax, you pay attention to the mapping of changes to DOM manipulations. However, the semantics (except for one weird property of the DOM: a node can only be in one place) are equivalent to those of the DOM/CSS relationship. You *don't know* whether immediate or retained mode rendering commands are being sent at a low level; you only know the DOM / CSS constraints are being maintained.

For impedance mismatch, a question is whether we can write the renderer in FRP. I can't say much more, but yes. The mismatch is more important when dealing with performance, but due mostly to parallelism, I think yes is the answer for rendering. There are still some big open challenges, unfortunately. Is it worth it? I'm on the fence, and am currently trying to understand *how* I want rendering to work, and am not at the level of knowing the language I want for describing it.

### What purpose has a User Interface?

Based on the questions you ask, I think you and I would have two very different answers to the following question: What purpose has a User Interface?

I believe that the purpose of a User Interface is to provide humans an interface to services (including sensors, actuators, and human interaction), for the purposes of monitoring those services, manipulating them, and developing new services atop them. In practice, most of these services are shared and/or persist far beyond the lifetime of any given session with any particular user.

FRP serves very admirably for the 'monitoring' aspect. The association of widgets with behavior completes an FRP UI by supporting manipulation and development of these services.
Nothing more is needed - well, except for a browser to perform the actual rendering and process local human interface devices to manipulate whichever widgets have user focus.

What purpose has your proposed "back-end state machine" in the development of a User Interface? Perhaps if said state machine is performing a useful service - such as communications between users, or maintaining user-developed documents, or offering some form of entertainment to a user, or allowing secure trade of exclusive rights - its development would prove worthwhile. Certainly, there are good reasons to develop new services. But even these services you'll wish to decouple from the user interface - in terms of session, lifespan and persistence, location, shared multi-user access, and maintenance.

For the vast majority of services, a UI isn't a necessary component. For example, a service for secure trade of exclusive rights could be utilized entirely without participation of humans. Similarly, the vast majority of UI sessions need not be coupled with entirely new services, much less new instances thereof. Given just three simple services in the entire world, one could presumably create UIs atop at least seven useful combinations of these services, and could likely create different UIs to expose different 'views' of these services or offer different, integrated manipulations of them. And different users might each have different views of these same three services.

Toy examples seem to be manipulating your impression far more than they should. Nobody has any real use for a 'Hello, <Your Name>' service, but it is used often as an example application. This teaches bad lessons like: "it is natural for services to be tightly coupled to a UI session" or "the language for describing UIs needs also to be expressive enough to describe arbitrary services".

Spend some time answering to yourself: "What purpose has a User Interface?" You might gain a fresh perspective on the techniques you've been suggesting, and a fresh understanding of the techniques others promote.

### I didn't mean to "suggest"

I didn't mean to "suggest" those techniques; I was just trying to write a summary of what I thought you were saying.

### What purpose has a User Interface?

To provide for communication between hardware and meatware.

### And where does 'Hello World'

And where does 'Hello World' fit into this purpose?

### Carlsson and Hallgren's 1998

Carlsson and Hallgren's 1998 thesis on functional window gadgets is of course not new.

New to me. Thanks for sharing.

### Oracles

It is funny how people sometimes have similar thoughts at the same time. Just a few minutes ago I posted a similar topic, "Is (probabilistic) non-determinism pure?". There I favor the second point of view, "viewing the result as sets of possible outcomes". In the case of probabilistic non-determinism, this becomes "viewing the result as a measure on the space of possible outcomes". I find this point of view much more satisfying than the "passing around an oracle" approach.

### measure on the space of possible outcomes

If we are going to see some intermediate result as a measure on the space of possible outcomes, and if we are going to store it on, say, a disk (support of crash recovery being one of the reasons to do so), do we store a representation of the whole measure, or just one choice, or what? If on the other hand we have an oracle and we have used it to make a choice, we can store that intermediate result so the work that got us there won't have to be repeated, and we can understand clearly the meaning of what information we are representing on the store.

edit (2010-01-05): Suppose Obua and I have a joint bank account with $100 in it and both of us want to withdraw the whole amount. We race to our respective nearest ATMs to try the withdrawal. I have to run a little farther, but my ATM has a little better communication.
Let's say I have a 60% chance of beating Obua to the money. The consequences of who wins the race are that one of us doesn't have enough money to buy so much as a bowl of rice, and consequently faces a serious risk of starvation. Is the banking and money system, after our race, going to record that there is a 60% chance that I have the $100 and Obua has nothing, and a 40% chance that he has the $100 and I have nothing? What happens when I try to buy a bowl of rice? It's sort of like Schrödinger's cat; somewhere, a decision has to be made. It has to be possible to record the outcome of the decision. I don't see how the measure on the space of possible outcomes takes this into account. The oracle-pool idea, in contrast, clearly does.

Functional-reactive widgets and forms (data-binding and transformation), with objects or procedures to receive and process user inputs, are of greater interest to me than this oracular/functional approach.

At this point, I'm not sure what the one thing (Functional-reactive widgets) has to do with the other (this oracular/functional approach).

The purpose of the oracles concept was to provide a way to specify nondeterminism[1] in a pure functional language.

I felt it might be useful to generalize from "The purpose of the oracles concept was to provide a way to specify nondeterminism in a pure functional language" to "The oracle concept may provide a good way to specify nondeterminism in an RT language", where I was thinking that the notion of RT can apply not only to a purely functional language, such as Haskell is, but also to some of the constraint languages.

In fact, where I first mention constraint languages under the topic of Oracles, is where I mention ToonTalk.

I think the comparison and contrast between the pure functional language (there might as well be just one for the purposes of discussing committed-choice nondeterminism) and constraint languages is interesting. Let me point to a couple of constraint languages and mention part of what they have in common with the pure functional language.

A common characteristic of the pure functional language and the constraint languages, and indeed all declarative languages, is that every fragment of code can be read with a mathematical interpretation. The outputs produced by executing the code conform to the result of mathematically interpreting the code as a bunch of statements that all have to be true.

In the case of the pure functional language, the statements are equations.

In Janus, the statements are not all equations. Some of them are statements of bag inclusion.

In the constraint language I sketched above, starting at my "Searching for Boundary of Statefulness Concept" comment, the statements in the mathematical interpretation of code fragments are once again equations. So I think it makes sense to call this language RT.

I'm not sure whether RT is worth more than the kind of purity that Janus provides.

One of the characteristics of RT (and I feel that this is a good characteristic) is that RT makes the order of statements in a code context unimportant. In an imperative language, such as Fortran, the so-called "statements" are really commands, and their order matters very much. But in Haskell, the equations just have to be true in the context where they're executed, and it doesn't matter in what order you say true statements.

However, this order-independence of statements is also true in Janus, even though the statements aren't all equations and RT doesn't apply. For example, at some telling site in a program in Janus, we might assert (tell) that some message, M1, is a member of some bag, B1. We go on to assert that M2 belongs to B2, and that B is the baggy union of B1 with B2. Some Janus-like programming notation could abbreviate B1 and B2 out of sight by denoting both of them as ^B. We would have in our programming context, two instances of "^B". The first one would stand for B1, and the second, for B2. However, if we reversed the order and said the first instance of "^B" stood for B2, and the second for B1, this would not change the meaning of the program one iota. So, the use of the non-RT notation "^B" does not destroy the RT-like property that the order of the statements in the code does not matter. The reason I say the "^B" notation is not RT is that in one place it means B1, and in another place it means B2, and B1 is not in general equal to B2; they are two different bags. But because of Janus' "don't-care nondeterminism" (found also in the actor model), the meaning of the program does not depend on the order in which we put the bags together as a union.
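The order-independence claim is easy to check mechanically. A sketch in Python, with multisets (collections.Counter) standing in for Janus bags; B1, B2, M1, and M2 are the names from the example above:

```python
from collections import Counter

# Two tell sites deposit messages: M1 into bag B1, and M2 into bag B2.
B1 = Counter({"M1": 1})
B2 = Counter({"M2": 1})

# The program only ever observes B, the bag union of B1 and B2. Reading the
# first "^B" as B1 and the second as B2 - or the reverse - yields the same bag,
# so the meaning of the program does not depend on the order of the tells.
B_reading_1 = B1 + B2   # first ^B is B1, second is B2
B_reading_2 = B2 + B1   # the reversed reading
```

Bag union is commutative, which is exactly why the non-RT "^B" notation preserves the order-independence of statements.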

I would like to identify a language that would satisfy those who value tractability and being able to reason about programs, and that would also have nondeterminism. I do not like threads as any part of how to do this. I like the fine-grained concurrency of the actor model, Janus, ToonTalk, or a constraint language that can be understood as meaning equations. I think that in an equational language, oracles are necessary to express nondeterminism. Clearly in a Janus-like language with the "^" notation I mention just above, the oracles are not necessary. But is code in this kind of language just as tractable to reason about as in the RT languages (I think I could say, equivalently, "equational languages")? Is "don't-care" just as good as RT?

Back to the role of FRP in this discussion: If FRP is supposed to help in reconciling the desire for notations that can be reasoned about to the need for nondeterminism, I don't see how.

----

[1] According to Fergus Henderson, "committed-choice nondeterminism" denotes the kind of nondeterminism I've been asking about, although I have not studied the system of terms where this term comes up.

### FRP, Oracles, and Non-determinism

Both the oracular design and the FRP designs capture non-determinism in a referentially transparent system.

The two approaches serve different roles: FRP can reflect asynchronous changes in a large number of data resources while producing an effect. The oracular design can reflect ordering of asynchronous computations within a state machine that just happens to be implemented in a referentially transparent language.

The FRP design usefully retains its referentially transparent and functional nature across compositions. For example, you can find and combine identical sub-expressions in an FRP equation, or eliminate a sub-expression that won't contribute to the result.

While the oracular discipline is also implemented in an RT language, it achieves this in a manner that renders mostly-useless its RT nature: it simply passes a different argument (a different pool of oracles) to each sub-expression. These oracles distinguish on internal events, such as whether one concurrent compute finishes before another. (External inputs don't need oracles. They can arrive on another input stream. Oracles would be used to prioritize computes on one input above another, and thus allow the order of output to be partially independent of the order of inputs.) As a consequence of oracles being sensitive to compute order, developers can't even leverage laziness in the sense of lazily computing states that aren't needed at the moment (i.e. in practice, asking for an oracle for Left or Right places a completion demand on both... a 'dumb' oracle, such as one that always returns Left, would be unable to achieve important features such as 'progress' within the oracular state machine). The advantages for which RT is oft proposed are still achieved outside the oracular discipline - in those parts of the program (hopefully a majority of it) that do not take oracles as a parameter.

Put another way: RT is useless for driving the concurrent non-deterministic state-machine from one state to another. You're not doing yourself many favors by pretending you've usefully maintained RT in the system after dumping a bunch of Oracles in to poison it.

The main advantage oracles offer above ambient authority for non-determinism is a history of non-deterministic decisions, and thus replayability, and the ability to systematically apply different orderings to significant events if looking for bugs.
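As an illustration of that replay advantage, here is a sketch in Python (OraclePool, decide, and the thread-based racing of thunks are my own framing for this illustration, not from the thesis): a live run records which side won each race, and feeding that log back in reproduces the same decisions deterministically.

```python
import concurrent.futures as cf
import time

LEFT, RIGHT = "Left", "Right"

class OraclePool:
    """Hands out one-shot decisions and records them, so a run can be replayed."""
    def __init__(self, replay_log=None):
        self.log = []  # history of decisions made during this run
        self._replay = list(replay_log) if replay_log is not None else None

    def decide(self, make_a, make_b):
        """Return (Left, a) or (Right, b), for whichever thunk is observed to
        finish first, consuming one oracle. Under replay, the recorded side wins."""
        if self._replay is not None:
            side = self._replay.pop(0)
            self.log.append(side)
            return side, (make_a() if side == LEFT else make_b())
        with cf.ThreadPoolExecutor(max_workers=2) as pool:
            fa, fb = pool.submit(make_a), pool.submit(make_b)
            cf.wait([fa, fb], return_when=cf.FIRST_COMPLETED)
            side = LEFT if fa.done() else RIGHT
            self.log.append(side)
            return side, (fa.result() if side == LEFT else fb.result())
```

Note that under replay nothing races: the recorded side simply wins, which is what makes the non-deterministic state machine reproducible while hunting for bugs, and what lets a debugger systematically try different orderings by editing the log.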

You ask, "Is "don't-care" just as good as RT?"

The answer is: No. The two are good for different things.

When it comes to reasoning about a system, you need to think in terms of which useful system-wide properties a developer can correctly predict or control. RT offers a bunch of properties (ability to eliminate expressions whose results are not necessary (this makes laziness okay), parallelize expressions, combine common subexpressions, duplicate expressions (call by need is okay, too), memoize or cache expression results, guarantee secure confinement, and so on).

Task parallelism (the usual application of "don't care" concurrency) offers you far fewer guarantees, and far less ability to control the behavior of a large program. As a consequence, developers resort to local discipline to achieve control. For example, developers using the actors model, Erlang, and the like have a variety of concurrency control and integration patterns (serializers, CPS, pipelines). Where you really don't care, "don't care" works just fine. But often you do care, at which point you'll be fighting to organize the program.

Concurrent constraint languages like Janus or Oz offer a decent balance: you achieve deterministic concurrency properties for useful subsets of operations by disfavoring use of 'bags'. Of course, if the ability to produce bags is an ambient authority, it is difficult to control or correctly predict determinism across compositions with third-party libraries.

### Top Level of Design

Both the oracular design and the FRP designs capture non-determinism in a referentially transparent system.

1. Haskell as practiced today has an "I/O design". At a very high level of introduction to explaining what this I/O design is, we would say that the Haskell practice involves composing two components for any system to be constructed for the purpose of fulfilling an application need: a "runtime system" (fixed across applications, in terms of the behavior and semantics we would expect of it), and, for the second component, the application code expressed in the formal Haskell language proper. And we would say that since the formal Haskell language proper is RT, it does not allow I/O. But the ability of a system written this way to do I/O comes from the relationship between the two top-level components.

2. At the same very high level of introduction, I would say that the oracular practice involves composing two components for any system constructed to fulfill an application need: the "runtime system" and the code expressed in an RT language. And I would go on to say that since the RT notation does not provide nondeterminism, the nondeterminism comes from the relationship between the two top-level components. The "runtime system" provides, across that interface, a nondeterminism service that can then be controlled from within the RT language. This closely parallels the situation in Haskell practice, where the runtime system provides an I/O service that can be exploited from within the RT language there, that language being of course Haskell proper.

3. Despite all the dialog above, in which you introduced FRP, I asked what it is, you responded, and I read your responses almost as carefully as I know how, I still haven't the faintest idea how to introduce a description of any one of the FRP designs starting at the same high level of abstraction as exemplified by (1) and (2) above.
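The oracular interface in (2) can be made concrete with a minimal executable sketch. The names (`Oracle`, `decide`, the `Left`/`Right` tags) follow the head of this thread, but the class itself is hypothetical: a real oracle would be minted by the runtime system, not constructed in user code, and its verdict would record which computation the environment observed to finish first.

```python
# Hedged sketch of the oracle idea; nothing here is a real library API.

class Oracle:
    """A single-use token whose verdict is fixed by the environment at creation."""
    def __init__(self, verdict):
        assert verdict in ("Left", "Right")
        self._verdict = verdict
        self._spent = False

    def consume(self):
        # An oracle is consumed exactly once; reuse is an error by definition.
        if self._spent:
            raise RuntimeError("an oracle may be consumed only once")
        self._spent = True
        return self._verdict

def decide(o, a, b):
    """Return (Left, a) if the oracle says a was available first, else (Right, b)."""
    verdict = o.consume()
    return (verdict, a) if verdict == "Left" else (verdict, b)

# The runtime would mint the oracle; here the verdict is fixed by hand.
o = Oracle("Right")
tag, value = decide(o, "slow-query", "fast-query")
```

In a real system the verdict encodes the environment's observation of which data source arrived first, which is exactly why the prediction can never be seen to fail.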

[deleted a portion]

### Top Level IO

FRP is involved in an "I/O design". At a very high level of introduction, we would say that FRP practice involves composing three categories of components in order to fulfill a wide variety of requirements: (1) bindable data resources, usually associated with hardware devices and third-party services; (2) functional reactive code to compose data from multiple resources and compute higher-level information or advice; and (3) a set of observers that monitor or depend upon up-to-date information and advice to properly execute their duties. Since FRP code is RT, it does not allow explicit I/O. The ability of a system written this way to do I/O comes from the relationship between the three top-level components.

A few examples:

(A) a set of sensors on a seeking missile might report its direction, heading, position, and velocity relative to target. An FRP expression might monitor all these resources concurrently and continuously maintain advice for vectored thrust and angular adjustments to any foils to keep the attack on target. The mechanical system acts in accordance with the advice, thus influencing direction, heading, position, and velocity relative to target. (Implicit loop.)

(B) an FRP expression monitors a set of external queues plus a system administration policy, and offers continuous advice on whether to rest or from which queue to draw the next input. An object or actor might 'subscribe' to this FRP expression (i.e. to receive update notification shortly after any changes to the FRP behavior), and follow its advice. Following its advice would affect the queues being monitored. (Implicit loop).

(C) a browser operates as a user agent, supporting users in actively monitoring and interacting with external systems. An FRP expression gathers data from a variety of resources and functionally casts that data into something the browser can render. To help the user interact with the external systems, the FRP expression also produces advice and suggestions that the browser renders to the user as buttons or a menu and perhaps some links or navigation features. The user may then take action on this advice by pressing a button or selecting a menu option. Taking action may influence the data resources to which the FRP expression is attached, which will feed back to influence the display and options. (Implicit loop.)
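Example (B) can be caricatured in a few lines. This is a hedged sketch, not FRP proper: `advise` is an ordinary function standing in for the FRP expression, `longest_first` is a hypothetical policy, and the loop plays the subscribing actor, closing the implicit loop by draining the very queues it observes.

```python
# Toy version of example (B): derive "which queue to draw from next" from
# the queue contents and a policy; an actor follows the advice, which in
# turn changes the queues being monitored (the implicit loop).

def advise(queues, policy):
    nonempty = {name: q for name, q in queues.items() if q}
    if not nonempty:
        return "rest"                 # nothing to do: advise resting
    return policy(nonempty)           # otherwise, the policy picks a queue

# Hypothetical administration policy: draw from the longest queue first.
longest_first = lambda qs: max(qs, key=lambda name: len(qs[name]))

queues = {"bulk": [1, 2, 3], "urgent": [9]}
drained = []
while (choice := advise(queues, longest_first)) != "rest":
    drained.append(queues[choice].pop(0))   # the actor follows the advice
```

A real FRP system would of course recompute the advice reactively rather than by polling in a loop; the loop here just makes the feedback visible.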

### FRP Top Level

What protocols constrain the interactions among the three top-level components?

### FRP 'protocols'

Functional reactive code cannot execute a protocol, but does imply a few constraints.

Any binding to an external resource ought to be semantically free of side-effects, making it suitable for caching, elimination, subexpression sharing, parallelization, duplication, and lazy evaluation, as per any pure functional code. Some side-effects, such as powering up a webcam or opening network connections to a database based on demand, are okay (indeed, end-to-end demand-driven resource acquisition is a powerful feature for FRP). These would be implemented behind the resource abstraction, outside of FRP code. To avoid the poor performance profile of polling (which suffers high latencies and high network overhead), some sort of subscription protocol is also desirable.

The observer monitoring an FRP expression may access it in at least two ways:

First, an FRP expression can be used much the same as a regular functional expression, except that the behavior obtained will vary based on the time at which you access it, a bit like a query to a database. Even here, FRP can be a big performance booster... e.g. if the behavior of the FRP expression is itself a function or procedure, it could be subject to automated internal caching and JIT compilation rather than repeated re-evaluation. (FRP is a suitable basis for runtime upgrades.)

Second, you can request some sort of update notification shortly after the behavior of the FRP expression is suspected or known to have changed. This allows you to integrate a non-FRP system with the monitoring features of FRP, allowing it to act in accordance with observed changes in its environment. For the sake of disruption tolerance, performance, integration with continuous data sources, and integration with transactional data sources, observer code should not expect to see updates for every 'intermediate' value.

Internally, the FRP implementation will maintain this subscription from end-to-end, such that if an FRP expression needs seven resources to describe its current behavior then it will keep those seven resources up-to-date. That may be combined easily with laziness for some powerful optimizations, such that the new value of the expression isn't computed until absolutely necessary. Due to conditional expressions and the possibility for indirection (resources that identify other resources), the set of resources needed to maintain an FRP behavior may vary over time.
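The two access modes and the end-to-end subscription can be sketched in a few lines. `Cell` and `lift2` are hypothetical names, not any particular FRP library: `get()` is the pull/query mode, `subscribe()` the push/notification mode, and `lift2` maintains the subscription from a derived expression down to its resources.

```python
class Cell:
    """A toy time-varying value: pull via get(), push via subscribe()."""
    def __init__(self, value):
        self._value = value
        self._subs = []

    def get(self):                      # pull: query the current behavior
        return self._value

    def subscribe(self, fn):            # push: be notified shortly after a change
        self._subs.append(fn)

    def set(self, value):
        if value != self._value:        # observers need not see every write
            self._value = value
            for fn in self._subs:
                fn(value)

def lift2(f, a, b):
    """Derive a cell from two others, keeping the subscription end-to-end."""
    out = Cell(f(a.get(), b.get()))
    recompute = lambda _new: out.set(f(a.get(), b.get()))
    a.subscribe(recompute)
    b.subscribe(recompute)
    return out

a, b = Cell(1), Cell(2)
total = lift2(lambda x, y: x + y, a, b)
a.set(5)                                # total now reads 7 on the next pull
```

A derived cell here keeps exactly its current inputs up to date; a fuller sketch would also unsubscribe when a conditional expression stops needing a resource, as the paragraph above notes.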

The observers and users of the FRP expressions have no constraints imposed by FRP. They can do whatever they want (within their capabilities) with the information they obtain via FRP. They may keep an up-to-date render on a visible browser window. They may actuate foils and vectored thrust. Etc. Most of the time, any system 'model' prediction, advice, etc. is implemented via FRP expressions.

There may be race conditions if independent observers act to affect the conditions seen by other observers. There may be noisy feedback loops, too, if the delay between observer actions and updates to the observed data resources grows too short. In general, one must design to avoid these FRP integration pitfalls, and sticking with 'external' data resources helps quite a bit. FRP UI doesn't have feedback-loop issues because the 'user' acts as an external source of delay for initiating actions that might affect observed data resources.

A language supporting certain other paradigms (such as both pure FRP and OOP) will have support for instantiating 'new' FRP resources suitable for binding. Read-only facets of these resources could be bound into FRP expressions and exported, while other facets may be manipulated as objects. For my own language, 'cell' is such a primitive, and in an above example it is used to support a new HelloWorld console service. FRP, in this manner, integrates well with the object capability security model; i.e. the FRP resources themselves are capabilities for up-to-date data. But support for FRP does not imply support for the other paradigm, and a great many useful predictions, models, advice, views, applications, etc. may be built observing only external data resources.

### FRP Reference?

Where's a good document online where I can read more about the fundamentals of FRP?

### FRP Resources

For a brief overview of how FRP compares to other concurrent designs, read Paradigms for Dummies (section 6.1) by Peter Van Roy. I'll note that his description in this more recent document contradicts his description quoted in Wikipedia, and the Paradigms for Dummies version is consonant with my use of FRP.

Conal's page describes FRP. His views on it have developed over time, based largely on how he uses and implements it. His implementations are purer than most. Conal also discusses FRP at Stack Overflow, which may serve as a better overview.

A talk by an Adobe engineer makes a motivating case for reactive UI designs.

There are a variety of visual dataflow languages related to FRP. Most of these introduce explicit delays and state, so they aren't pure, but a subset of their features would be pure FRP.

Languages and frameworks such as Flapjax, Bling, LabVIEW, WPF, and Microsoft Excel all use various degrees of FRP (rarely pure). These are easy enough to look up.

A Google search will turn up plenty of hits, if you need more.

### WPF

Although I am not an expert in the semantics of LabVIEW and Excel, I know WPF well enough to comment on it.

Above, Sean McDirmid mentions how WPF is a semi-example: As he puts it, "databinding in WPF is a semi-example: a databinding is declarative but installing a databinding is imperative and the property can always be assigned imperatively."

What this means is that imperative installation allows one to circumvent FRP by observing a change and then installing a new binding on top of it. In short, it is a confinement problem. Not understanding the role a "complete" FRP solution plays in simplification is directly related to not understanding the importance of confinement. Disregarding confinement severely limits the power of FRP.

Also, much of WPF can be thought of as synchronous reactive programming with various reactive programming APIs layered on top of a basic synchronous reactive programming model - databinding. For example, in WPF 3.5 SP1, John Gossman introduced hooks into the WPF controls model to allow for a VisualStateManager attached property. This proof-of-concept demonstrated the viability of layering programming language theory ideas on top of more basic constructs; VisualStateManager, as the name implies, is a direct realization of very old ideas in computer science that state machines can dramatically simplify the specification of reactive components. Here, the components are 'controls'.

The issue with 'controls' is that they often require extensive implementation knowledge in order to compose them effectively. This is not the case with Bling, which has a much more distinct FRP flavor. The downside to Bling is that it does not give you as rich a design-time model as does a formally specified state machine.

Thus, Bling and VisualStateManager can very much work to complement each other: Bling is exceptionally good for working with the canvas control, and VisualStateManager is exceptionally good for working with the controls model. In fact, you can mix in Bling with VisualStateManager, and vice versa, especially with the support of the Expression Blend Behaviors; this confines the extent of Bling's effects to the property the Behavior is targeting.

From this, we can gather that some data flow engine responsible for propagating changes is a fairly fundamental abstraction, on top of which we can place many other paradigm variants; we can even build asynchronous constraint solving on top of a synchronous reactive system. -- I would say this is very much in the spirit of CTM.

Finally, VisualStateManager makes it, in my experience, far easier to 'style' and 'theme' an application for accessibility needs, although this has more to do with a 'metamodeling' trick than with any weakness of FRP. The reason VisualStateManager is a huge win is that it innately supports the Ultimate Hook pattern via the inheritance context and the change propagation network rules local to each DependencyProperty. Conal's work on FRP is 'not there yet' in addressing such 'patterns'-based concerns as the Deferred Event pattern and the Ultimate Hook pattern. I have seen many Haskell programmers complain about this, mostly at the surface level of understanding why the "do" notation is lacking in usability.

Just 2 cents.

### Conal's blog

I should also mention Conal has been doing a good job subjecting his ideas to the scientific method, thinking out loud on his blog how he can attack the weaknesses of his current FRP state-of-the-art.

Lately he has been on a kick about discussing the von Neumann model.

### The issue with 'controls' is

The issue with 'controls' is that they often require extensive implementation knowledge in order to compose them effectively. This is not the case with Bling, which has a much more distinct FRP flavor. The downside to Bling is that it does not give you as rich a design-time model as does a formally specified state machine.

Although Bling now has its own state machine construct, which is lower level than visual state manager, but is useful in implementing things like click/hold/drag adapters.

Thus, Bling and VisualStateManager can very much work to complement each other: Bling is exceptionally good for working with the canvas control, and VisualStateManager is exceptionally good for working with the controls model. In fact, you can mix in Bling with VisualStateManager, and vice versa, especially with the support of the Expression Blend Behaviors; this confines the extent of Bling's effects to the property the Behavior is targeting.

If you look at the way Blend is evolving, it's going in a separate direction from coded WPF. Blend is more interested in declarative features that can be applied and configured within a design tool, hence they came up with behaviors and the visual state manager. Blend's influence comes more from graphic and video compositing tools, and FRP has had little exposure there.

Blend behaviors are more similar to declarative inheritance in SuperGlue. I want to swing my work from Bling back to a declarative language so I can look at these features in a more appropriate language (declarative toolable language vs. C#, but with stronger semantics than XAML).

### If you look at the way Blend

If you look at the way Blend is evolving, it's going in a separate direction from coded WPF.

...this has always been the case. Blend's auto-generated code contains 'secret knowledge' about how WPF works internally; Blend will generate code that you would not intuitively write yourself if you were to mimic the same drawing goals using procedural code.

Blend's influence comes more from graphic and video compositing tools, and FRP has had little exposure there.

John Gossman's background is at least in part as an experienced CAD tools craftsman, and Blend is a visual tool for designers, so that makes sense. Across the software industry, we're seeing this sort of shakeup; Andy Gavin's Flektor startup company was bought out by MySpace. Flektor applied Gavin's considerable experience building game editors/level editors at Naughty Dog Software (where he created GOAL) to Flash development and the Web. Actually, Flektor had to scale back its video editing tools to make them more accessible to average users who just wanted to make fun mashups and edits.

FRP has little exposure primarily because of the way people build applications, and also because several other important problems must be solved before it can be mainstream.

### Is WPF really synchronous?

Is WPF really synchronous? If a reaction to one event imperatively invokes a setter elsewhere, will reactions to that set be delayed?

### I believe there are no

I believe there are no nested reactions. The reaction to the set would be pushed onto the event queue. Also, I don't think WPF avoids glitches; actually, I'm quite sure it doesn't, as I've observed some in the past. But as WPF focuses on UI, it doesn't really matter. (Don't get me started on initialization concerns, which can be tricky in WPF.)

### What is the defining

What is the defining characteristic of "async reactive" vs. "sync reactive"?

### Sync reactive means that the

Sync reactive means that the reaction occurs in the instant the change occurs, vs. sometime after the change (async). Sync reactive is often glitch-free, whereas async reactive is fairly glitchy, in that you are often working with a mix of new and old values in a reaction; eventually, your reaction sees all new values, unless there is a feedback loop...

Leo can probably provide a better answer than me. I'm in the camp that we should tolerate glitches (SuperGlue tolerates glitches), but systems like FrTime and Flapjax avoid glitches and so code doesn't have to tolerate them. I believe the classic formulation of FRP is meant to be glitch free (synchronous).

### I'm most familiar with the

I'm most familiar with the FrTime implementation, so I suppose I'm in the synchronous camp. FrTime sorts dependent expressions by their height in order to avoid glitches.

So is it fair to say that an async implementation would simply skip this sort and thus process updates in any order? Thus, there would be some redundant computation where a glitch occurred in order to compute with the properly updated value.

Edit: you said sync is "often" glitch-free. So glitch-free is not the defining characteristic of sync vs. async. Is there a paper about an async FRP implementation that might help? Something like the FrTime papers where sync FRP is explained very well.

### Sorry, by often I probably

Sorry, by often I probably meant always, but I'm uncertain. Synchronous means that reactions appear to take "no time," or at least everything that needs to happen, happens on the same clock.

I'm very much an advocate of the asynchronous approach, which basically involves pushing reaction into a work list and processing the work items in an unspecified order. Redundant computation is the consequence of this approach, but having dependencies sort themselves out on their own is an incredibly robust implementation strategy. I've used asynch in SuperGlue and in the Scala IDE, where it would otherwise have been impossible to figure out the "right" order to process a bunch of dirty Scala syntax trees.

Also, asynchronous is inherently more scalable than synchronous because you could actually divide up your event handling on different nodes without requiring any kind of central synchronization. Perhaps you could check out Flask and Regiment, which are FRP-like languages for programming sensor networks, though I'm not sure whether this matters for Regiment.

### Glitch freedom is about not

Glitch freedom is about not repeatedly/prematurely evaluating the same expression (e.g., a+b only runs once), and I view this as crucial for systems with effects and event streams, and only a performance consideration for ones based on signals. As a toy example, given some expression 'foo/(-time + time + 1)', a glitch-free system will never get NaN.
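That toy example can be made concrete (this is a hypothetical rendering, not FrTime code): let `neg = -time` and `denom = neg + time + 1`. If `denom` is recomputed before `neg` has seen the new time, the denominator momentarily hits zero; updating by height (`neg` before `denom`) rules that out.

```python
def propagate(order, time_new, state):
    """Recompute nodes in the given order; return every value denom takes."""
    state = dict(state)
    state["time"] = time_new
    denom_values = []
    for node in order:
        if node == "neg":
            state["neg"] = -state["time"]
        elif node == "denom":
            state["denom"] = state["neg"] + state["time"] + 1
            denom_values.append(state["denom"])
    return denom_values

# Consistent starting point: time = 5, neg = -5, denom = 1.
initial = {"time": 5, "neg": -5, "denom": 1}

glitchy = propagate(["denom", "neg", "denom"], 4, initial)  # stale neg first
clean = propagate(["neg", "denom"], 4, initial)             # height order
# glitchy == [0, 1]: denom momentarily hits 0, so foo/denom would blow up
# clean == [1]: denom only ever sees its consistent value
```

Note the glitchy schedule also evaluates `denom` twice, which is the redundant-computation cost of unordered propagation discussed earlier in the thread.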

Synchrony goes hand-in-hand, though at a formal level it is orthogonal. If we're in dave barbour's world (as per the rest of this thread), and view FRP as interacting with an external stateful environment, it means that outputs emitted due to an incoming event do not impact current computations. Either we can trust the system not to prematurely send new stuff in (so a good WPF user can write synchronous code), or the system can enforce it (queue outgoing messages and do not send them until everything in the current timestep is done). Look up the 'synchrony hypothesis': I find it to be very important.

I'm not sure I buy Sean's performance argument; that's how a GPU works and I think it can give you other wins. The performance hit is probably the incrementalism (essentially message passing) and maybe glitch-freedom (though I haven't looked it up -- if you want theoretically optimal incrementalism, you'll likely be glitch-free anyways -- see Tom Rep's thesis ... I'm planning to play around with this in my parallel browser's layout engine over the summer as there have never been good empirical results here outside of Kim Burchett's lowering/dipping paper).

Hopefully that was clear...

### The GPU is probably a bad

The GPU is probably a bad example. On the GPU, all state propagation is explicit, and you get performance by having all new state generated according to old state, no intermediate updates are visible! I'm not sure whether you would call that synchronous or asynchronous, but in any case the order of dependency processing is irrelevant (within a render call) and fixed (between render calls).

### Synchrony hypothesis and state machine patterns

In Miro Samek's pattern language for hierarchical state machines, all events can be treated as untrusted. Although Miro does not frame the problem as a trustworthy computing problem, he frames it in terms of queuing systems theory. He argues that the implementation of a state machine can allow the implementor to not realize the design is illogical, in that it allows the current event to be corrupted by letting a new event enter the same state-space as the current event. He goes so far as to say that in poor state machines this is the most common implementation flaw he sees. For example, the Deferred Event pattern implemented with an HSM uses the hierarchical state machine to partition the state-space between a Busy state with nested substates and a mutually exclusive Wait state. In standard Object-Oriented Model-Driven Architecture, you would instead partition the application into a supersystem and a subsystem, where the supersystem only dispatches a new event to the subsystem when the subsystem signals "Done!". [Edit: The difference is that in standard OO MDA, new events are not even seen by nested substates (the subsystem), and this allows more modular programming at the expense of forcing a subsystem to more tightly define its event lifecycle. In HSM theory, you instead control the sequencing of events by partitioning the state-space; therefore, you have to care about history and cannot think about state interactions in terms of composable Input/Output Automata sequences, because every transition must consider its parent context.]

Synchrony Hypothesis: The Synchronous Approach to Reactive and Real-Time Systems

Tom Rep's thesis ??? Are you referring to Tom Reps at UWisconsin? What does his thesis Generating Language-Based Environments have to do with glitch freedom? While Reps pioneered attribute grammars to bootstrap IDEs, and his pioneering work with Teitelbaum is now being revisited by IBM's Safari project for Eclipse, I don't understand what it has to do with web browsers. I am unaware of achievements Teitelbaum and Reps made in glitch-free graphics!

Kim Burchett's lowering/dipping paper

### The theoretical underpinning

The theoretical underpinning of his thesis was the evaluation of attribute grammars, with the most developed material on incrementality. Restricting attribute grammars to DAGs, a common assumption in that space, is analogous to the restriction of FRP to no feedback loops. It's hard to infer (but of course easy to verify) this property in attribute grammars, but it is true by construction in FRP.

Topological evaluation to prevent reevaluation has been known at least since before Reps, as he mentions it in some of his papers. He doesn't view it as a semantic glitch but a performance one (he works in effect-free systems; that wasn't true of all of his peers, who made complicated do/undo mini-transactions that, obviously, have failed over the passage of time).

It's not obvious to me why Reps' theoretical optimality work is relevant given that topological evaluation gives you the minimal amount of incrementality, and the rest of the cost is scheduling, which the structure of attribute grammars enables you to make O(1) (over winter, I was playing with more general forms, but didn't come to a nice formulation). But that's a topic for another day.

### Very interesting

I am amazed by whoever came up with that connection! I only know of Reps' work from hearing of it on Jonathan Edwards' blog about OOPSLA 2006; I bought and read his thesis because it is relevant to my literate programming manifesto.

There are more ideas you might be able to usurp a model structure from.

Have you ever looked into commonsense reasoning? My conversation with Bertrand in his story today made me think of it as a possible source of ideas! In order for commonsense reasoning to scale, it must deal with problems that may have characteristics similar to FRP's: coalescing events to prevent spurious effects. I would recommend Erik Mueller's book Commonsense Reasoning as the definitive source to learn the subject. I've also coincidentally mentioned this book on Edwards' blog in the past, where I commented on declarative vs. imperative boundaries in modeling, and the need for serializability. See section 4.2: Preventing Repeated Triggering.

### locking?

in other words, one needs a good locking strategy? stm?

### RE: Locking Strategy and STM

Something like STM is quite appropriate. However, FRP makes this easy: there are no effects 'internal' to the FRP expression, so all such transactions would be read-only. This corresponds to simply keeping a local cache of the version for each data resource accessed during an update cycle. (A topological evaluation strategy is oft used to establish this cache.)

Locking strategies might work on a local scale, but don't allow secure scalable composition. "Time-steps" tend to imply a particular implementation strategy... not a bad one for local computation, but certainly not one that effectively distributes or handles disruption or resists denial-of-service. (I tend to frame everything in terms of distributed computing, but many security properties are also useful as 'safety' properties even for local work.)

When integrating with real-time systems, I would generally suggest that any real-time concerns, i.e. any hook into the clock, be shifted outside of the FRP expression (i.e. lift the FRP expression such that its result is lambda(Time).Behavior or the like). This tends to result in more stable behaviors and smoother animations, which can reduce bandwidth and re-compile costs and such. A renderer would be able to take lambda(Time).Scene and interpolate precisely and continuously, properly handle changes to Scene caused by changes in external resources, and project anticipated behaviors in case of a temporarily disrupted connection to external resources.
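A tiny sketch of that lifting, with hypothetical names (`make_trajectory` stands in for an FRP expression whose value is a function of time): the renderer owns the clock and samples the returned function at whatever frame times it likes, with no clock inside the 'FRP' part.

```python
def make_trajectory(start, velocity):
    """Stand-in for an FRP expression; re-evaluated only when inputs change."""
    return lambda t: start + velocity * t   # lambda(Time) -> Position

traj = make_trajectory(start=10.0, velocity=2.0)

# The renderer samples, interpolates, or extrapolates (e.g. anticipating
# behavior during a disrupted connection) at any frame time it chooses.
positions = [traj(t) for t in (0.0, 0.5, 1.0)]
```

When an external resource changes, only `make_trajectory`'s inputs are refreshed and a new time-indexed function is produced; between such changes the renderer needs no further updates, which is the stability and bandwidth win described above.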

FRP to manipulate locally computed physics (constraints, step rules) also seems like an interesting hybrid for UI... I'm still mulling over that one.

### I'm very much an advocate of

I'm very much an advocate of the asynchronous approach, which basically involves pushing reaction into a work list and processing the work items in an unspecified order.

I think I understand then. I've also built a prototype using this "eventual consistency" approach, where reactions are driven by an event-loop and any changes in turn N are propagated in turn N+1. The system eventually settles unless there is a cyclic dependency.

I'm not convinced this approach is better, although it did easily support the notion of simultaneous updates, e.g. submitting an HTML form where multiple fields were changed since the last request.

I recently went back to hacking on the synchronous version to support the same notion there. In either approach, it simply requires a context-local event-loop to split value update and update propagation into separate turns.
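The turn discipline described above can be sketched as follows. `run_turns` and its rule format are hypothetical, but the scheme is the one described: all changes produced during turn N are applied together in turn N+1 (so simultaneous updates come for free), and the loop settles once no rule produces a change.

```python
# Eventual-consistency sketch: reactions are rules (target, fn) recomputed
# each turn; a turn's changes all land together at the start of the next turn.

def run_turns(rules, state, max_turns=100):
    for turn in range(max_turns):
        pending = {target: fn(state)
                   for target, fn in rules
                   if fn(state) != state.get(target)}
        if not pending:
            return state, turn          # settled: no rule changed anything
        state = {**state, **pending}    # apply turn N's changes in turn N+1
    raise RuntimeError("did not settle (cyclic dependency?)")

# b depends on a, c depends on b; changes ripple one turn at a time.
rules = [("b", lambda s: s["a"] + 1),
         ("c", lambda s: s["b"] * 2)]
state, turns = run_turns(rules, {"a": 1, "b": 0, "c": 0})
```

A cyclic dependency that never converges keeps producing changes forever, hence the `max_turns` guard standing in for cycle detection.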

### This is just an example of

This is just an example of the age old debate between well-defined reductive centralized determinism vs. fuzzy emergent de-centralized non-determinism. Either style has benefits and disadvantages, although depending on your brain-type, you might prefer one style over the other.

### My concern is two-fold:

My concern is two-fold: users and developers. Users start to rely upon weird scheduler order semantics and then developers get pushback if they want to change it. Do it right in the first place.

Max/MSP is one of my favorite examples of this: scheduling is based on geometry, something like: if the output wires are hands on a clock, events go clockwise, depth-first. This makes parallelization hard (they still haven't managed it, for an otherwise obviously data-parallel environment that sorely needs it) and confuses most users.

Once effects enter the picture... yikes. It's either assume the worst happens (the asynchronous approach) or run-once (topological/synchronous/transactional, etc.). If you take a retrospective look at attribute grammar research and what stuck around (e.g., yacc), especially in the case of effects, people seem to prefer run-once semantics. In the effect-free case, it's just a question of performance (and incrementalism generally loses!).

### My concern is two-fold:

My concern is two-fold: users and developers. Users start to rely upon weird scheduler order semantics and then developers get pushback if they want to change it. Do it right in the first place.

First, users shouldn't depend on weird scheduler semantics. Developers have to over-engineer to take into account random behavior from their environment, and must test for that behavior. When you build something bottom-up style, your bottom parts are designed to be very defensive about the environment they might be placed in.

I haven't used Max/MSP, but it sounds like it still has to contend with a common clock and signal connections. This still requires global planning, so it makes sense that you couldn't parallelize it very easily.

Run-once is of course impossible if your data changes; it must run again each time the data changes, and the nature of that change may be unpredictable. Take a modern OO language like Scala: the dependencies between parse trees are complex, determined not only by position in the parse tree but by type dependencies that can span multiple definitions and files. There is obviously no simple graph solution for this; you've got to be really fuzzy about your dependency tracking, and run-once isn't really possible. As for side effects, say entering a symbol into a symbol table or completing a type (if type completion is lazy), you definitely have to deal with that by somehow reverting the state when a run occurs again, but it's definitely possible (though I got stuck on some hack for Scala case classes/objects in the end...).

Anyways, if you are limiting your toolbox to deterministic run-once solutions, then I can tell you are limiting your problem space to where those solutions make sense.

### This is such a tricky question

The answer is yes and no.

Yes, it has the basic constructs for synchronicity. No, all the APIs layered on top allow for asynchronous updating.

I believe an earlier version of WPF was designed around the idea of abandoning the COM Single-Threaded Apartment limitation and allowing a multi-threaded UI layer. In a single-threaded model, enforcing synchronicity all the way down could result in hanging pauses... especially given how "open" the look-and-feel of controls is, and how they can be recursively composed. Despite this, it is still possible to partition your application so that the bulk of the logic is synchronous reactive. For performance reasons on current architectures, and given the lack of good CLR primitives, you won't likely be doing this.

...One of these days we'll have a multi-threaded UI toolkit from Microsoft. Academia only invented one, oh, 20 years ago, called Trestle, for Modula-3:

Trestle Reference Manual
Mark S. Manasse and Greg Nelson
DEC SRC Research Report 68, December 1991
http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-68.pdf

and

Trestle Tutorial
Mark S. Manasse and Greg Nelson
DEC SRC Research Report 69, May 1, 1992

Apart from that, it is again misleading to characterize WPF's basic paradigm as FRP - for one, development of WPF began about 2 years after the first paper on the subject, not enough time for the WPF team to be proselytized to and accept the religion. Second point would be that WPF DependencyProperty objects allow callbacks to optionally coerce values into particular states (defining PropertyMetadata is allowed at installment time only, because it is so insane); this is a direct violation of the FRP model. It is also obviously not modular coercion.

### On synchronous vs

Is WPF really synchronous? If a reaction to one event imperatively invokes a setter elsewhere, will reactions to that set be delayed?

On synchronous vs asynchronous: the above implies that invoking a setter in a reaction should delay that set in order to qualify as synchronous. Is the "setter" in this statement the input to another signal? If so, what is the rationale behind this requirement?

I can understand that if the setter is a signal involved in the current reaction, ie. a cyclic dependency, that the subsequent reaction should be delayed until the current reaction completes. I'm not sure I understand why the reaction of *another* signal should be delayed, as surely invoking the setter implies that the other signal is subordinate to this one. From a transactional viewpoint, that reaction is a nested transaction within the current transaction.

Or is the "setter" some side-effecting operation? So the requirement is that the side-effect is induced only once at the end of the reaction.

### I/O Automata

Maybe of interest to readers (from a design and analysis viewpoint) is Nancy Lynch's (et al.) work on Hybrid I/O Automata and related work on Timed, Dynamic, and Probabilistic I/O Automata available on her publications page, which also includes some example uses. An interesting application is (unfortunately behind a paywall) Modeling and simulation of cardiac tissue using hybrid I/O automata.

### []

[withdrawn due to lack of progress]