Opa

Opa is a new member of the family of languages that aim to make web programming transparent by automatically generating client-side JavaScript and handling communication and session control. Opa is written in OCaml. A hierarchical database and a web server are integrated with the language. The distribution model is based on the notion of a session, a construct roughly comparable to process definitions in the join-calculus or to concurrent objects in a number of formalisms.

A good place to start is here. And here you can find several example programs with accompanying source code.

On it

I'm currently working on a paper explaining the session model. To summarize, sessions are indeed roughly comparable to join definitions, with the considerable difference that a session can be either on a server, or on a client, or shared between several servers.

Opa is interesting. But

Opa is interesting. But the licensing (Affero GPL) is a severe turn-off, especially since it covers the standard user-space libraries rather than just the compiler tools.

I might peek under the hood and see how the cloud distribution code is implemented.

MLstate is looking for a

MLstate is looking for a buyer: a big upfront investment, and no business found. They cannot put their only asset under a more liberal license, I think.

Their choice, of course. But

Their choice, of course. But I'll never again use a language whose standard libraries are GPL'd. Got burned by that enough with Maude.

Maude standard library code

Maude standard library code is GPL. I was not able to release a model I built at university, because it included both proprietary concepts and GPL'd standard library code. I ended up switching to different modeling software.

I am quite sure there is a

I am quite sure there is a difference between developing a concept 'in' university, vs. 'for' university. My research was not publicly funded.

But, either way, it is quite irrelevant to the licensing issues.

No, we're not

I thought I would leave the comment above as it is: false, totally misinformed, and definitely not worth an answer.

But as we receive emails from people who mention they read that this was our "strategy", please let me tell you that getting acquired is not our business model.

However, we're happy to sell professional support for Opa, training and expertise through code reviews, project management and coaching.

Under the hood

Feel free to do so. Any feedback is welcome.

In my opinion, the most interesting cloud-related open-sourced parts are

  • HLnet, an implementation of the pi-calculus, which serves as our low-level layer for everything network-related;
  • the implementation of distributed sessions (on top of HLnet).

Thanks for your

Thanks for your feedback.
We're aware that many people don't like the Affero GPL. If that's your case, what precisely turns you off? What would be acceptable?
We'll try to discuss the subject with our community soon.

License feedback

Make an exception for the '.opa' files under the stdlib directory. Even GNU makes an exception for its runtime libraries, and every file comes with an extra notice:

// Under Section 7 of GPL version 3, you are granted additional
// permissions described in the GCC Runtime Library Exception, version
// 3.1, as published by the Free Software Foundation.

The problem with putting the GPL on stdlib files without an exception is that every library or project that uses the stdlib MUST also be GPL or AGPL. A more liberal or commercial license for a developer's personal work becomes infeasible.

I fully understand keeping the Opa compiler itself under the GPL. But I would avoid pushing a freedom agenda so hard that it virally infects the user's code!

I welcome freedom

But I think much more can be done for freedom by systematically motivating it - i.e. by making it a path of least resistance and greatest reward. We don't get freedom by attaching GPL like a ball-and-chain to our code.

Open-source is insufficient to encourage collaboration in any case. It is still too difficult technically to discover relevant code, refactor, and reuse code between projects. This is, I think, a language-design problem.

Thanks for your feedback. We

Thanks for your feedback.

We are seriously looking at a license change and hope that we will be able to motivate the Opa community to contribute. Stay tuned for the announcement!

Agreed.

Agreed.

why not?

I think it's sort of a good idea to have a GPL'd language. The GPL is designed to spread open source. At the time of the LGPL, projects like GCC wanted to become standards. Now most languages are open source. I'm not sure I see a problem with going to the next level if one believes in the whole FSF philosophy. This seems like a consistent step in the right direction.

GCC output isn't viral

GCC would have been irrelevant if its binary output were virally GPL'd too.

begging the question

I think you are begging the question. My whole point was that the next step isn't a total break. You are assuming in your rebuttal that it is.

GPL and Open Source

The GPL is designed to spread open source.

No it's not.

from the preamble

From the preamble to the GPL, "To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights."

please somebody who groks it

compare/contrast if only briefly with things like:

  • Hop
  • the Microsoft thing that purported to let you change the slider of what happened on the client vs. the server (can't find the link easily right now)
  • Ur/Web
  • Hilda
  • et al.

GWT? Links?

GWT? Links?

Volta?

the Microsoft thing that purported to let you change the slider of what happened on the client vs. the server (can't find the link easily right now)

Volta?

I think that one of the main differences is that Volta was killed years ago.

Brief survey

I will try a brief survey. However, if you are more knowledgeable than me about one of the projects I mention, please feel free to correct me.

Erlang
Opa and Erlang offer similar models of concurrency and distribution. Erlang is interpreted, dynamically typed, and handles failures gracefully, but knows nothing about the specifics of the web, in particular security; Opa is compiled and statically typed, can only handle a few failures gracefully, but works quite nicely within the constraints of the web and provides strong security guarantees. Another difference, of course, is that Erlang is 20 years old, while Opa is quite young.
GWT
GWT itself essentially covers the client-side. Vaadin, which is an extension of GWT to the server, seems closer to Opa. Also, GWT is based on Java, whereas Opa is functional+agent programming.
Hilda
I am not too familiar with Hilda. However, from what I understand, the language is very much data-driven, while the core of Opa is much more concurrency-oriented.
Hop
Hop is a dialect of Lisp, interpreted and highly dynamic. IIRC, the project does not care much about security, concurrency, or distribution. Opa is compiled with static type inference, guarantees numerous security properties, and offers a nice paradigm for concurrency and distribution.
Links
Links and Opa are based on quite similar ideas. Similar concurrency model, similar type system, both implemented in OCaml. The main differences appear in the standard library (considerably larger in Opa), in the database (relational in Links, graph-based in Opa), and in the effort towards distribution (non-existent in Links). Also, I am under the impression that the Links project is currently on hold.
Microsoft Volta
IIRC, Volta has been dead since 2008. From my recollection of the Volta paper, they had huge security concerns and no real idea of how to solve them. Opa solves the problems they mention. I'm afraid I do not remember much more about Volta.
ML5
The ML5 compiler has a very nice client/server distribution scheme that is quite similar to that of Opa, possibly more powerful at the expense of being less automated and harder to learn. Beyond this, it is hard to judge, as IIRC, ML5 is currently at the state of a (very promising) prototype, whereas Opa has been used to develop commercial applications.
Ur/Web
Ur/Web is focused largely on meta-programming and on functional reactive programming for concurrency and user-interface manipulation. By contrast, Opa does not really care about meta-programming and is more focused on join-calculus-style concurrency and distribution. Also, the storage mechanism of Ur/Web is based on PostgreSQL, whereas Opa's storage mechanism is immediate storage of data structures, with maximal sharing.
WebSharper
WebSharper and Opa are based on rather similar ideas. However, WebSharper uses F#'s type system, whereas Opa uses a structural type system, and client-server distribution does not appear to be automated in WebSharper, in contrast to Opa. I have no idea what the concurrency model of WebSharper is, or even whether WebSharper has concurrency or server-side distribution at all.

Edit (reorganized in alphabetical order, added Erlang)

Another point from which to

Another point from which to triangulate Opa seems to me to be Erlang. It, too, is inherently based on a distributed/concurrent model, and it comes with a rich framework of components. In contrast with Opa, it is not primarily web-oriented.

Done

(added to the list)

FRP vs Pi

Can you say some more about how the FRP model compares to the Pi-/Join-calculus model you've taken? I'm fairly familiar with the FRP work (though I haven't followed the latest developments) and it seems like a good fit for the web.

I'm reading about Opa now. One thing I super don't like is the closed system model. You're trying to replace the entire web app stack. People build companies around just one layer of the stack, and I don't believe you can have done a world-class job on every part. I'd rather use existing tools for the DB and web service layer, and keep Opa for rich UIs, which is where I think it will shine.

Good question

I'm not sure I can deliver a decisive answer to that one, so please, anybody, do not hesitate to rescue me.

Pi/join calculus are very much based on notions of communication: (lightweight) processes communicate with each other through (typed) channels, can spawn additional processes, etc. In the world of programming languages, this is comparable to Erlang, with more room for static typing. Although we have not pushed this to its limits in the current version of Opa, it maps nicely onto highly dynamic UIs, client-server communication, server-to-server communication, database storage, etc.
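
As a concrete (if toy) illustration of that channel-passing style, here is a minimal sketch in Haskell rather than Opa; all names (Request, echoServer) are invented for the example. A lightweight process owns a typed channel, and each request carries its own private reply channel:

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

-- A request carries its payload and a private (typed) reply channel.
data Request = Request String (Chan String)

-- A 'session'-like process: it owns a channel and serves requests forever.
echoServer :: Chan Request -> IO ()
echoServer inbox = do
  Request msg reply <- readChan inbox  -- block until a message arrives
  writeChan reply ("echo: " ++ msg)
  echoServer inbox                     -- loop for the next request

main :: IO ()
main = do
  inbox <- newChan
  _ <- forkIO (echoServer inbox)       -- spawn a lightweight process
  reply <- newChan
  writeChan inbox (Request "hello" reply)
  putStrLn =<< readChan reply          -- prints "echo: hello"

Session constructs, join definitions, and Erlang processes all elaborate on this basic shape: a named entry point plus a process that reacts to whatever arrives on it.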

FRP, by contrast, is largely about describing results. Maybe this is just because I am quite rusty on FRP, but I do not see how it would map as nicely onto the concepts mentioned above.

Btw, while attempting to put together an answer, I stumbled upon a Stackoverflow thread.

Thanks

We've decided to learn some Opa at our Friday training session:

http://news.ycombinator.com/item?id=2949026

Update: I should add that we're experimenting with holding an open session on G+. If you're interested do join us! See the link for details.

Re: FRP vs. Pi

FRP is even more of a 'closed system'. It is quite difficult to model a dynamic set of clients and a server as a single FRP behavior. But I agree that the basic idea of FRP - presenting applications as functions of current and past inputs - is suitable for distributed systems programming. (So I'm developing an open-systems variation on FRP, called RDP.)

Pi-calculus and join calculus are much more suitable for open systems: you can share references and communicate on them directly. Some people don't like the 'spooky action at a distance', though.

That said, pi-calculus and join calculus are NOT suitable for open, distributed systems. It is too easy to lose a message, or for events (possibly from different broadcast systems) to be observed in different orders causing divergence, or for someone to maliciously deny a message as a denial-of-service technique.

Opa mitigates a lot of these issues by modeling a closed distributed system - i.e. we don't have separate 'opa' services.

FRP is ambiguous

One problem is that FRP is not very well defined, whereas the pi-calculus is defined very precisely. I'm most familiar with the FrTime / Flapjax FRP variants, and I think they could handle dynamic sets of clients. The Haskell-based approaches (Arrows etc.) I'm less sure about. As I recall, they're like monads in that once you put data in (to a monad or arrow) you can't get it out again. This makes it difficult to interface with other systems. :)

FRP does seem higher level than the pi-calculus.

Father Time

After researching the history of FRP, I don't consider FrTime and Flapjax to be proper 'FRP'. More like 'FRP-inspired'. OTOH, I also don't consider ML to be 'functional programming', since I tend to favor strong definitions of 'functional': a function is a mathematical relation from domain to range; if side-effects are allowed, you're programming with first-class procedures, not functions.

I grant this still leaves FRP somewhat ill-defined (no fixed formal model), but it does at least provide me a few objective measuring sticks: FRP's concurrency is deterministic and temporal, and FRP cannot generate outputs except through an interpreter of a result signal. From these points, I can justify various useful conclusions. For example, FRP can model dynamic inputs (sources) much more effectively than it can build and compose ad-hoc outputs (sinks). 'Effects plumbing' in FRP very easily becomes tedious and annoying. This in turn means it is difficult to effectively control the input signals (e.g. it is difficult to disable a video signal whenever nobody is consuming it, so we're likely to just leave it running and filter instead). Thus, FRP is rife with effects and control issues.

On that note, I found an excellent little rant regarding the failure of FRP in the UI domain thus far.

There are, naturally, plenty of models that are similar to FRP. Synchronous reactive (as embodied in Lustre, Lucid), for example, is perhaps a decade older than FRP, and does allow side-effects. Rather than trying to welcome a model with side-effects under the umbrella of 'FRP', I would favor finding more suitable names. My own work on RDP is certainly FRP-inspired but aims to rectify these concerns regarding effects plumbing.

In any case, FRP is not 'higher level' than pi-calculus, in the sense that the abstractions of FRP are neither a superset nor a subset of those in pi-calculus.

Early FrTime work was

Early FrTime work was phrased as embedding FRP (or higher-order dynamic data flow) into a call-by-value language. That's pretty precise, reusing the terminology of Ptolemy and mainstream PL. I view such an embedding target as an *extension* of pure functional languages; the research contribution was extending the reach of FRP. In the absence of impurity, the model reduces to the traditional approach. Disciplined programming is possible, and as we found in practice, so are work-arounds that can be encapsulated.

As for the failure rant, Flapjax addressed most of the concerns. E.g., using it with jQuery widgets. Likewise, while I'm less familiar with the day-to-day of FrTime, the integration into the main Racket UI libraries was a significant effort. I think the real failure, based on observing actual use by non-PL-weenies, is that higher-order reasoning, as presented in FRP and related systems, is too far from what people currently practice (which may be a social issue).

I don't really understand what it means to base a programming model on the pi-calculus. That's like basing the language on the lambda calculus. So it has message channels (functions) -- so what? In practice, I suspect a lot more is meant, but that varies by domain and person, so it's a useless term for general-purpose languages. (Not to say it isn't useful for theoretical work on analyzing constructs in isolation.)

Extend vs. Stretch

I feel that an 'extension' of a model should not weaken its theory. OTOH, I might agree that adding impurity is 'stretching' the model, in the same sense as stretching a metaphor.

Flapjax offers a weaker theory/model than FRP. This isn't a bad thing for Flapjax - the greater flexibility allows Flapjax to address some legitimate concerns. But we cannot say that issues are solved in what Elliott or Hudak (the men who invented the term) would call FRP on the basis that they were solved for Flapjax, because Flapjax is not quite FRP.

I don't really understand what it means to base a programming model on the pi-calculus. That's like basing the language on the lambda calculus. So it has message channels (functions) -- so what?

'Based on Pi-calculus' connotes use of first-class message channels, and synchronization based on waiting for specific message channels. 'Based on lambda calculus' connotes use of pointed substitution-based evaluation, and first-class functions.

Sure, those are sort of 'ballpark-sized' annotations on a programming model. But I think getting into the right ballpark is not so useless for someone who knows a wide variety of programming models (generative grammars, temporal logic, concurrent constraint, term rewriting, neural networks, petri-nets, tuple spaces, communicating sequential processes, point-free SKI combinators, etc.)

With 'based on', of course, you don't really know much. The model could be weakened beyond much recognition. Someone could be playing buzzword bingo. I'd take it with same salt I use to swallow 'based on a true story'. (Sure! It was truly a story!)

Putting it differently, a

Putting it differently, a distributed model with first-class message channels is saying almost nothing about what makes distributed programming interesting/hard. Blocking on message reads is nice, but, compared to the rest of the baggage, of little significance wrt distribution.

I actually view message sends as the imperative equivalent of stream processing.

Why should someone say 'what

Why should someone say 'what makes distributed programming interesting or hard'? Putting it differently: why should distributed programming be interesting or hard? The goal of a good distributed programming language is exactly the opposite condition: to make distribution boring and easy. Technology should only be exciting when it's new to us.

Pi-calculus is simply one mediocre attempt to make distribution easy and boring.

Blocking on message reads is nice, but, compared to the rest of the baggage, of little significance wrt distribution.

Blocking on message reads is not nice at all - it's an easy vector for deadlock, and denial of service. It seems you're asking how properties of pi-calculus 'help', which is a very relative question. I find far more useful a question of where a model goes wrong - where 'go wrong' is synonymous with things becoming 'interesting or hard'.

I would not endorse pi-calculus for distribution. It remains 'interesting and hard' to handle node failures in a sane, consistent manner. And the path-of-least-resistance for pi-calculus still has a high level of arrival-order indeterminism, which means it is 'interesting and hard' to reason about behavior of distributed programs.

Blocking on reads can be

Blocking on reads can be fine, e.g., in a multithreaded/actor context as in Erlang, or in any language with continuations. I agree that there is an interesting question of whether it leads to a dangerous coding style (e.g., a library call can block you). Again, this is why I prefer FRP: to untangle the control (even if it can be direct).

root failure

the real failure, based on observing actual use by non-PL-weenies, is that higher-order reasoning, as presented in FRP and related systems, is too far from what people currently practice

Meh. People will practice whatever is on the 'path of least resistance'. For Flapjax, this means embedding effects, which is much easier than the effects-plumbing demanded by FRP.

They won't even be aware that they are sacrificing some forms of higher-order reasoning. By the time they want such reasoning, it won't be very feasible, and it certainly won't be the path of least resistance anymore. The emergent property is that developers never experience the opportunity or motive to learn the higher-order reasoning; they'd need great foresight and discipline to use it at all.

The root failure is that the constraints required for FRP's higher-order reasoning conflict with the design desideratum of simple effects plumbing.

If this conflict did not exist, the designers of the Flapjax language would have felt no need to compromise FRP. And, since the model would not be compromised, users of Flapjax would gradually learn higher-order reasoning as their projects scale - because such reasoning would gradually become the path of least resistance.

So, aspects inherent to FRP are the problem. Stretching FRP with embedded or hybrid effects from some other model is a rather ad-hoc, haphazard approach to developing a better model.

Why is it controversial or

Why is it controversial or surprising that higher-order reasoning might be intrinsically "hard" and mentally costly? The alternative theories are that developers are either lazy or just uneducated, and that somehow, if they just tried it, they would love it and their code would magically become better.

I find that it is more difficult to reason about control flow when we go higher order, possibly even harder than with the imperative (also indirect) code we are trying to avoid. You are very much doing things indirectly: pass this function into that function, which will then call it to... I prefer language design that is about figuring out how to avoid being higher order while remaining expressive; e.g., code should look like "sort(list, ascending)" rather than "sort(list, (x,y) => x - y)".

And let's not confuse cause and effect: that some great programmers excel at higher-order programming does not mean mastering higher-order programming will make you a great programmer. It could very well be that their brains are wired differently, that they are good at higher-order reasoning because they are special. It's always great that these people are around, but the majority still needs to get work done.

Higher order reasoning is

Higher order reasoning is not intrinsically 'hard' or 'costly'. In general, higher order reasoning is both simpler and easier than reasoning at the low level. For example, humans do not understand traffic in terms of individual cars, nor baseball in terms of biological impulses.

Our brains are well adapted to learning and applying higher level strategies and patterns, when they appear valid.

The gotcha is the 'when they appear valid' bit. Higher order reasoning is difficult in most programming languages because it is clearly not valid. Developers cannot reason about programs in most languages from a macro-view - the programming models often force developers to poke down into the nitty details of specific programs to understand the in-the-large behaviors.

I don't blame the developers. Laziness on their part is a good thing. I blame the language designers, who lack the foresight to do some extra design work up front so they can be even more lazy later on.

I find that it is more difficult to reason about control flow when we go higher order

Then perhaps 'control flow' is the wrong answer.

I prefer language design that is about figuring out how to avoid being higher order and still expressive

I am quite certain there is a significant difference between 'reasoning about higher-order programs' and 'higher-order reasoning about programs'. I'm talking about the latter.

Who Are You Calling a Weenie?

No doubt Flapjax requires some mental rewiring to use, but that's not the only story. Some issues I recall:

  • The examples were all about behaviours (continuous time signals) like the mouse position. Web developers don't care about this stuff. They are interested in events (discrete time signals), doing stuff like submitting content on user actions.
  • The API was confusing, with far too much duplication.
  • The source was virtually unmaintainable by outsiders.

That said, I think Flapjax is great. It's worth noting that current practice (backbone.js, batman.js) is slowly getting to where Flapjax is. It's also worth noting where these event-based frameworks differ in the features they provide.

I did a small case study and

I did a small case study and found that signals only really win when you have a lot of continuous behavior to tunnel through. If you are just manipulating lots of discrete events, then you get some interesting combinators and that's about it (like Rx).

Hi Noel :) Weenie in the

Hi Noel :)

Weenie in the affectionate sense of appreciating the potential to the extent of accommodating the rough stuff, both in theory and, as you noted, in practice :) Actually, Arjun has done a magnificent job of scrapping most of the v1 code. Likewise, I bet the LunaScript project is also up there now (and is especially exciting in terms of server support), and I was recently pointed to Facebook's Javelin library (though I admit I have not played with it yet!).

Your comment about events is important: Erik Meijer took a similar notion in RxLINQ, abandoning behaviors entirely. The code style that emerged for us was to take in events and munge them into behaviors as fast as possible. Pervasively messing with events can get messy -- e.g., all the clock-calculus work. Converting to a behavior is a hint that the event notion is no longer relevant. For the most part, we can just think about behaviors as atomic values, exploiting the behavioral part only when it's useful.

Outside of the higher-order issue, a bigger struggle for me with practical event abstractions was never figuring out a strong enough animation abstraction, e.g., to do PowerPoint-style composable animations, Flash's timeline, or automated walk-throughs. I had several attempts, but nothing quite right in practice.

The comment about higher-order reasoning is something I've struggled with. Is it inherent, e.g., are imperative coding practices formally equivalent (for some meaningful notion of equivalence)? If so, then whether it is a question of pedagogy or of cognitive load is extra interesting! If not, then these abstractions may still be missing something relevant to the domain. My observations come from trying it out in a UI research lab, with commercial users, with students, and on challenge problems of my own -- I don't think the question is unique to FRP. For anyone going down this rabbit hole: we found very few cases that had to use "switch," the main combinator for "going back down" in order, especially once the basic libraries are supported, and even fewer if a compiler is used (but that has its own issues).

Mental rewiring

FRP is result or effect oriented whereas event callbacks are cause oriented. They both have their areas where one is more natural than the other.

As an example take an email client with a menu on the left (compose new message, inbox, sent mail, etc) and a message list / message view pane on the right. With event handlers the specification goes something like this:

On clicking on "compose new message", display a new message editor on the right pane.

On clicking on a folder, display a list of messages in the folder on the right pane.

On clicking on one of the messages in the list, mark the message as read and display the message on the right pane.

With FRP the thinking is inverted. Instead of thinking about each cause and specifying what its result is, you think about each result and specify all the causes that affect it:

The content of the right pane is a time varying value being the merge of:
- the clicks on the "compose new message" button mapped to the new message editor
- the clicks on each folder mapped to the message list of that folder
- the clicks on the messages displayed in the right pane mapped to a display of the message

The "read" property of a message is false until the message is displayed.

The first kind of thinking is much more natural for most programmers. The event-handler specification also often results in a more efficient implementation. Take, for example, the property that indicates whether a message has been read. We might implement the part "the message is displayed" as a time-varying boolean for each message X: "the content of the right pane is currently displaying message X". The problem with this is that if there are n messages, behind the scenes this would be implemented as n event handlers listening to the contents of the right pane, each called every time the contents of the right pane change. To prevent this you would have to do tricky signal rewiring to make sure that only the messages it was possible to navigate to are listening.

How can we integrate or unify the two different approaches so that we can choose the appropriate one for each part of the program? Can you write a scheduling algorithm for event handlers that prevents glitches?

Infokinesis

How can we integrate or unify the two different approaches so that we can choose the appropriate one for each part of the program?

My Reactive Demand Programming model achieves this. Each behavior is bi-directional: you push a demand and pull a response. Demand and response are eventless, time-varying signals. Demand can have certain side-effects, such as causing records to display in a pane, or manipulating zoom and focus of a video camera, or publishing a time-varying element to a shared set. Or it might just represent a time-varying query.

Since we can easily represent continuous, time-varying queries, we can easily 'pull' info from various resources, combine their values (data fusion), then 'push' commands to other resources. Similarly, we can publish to shared sets so that other behaviors can query them. RDP was designed with these gather-scatter and publish-subscribe behaviors in mind, in particular for the diverse data models in command-and-control systems.

The result is that developers flip at will between push and pull, scatter and gather, publish and subscribe. A major reason for pushing is 'open' composition - extensibility and efficiency. The right window pane in your example only needs to know about the messages pushed to it, and doesn't need any direct relationship to their source. When developers click on a message, there might be capabilities built right into the published message to delete or manipulate it.

Can you write a scheduling algorithm for event handlers that prevents glitches?

I control glitches by use of temporal semantics.

My vats are allowed to drift in logical time (for greater parallelism) but obey a drift law: (forall A,B ∈ Vat . time A < time B + drift B). The drift law allows me to achieve determinism between vats by use of explicit delay. (Data-parallelism can augment this further, e.g. spark a computation you'll need in a few logical milliseconds.)
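
As a sketch of the invariant only (the Vat type and its field names below are hypothetical), the drift law can be written as a predicate:

-- Sketch of the drift law: every vat's logical clock must stay within
-- the permitted drift of every other vat's clock.
type LogicalTime = Integer  -- e.g. picoseconds

data Vat = Vat { vatTime :: LogicalTime, vatDrift :: LogicalTime }

-- forall A,B in Vat . time A < time B + drift B
driftLawHolds :: [Vat] -> Bool
driftLawHolds vats =
  and [ vatTime a < vatTime b + vatDrift b | a <- vats, b <- vats ]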

However, developers may choose to forgo glitch-freedom in cases where latency is more important. (RDP also supports negative-latency operations by use of anticipation.) To accommodate this, RDP is also very resilient and resistant to glitches: eventual consistency provides a safety net, stateful data models can often be retroactively corrected up to a few milliseconds, and vats provide snapshot consistency.

Snapshot consistency between vats is weaker than glitch freedom because paths through different vats to observe a resource may be inconsistent (i.e. Alice's view of Charlie might be different than Alice's view of Bob's view of Charlie). However, snapshot consistency should thoroughly eliminate most problematic glitches. Snapshot consistency is also feasible in the distributed case, by batching a bunch of updates followed by an 'apply' command.

(Drift law is not feasible in distributed case. So instead, I plan to logically disrupt a connection that has fallen behind a negotiated drift. This policy supports real-time, bounded-space, denial-of-service resistant connections.)

Too vague to infer practical implications

How would you do the email client in RDP in pseudo code?

Sketch e-mail example

Following your format above, I'd do something like this:

  • A tuple-space like state, called S, keeps a set of records based on previous user inputs. Demands on S can effect an insertion or deletion of a record.
  • When clicking "compose new message", a record for an editable blank message is inserted to S. (No effect if said record exists.)
  • When clicking a folder, we insert into S a record indicating the folder clicked and delete from S all other records indicating clicked folders.
  • When clicking on a listed message, we insert into S a record indicating we wish to view the full message.
  • A behavior queries S for 'editing' messages, and publishes appropriate commands to the GUI service to display an edit window. Edits are effected by deleting the old tuple and inserting a new one.
  • A behavior queries S for folders clicked, takes this value to query a backing database of summary messages, then publishes a list of such messages to the appropriate GUI pane.
  • A behavior queries S for message to display, queries the database for the full message, then publishes the message to the GUI service for display. When the GUI's response acknowledges valid display status, we issue another demand to the backing database to mark the message as 'read' (if it hasn't been already).

Or in short: user-inputs affect a shared data model, and display is a RESTful function of gathered data.
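
One instant of that design might be sketched as follows (in Haskell; every type and name here is hypothetical, and real RDP behaviors would be signal transformers rather than the plain functions shown):

import qualified Data.Set as Set

-- hypothetical records in the shared tuple-space-like state S
data Record = ComposeDraft
            | FolderOpen String
            | ViewMessage Int
            deriving (Eq, Ord)

type S = Set.Set Record

-- "when clicking a folder": insert its record, delete other folder records
clickFolder :: String -> S -> S
clickFolder f s =
  Set.insert (FolderOpen f) (Set.filter (not . isFolder) s)
  where
    isFolder (FolderOpen _) = True
    isFolder _              = False

-- a display behavior: the pane content is a function of the gathered state
rightPane :: S -> [String]
rightPane s =
  [ "list messages in " ++ f  | FolderOpen f  <- Set.toList s ] ++
  [ "show message " ++ show i | ViewMessage i <- Set.toList s ] ++
  [ "new message editor"      | ComposeDraft  <- Set.toList s ]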

Note that I use 'when clicking' three times. RDP is eventless. A click is modeled as a continuous button state (e.g. varying between down and up), and the 'down' state must hold for some non-zero time for it to count at all. The effects during click are similarly continuous, e.g. we are continuously 'inserting' a message into the tuple space (even if only for a picosecond). The state is necessary because we want to remember the insertion after we stop clicking.

Also, I used 'a behavior' three times. RDP is a very declarative model, in the sense that we can literally declare a set of nearly independent 'behaviors' that interact continuously in a shared environment (i.e. based on shared references to the tuple space, GUI service, and backing database). Other models with similar properties are temporal logic programming, synchronous reactive programming, discrete event calculus, temporal concurrent constraint programming, and the subsumption model.

[edited for clarity]

Thanks, that sheds a lot of

Thanks, that sheds a lot of light on your approach. How would you implement stateful behaviors, like the tuple space? What are your primitive language constructs?

What exactly is a demand? A piece of arbitrary data inserted into some set of demands of a behavior? Can you model each behavior as an actor that receives messages placeDemand(x) and removeDemand(x)?

Are behaviors first class? That is, can you place a behavior as a demand on another behavior?

How does pulling a response work? Can you model it in a kind of continuation passing style? I.e. can you instead of pulling on X, place a demand on X that contains another behavior Y, and then X puts the demand containing the result on Y?

Litany of RDP questions

You ask fair questions.

Q: How would you implement stateful behaviors, like the tuple space?

Reactive Demand Programming (RDP) is designed to orchestrate stateful behaviors (and sensors, actuators, foreign services), not to implement them. So access to state is treated like a foreign service or FFI, but with a few standard models provided by the language runtime.

Tuple spaces would either be a standard service, or modeled atop some other standard. A candidate primitive for implementing a tuple-space is a term-rewrite model where 'demands' describe a set of rewrite rules and the state is a term. The term would effectively be rewritten every instant.

I haven't written the blog article on state spaces yet, but I discuss elements of how access to a state service would be provided in an article on demand monitors, and I discuss several candidate state models in an article on stability.

Q: What are your primitive language constructs?

RDP is a programming model, not a language. But for the model primitives:

Signals with clear activity cycles (i.e. explicit starting and stopping times) serve as the values of RDP. Signals describe values that vary over time. Some signals may describe continuous values (as in: a continuous function, like (T - (2011 Sep 9)) * sine(T*pi)), while other signals may be piecewise discrete (i.e. 0 for T < (2011 Sep 9), 1 after). Activity cycles are critical for modeling disruption or non-existence of signals.

Note: you cannot model 'instantaneous' events with RDP signals. Every signal must have an activity cycle with positive duration, even if only for a picosecond. An instantaneous signal simply didn't happen, as far as the optimizers and RDP behaviors are concerned.

RDP requires a global time model. My current implementation uses a simplified UTC with picosecond precision.

RDP can support many pure-functional operations on signals - e.g. you can add two real-valued signals together to generate a new real-valued signal. The set of available functions depends more on the programming language. I can imagine languages that enforce real-time properties, i.e. by avoiding recursive functions.

RDP is structured and constrained in such a way that two signals being added are guaranteed to have the same activity cycle. Thus, the question of how we should add a real value to an inactive signal never arises. This is achieved by a 'duration coupling' property (the duration of the response equals the duration of the demand) and the required use of a synchronization primitive (synch :: (Signal s) => (s x) :& (s y) ~!> (s (x,y))) before we can operate on the (x,y) pair using a function.
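
Here is a toy model of signals with activity cycles, with all names invented for the sketch. Note that this 'synch' merely intersects activity cycles, whereas RDP's behavior structure guarantees matching cycles by construction:

-- a signal with an explicit activity cycle
type T = Double

data Sig a = Sig { activeFrom :: T, activeTo :: T, sample :: T -> a }

-- align two signals on a common activity cycle, so a lifted function is
-- only ever applied where both signals are active
synch :: Sig x -> Sig y -> Sig (x, y)
synch sx sy =
  Sig (max (activeFrom sx) (activeFrom sy))
      (min (activeTo sx) (activeTo sy))
      (\t -> (sample sx t, sample sy t))

-- adding two signals requires synchronizing them first
addSig :: Num a => Sig a -> Sig a -> Sig a
addSig sx sy =
  let s = synch sx sy
  in Sig (activeFrom s) (activeTo s) (\t -> let (x, y) = sample s t in x + y)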

A consequence of duration coupling is that time cannot be squished or stretched in the model, though shifting (delay) is allowed, as is anticipation.

RDP's main primitives are its composition operators, which are based on Arrows. My model of arrows is a variation to support asynchronous signals and enforce correct use of 'synch', based on Adam Megacz's generalized arrows.

A good RDP programming language should not need to expose the arrows model to users, who should be able to treat behaviors much like functions.

Access to sensors, state, actuators, and foreign services is not primitive to RDP any more than OpenGL is primitive to Procedural. However, a programming system must ultimately provide such access in order for an RDP program to actually do something useful.

Signals in RDP must generally be updated over time. E.g. a sensor must provide updates even if the demand doesn't change. The simplest update model is a switching signal - at time T, switch to new signal S'. However, more advanced update models are feasible, e.g. it is feasible to 'patch' a large database.

Q: What exactly is a demand? A piece of arbitrary data inserted into some set of demands of a behavior?

A demand is a signal. I call a signal a 'demand' when it is the input to a behavior. A response is also a signal, which happens to be the output from a behavior.

Q: Can you model each behavior as an actor that receives messages placeDemand(x) and removeDemand(x)?

No. This approach misses a few concerns such as security, concurrency, consistency, and provision of the response. For example, it would be a problem if Alice can placeDemand(x) then Bob can remove it!

For modeling RDP atop message-passing, I favor a more connection-oriented approach. In an imperative model, this looks something like:

updLnk <- mkLink behavior responseHandler  -- build the link; get an update function
updLnk (t0, s0)    -- initial demand: at time t0, switch to signal s0
-- ... later ...
updLnk (tk, sk)    -- update the demand signal
-- ... disconnect ...
updLnk (tf, never) -- switch to the permanently inactive signal

Each link is good for one demand. To make multiple demands, you would build multiple links. The 'removeDemand(x)' operation is subsumed by simply setting the demand to a permanently inactive signal (never). Consistency is achieved because we indicate precisely when each update occurs in logical time. (By idiomatic use of 'delay', we can ensure these times are far enough in the future that we do not experience glitches.)

A single 'updLnk' operation carries a full signal update. Thus, it can carry a lot of information about the future. For example, we could model the path of a baseball as a signal, and we'd only need one update (or perhaps a couple more to refine the prediction) rather than a bunch of periodic updates.

The 'mkLink' operation is composable, in the sense that the 'updLnk' value could be a response handler to another 'mkLink' operation. It helps that the initial demand is inactive, such that we can build a complex behavior by use of 'mkLink' then activate it all at once.

However, this 'implementation-layer' composition is tedious, difficult to optimize, and strongly discouraged. The most likely place where developers would need to muck with signals in this fashion is in adapting external sensors, state, actuators, and foreign service models.

Developers of an RDP application are expected, instead, to compose a single RDP behavior that describes the entire application. The application is then activated as a whole.

Q: Are behaviors first class? That is, can you place a behavior as a demand on another behavior?

Dynamic behaviors are supported by RDP, and essential to many RDP design patterns (including service discovery, brokering, staged programming). The whole RDP application is also considered a dynamic behavior.

There are some usage constraints. You cannot store a dynamic behavior in state (logically, dynamic behaviors are continuously revoked) and there are limits on delay if you need the response from the behavior.

Q: How does pulling a response work? Can you model it in a kind of continuation passing style? I.e. can you instead of pulling on X, place a demand on X that contains another behavior Y, and then X puts the demand containing the result on Y?

For the implementation, I provide a response handler when building the link. This might be considered a sort of staged continuation passing style.

In general, use of dynamic behaviors to model responses in RDP would be a bad idea. First, RDP's safety properties require a response to every demand with a statically computed latency, so we would need a variation of linear typing to enforce the requisite discipline. Second, optimizations based on RDP's spatial idempotence, such as multi-cast and proxy caching, are undermined when every behavior comes with a unique explicit callback. Third, keeping response implicit and universal better supports functional composition.

As is, I prefer to keep response a primitive aspect of RDP behaviors. However, this is something I thought a lot about after first conceiving of RDP.

Blackboard architecture?

I may not be fully comprehending your sketch, but it sounds a bit like a blackboard architecture. Would that be a reasonable comparison?

I think the real failure,

I think the real failure, based on observing actual use by non-PL-weenies, is that higher-order reasoning, as presented in FRP and related systems, is too far from what people currently practice (which may be a social issue).

This really would be a fatal objection to FRP: the attractiveness of FRP comes precisely from the possibility of taking deeply higher-order programs (i.e., callbacks in an event loop) and writing them in first-order style (stream transducers). If that's not possible, then so much the worse for FRP!

Luckily, I don't think this is true. If you give up lifting, it's possible to define calculi which look like naive functional programming on streams (including higher-order and first-class streams), but which actually only permit causal, memory-bounded stream programs.
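
As a generic illustration of the idea (not the poster's calculus), a causal stream function can be represented as a Mealy machine: each output depends only on the inputs seen so far, and the state is whatever the closure captures, so memory use stays explicit and bounded:

newtype SF a b = SF (a -> (b, SF a b))

-- run a causal stream function over an input stream
runSF :: SF a b -> [a] -> [b]
runSF _ []            = []
runSF (SF f) (x : xs) = let (y, sf') = f x in y : runSF sf' xs

-- a one-step delay, initialized with a seed value
delaySF :: a -> SF a a
delaySF x0 = SF (\x -> (x0, delaySF x))

-- a running sum, with bounded state (one accumulator)
sumSF :: Num a => a -> SF a a
sumSF acc = SF (\x -> let acc' = acc + x in (acc', sumSF acc'))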

If you're coming to ICFP, I'll have a demo. :)

And for the unwashed masses...

...or at least those not making it to ICFP, please do share it with us here when you can! I know posting your own work is discouraged, but I'm quite positive that someone else would be glad to do the honors.

Are you going to attend

Are you going to attend SPLASH this year?

Don't think so

I wasn't planning on it. But now that you mention it, maybe I still could...

No...

...unfortunately. I had been hoping to talk with you before I left MSR, but what I was doing didn't reach a sufficient level of maturity until very recently, just as my postdoc is ending. I do hope to write a submission to PLDI on this stuff, which will be a case study of a graphics editor or something else a step beyond the usual tiny demos -- maybe a port of JHotDraw, to make the SE crowd groan with horror/fatigue. :)

PLDI/ECOOP are in Beijing

PLDI/ECOOP are in Beijing for 2012, so if you go, definitely look me up!

Google App Engine?

App Engine targets only the server-side, but there it has the interesting property that applications are automatically "Google-scalable".

Ur/Web and concurrency

I'll just point out that Ur/Web has interesting features for message-passing concurrency, too. I don't know much about "join calculus," so I can't propose an example illustrating a practical consequence of the different design decisions here in Ur/Web vs. OPA.

Also, I think "distribution" beyond "one server and many clients" is irrelevant for 99% of web applications, so that it doesn't much matter what "support" a programming environment has for such things. (Support for one or more hot-standby copies of server state seems much more compelling to me for most applications.)

99% of web applications

"distribution" beyond "one server and many clients" is irrelevant for 99% of web applications, so that it doesn't much matter what "support" a programming environment has for such things

In 1886, people might have looked at the Hall-Héroult electrolytic process and said something similar: aluminum is irrelevant for 99% of material requirements today, so it doesn't much matter what 'support' we provide for mass-producing it.

The problem with that sort of argument is that 'requirements' are themselves developed within the constraints of the available tools, and thus visible demand is a function of cost. 'Opportunity cost' is rarely visible and thus difficult to account for.

I'll just note that your

I'll just note that your comment provides no argument that multiple physical servers are part of the most cost-effective strategy for more than 1% of web applications today. What you say just as effectively "debunks" a claim that there's no reason to worry about how well programs run on computers built from networks of monkey brains. ;)

Measuring cost-effectiveness

Measuring cost-effectiveness fairly is somewhat difficult, since one gets to cherry-pick which traits he is looking for, and it is still heavily influenced by tooling. For what % of web applications would formal methods serve as a most cost effective strategy today?

There are many classes of web app that benefit from decentralization and distribution - such as wikis, search engines, bulletin boards, mailing lists, telepresence, forums, and DVCSes. How cost-effective is it to use a central server in these cases? How do we measure the 'cost' per minute of downtime or failure?

With regards to running programs in the brains of monkeys, who knows... maybe that could work out, if we had the right tooling and wanted more creativity from our servers. Would it be cost-effective compared to neural networks or generative grammars? Though, it does sound morally reprehensible.

Neat

I look forward to hearing more as it becomes available. The fact that Gilad Bracha is involved gives me a lot of hope.

I don't believe Gilad is

I don't believe Gilad is involved in Dart, he is just a speaker at the same conference.

[Edit] Gilad is at Google now? Totally missed that. Nevermind, something is up.

You can read a discussion of

You can read a discussion of this here (where you can even up- or down-vote individual comments, yay).