SuperGlue

Hi, I'd like to announce our paper "SuperGlue: Component Programming with Object-Oriented Signals," which will be presented at ECOOP next month. It's related to functional-reactive programming, with a more declarative and object-oriented way of manipulating signals.

Abstract:

The assembly of components that can handle continuously changing data results in programs that are more interactive. Unfortunately, the code that glues together such components is often difficult to write because it is exposed to many complicated event-handling details. This paper introduces the SuperGlue language where components are assembled by connecting their signals, which declaratively represent state as time-varying values. To support the construction of interactive programs that require an unbounded number of signal connections, signals in SuperGlue are scaled with object-oriented abstractions. With SuperGlue's combination of signals and objects, programmers can build large interactive programs with substantially less glue code when compared to conventional approaches. For example, the SuperGlue implementation of an email client is around half the size of an equivalent Java implementation.

Paper available here.


Functional Reactive Programming in Oz

PvR has been writing about how to do Functional Reactive Programming in Oz:

I just learned about an interesting programming paradigm called Functional Reactive Programming. This is like functional programming except that each function can accept a stream of input values. If a new input value arrives on an argument, then the function is reevaluated with all the most recent input values and a new output value is created. This is a kind of dataflow programming. It can handle nondeterminism, so it is more expressive than declarative concurrency. In fact, it can be implemented with the declarative concurrent model extended with WaitTwo. (This means that it is similar in expressiveness to concurrent logic programming!)
Updated versions of the code can be found here and here.
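For readers who don't know Oz, here is a minimal sketch of the reevaluation idea in Java rather than PvR's actual code (the class and method names are hypothetical): a two-input node caches the most recent value seen on each input and reevaluates its function whenever a new value arrives on either one.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.function.BinaryOperator;

    // A two-input reactive node: whenever a new value arrives on either
    // input, the function is reevaluated with the most recent values of
    // both inputs and the result is pushed downstream.
    class ReactiveNode2 {
        private final BlockingQueue<Integer> output = new ArrayBlockingQueue<>(64);
        private Integer lastA, lastB;                 // most recent input values
        private final BinaryOperator<Integer> f;

        ReactiveNode2(BinaryOperator<Integer> f) { this.f = f; }

        synchronized void onA(int a) { lastA = a; fire(); }
        synchronized void onB(int b) { lastB = b; fire(); }

        private void fire() {
            if (lastA != null && lastB != null)       // wait until both inputs have arrived once
                output.offer(f.apply(lastA, lastB));
        }

        BlockingQueue<Integer> out() { return output; }
    }

For example, new ReactiveNode2(Integer::sum) pushes 3 downstream after receiving 1 on its first input and 2 on its second.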

computation spaces?

PvR: "We use computation spaces to wait until all threads suspend simultaneously."

I'm not familiar with Oz, so I don't know about its computation spaces. Does this scheme work if the upstream threads do not form a strict hierarchy (they share branches with other computations), or if the current thread is connected with others in a recursive loop?

Threads in computation spaces

I don't completely understand your question. A computation space contains an arbitrary set of threads, which are situated inside the space. The threads just execute concurrently in the usual fashion. There is no hierarchy between the threads that influences this. The space has a global synchronization operation that can wait until all the threads have finished their work.

There are rules governing how information is communicated into and out of a space. These rules are designed to be useful for constraint programming. One rule is that a space cannot bind a dataflow variable that is declared outside of the space; it can only read that variable. So I define two Oz ports outside of the space. The first port's stream is read inside the space, so sending to the first port sends information into the space. The second port's stream is read outside the space, so sending to the second port from inside the space sends information out of it.
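A rough Java analogue of this two-port scheme (hypothetical names; a worker thread and two blocking queues stand in for the space and its ports) might look like this. The worker only reads from the inbound queue and only writes to the outbound queue, mirroring the rule that a space cannot bind variables declared outside it.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Rough analogue of the two-port scheme: the "space" is a worker
    // thread that only reads from the inbound queue and only writes to
    // the outbound queue; all other shared state stays outside.
    class TwoPortSpace {
        final BlockingQueue<String> in  = new LinkedBlockingQueue<>();  // written to from outside
        final BlockingQueue<String> out = new LinkedBlockingQueue<>();  // read from outside

        void start() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String msg = in.take();        // information flowing into the space
                        out.put("processed: " + msg);  // information flowing out of the space
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }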

threads in computation spaces

…but inside the space there are possibly redundant calculations being performed? If one has a static acyclic graph, then nodes can be scheduled with priorities assigned according to their index in a topological sort, so that no redundant calculations are ever performed. I'm trying to understand to what extent computation spaces are a solution for preventing redundant calculations in a dynamic graph. It seems that within the space there can be redundant calculations, but the space prevents redundant firings from being observed leaving the space. Is that correct?

I have a real-time system where any subset of a set of input ports may fire simultaneously, and I only want to compute the subgraph that depends on the ports that actually fired at the same time. I only know how to do this with a global topological sort. No local firing rules I know of work for all cases. Topological sorts are expensive to do in real time, and so this encourages static graphs. I have used hierarchical graphs, where subgraphs are spawned within nodes of a parent graph, in order to allow some dynamic graph topology.
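As a concrete (and purely illustrative) sketch of the topological-sort scheduling described above, assuming a static acyclic graph whose topological order has already been computed, the firing rule might look like this in Java:

    import java.util.*;

    // Nodes carry a priority equal to their index in a topological sort
    // of the (static) dependency graph. When any subset of input ports
    // fires simultaneously, affected nodes are evaluated in topological
    // order, so each node fires at most once per batch and never sees a
    // stale input.
    class TopoScheduler {
        static class Node {
            final int topoIndex;               // position in a precomputed topological sort
            final List<Node> dependents = new ArrayList<>();
            final Runnable recompute;          // reevaluate from current input values
            Node(int topoIndex, Runnable recompute) {
                this.topoIndex = topoIndex;
                this.recompute = recompute;
            }
        }

        // Evaluate everything downstream of the ports that fired in this instant.
        static void fire(Collection<Node> firedPorts) {
            PriorityQueue<Node> dirty =
                new PriorityQueue<>(Comparator.comparingInt(n -> n.topoIndex));
            Set<Node> queued = new HashSet<>(firedPorts);
            dirty.addAll(firedPorts);
            while (!dirty.isEmpty()) {
                Node n = dirty.poll();
                n.recompute.run();             // runs exactly once per batch
                for (Node d : n.dependents)
                    if (queued.add(d)) dirty.add(d);
            }
        }
    }

Because every dependent has a higher topological index than its sources, the priority queue guarantees that each affected node fires exactly once per batch, and only after all of its updated inputs have settled.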

Scheduling constraints and computation spaces

inside the space there are possibly redundant calculations being performed?

Yes. Threads inside the space are scheduled according to the usual preemptive round-robin scheduler.

the space prevents redundant firings from being observed leaving the space

Yes. The space makes it possible to put a global scheduling constraint on all the threads inside it: wait until all threads are blocked. This will keep redundant firings from being observed.

No local firing rules I know of work for all cases.

Yes, that is what our empirical investigations showed too. The space adds a global firing rule.

Topological sorts are expensive to do in real time

I agree. I think that the solution using computation spaces is quite efficient in many cases: the amount of redundant computation is small and bounded by a constant factor. In particular, the whole subgraph is not recomputed when a new event arrives, but only the part with changed values.

The space's scheduling constraint is implemented efficiently. I think that the design rule "no premature optimization" applies here. I would not do a topological sort until I had proof that my simpler solution was too expensive.
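A rough Java analogue of this strategy (hypothetical names; tasks finishing stand in for Oz threads suspending) is to let internal propagation run freely, wait for quiescence, and only then release the settled outputs as one batch:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    // Rough analogue of the space's global scheduling constraint:
    // internal propagation may do some redundant work, but results are
    // only released once every propagation task has finished, i.e. once
    // the "space" is quiescent.
    class QuiescentStep {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);
        private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();

        void emit(String value) { pending.add(value); }   // called from inside tasks

        // Run all propagation tasks for one input event, wait for
        // quiescence, then hand the settled results out in one batch.
        void step(List<Runnable> tasks, Consumer<List<String>> observer)
                throws InterruptedException {
            CountDownLatch done = new CountDownLatch(tasks.size());
            for (Runnable t : tasks)
                pool.execute(() -> { try { t.run(); } finally { done.countDown(); } });
            done.await();                                 // wait until all tasks are done
            observer.accept(new ArrayList<>(pending));    // redundant firings never escape early
            pending.clear();
        }
    }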

and across the network?

Is your implementation easy to pull outside a single VM? Could it be expanded so that VMs across a network can react to each other? Can you add caching so that events are grouped together before transmission (so, perhaps, a "composite" event is generated only at certain intervals, or when the incoming rate drops)? What other kinds of "load-balancing" might be possible?

I think this is very interesting and useful - it just cropped up in a discussion today. I would like (only in my dreams - there's no plan to actually do this) to provide a "spreadsheet-like" interface to distributed data processing, and this is the kind of foundation it needs. Apologies that I've only briefly skimmed the paper so far - perhaps the questions above are already addressed there.

Components can be

Components can be implemented with Java code, so it's possible for signals to be implemented in any way that's convenient (e.g., with sockets). The biggest constraint is that state changes must be pushable. Caching components can be interposed in programs, much as registers can be interposed between hardware components. This capability leads to interesting issues with glitches, which our implementation deals with.
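As a purely hypothetical illustration of the caching question raised above (this is not SuperGlue's implementation), an interposed caching component might buffer pushed changes and forward them downstream as one composite event at fixed intervals:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    // Hypothetical interposed "caching" component: it absorbs pushed
    // state changes and forwards them downstream as one composite event
    // at a fixed interval, so a chatty upstream signal does not flood
    // the network.
    class BatchingRelay<T> {
        private final List<T> buffer = new ArrayList<>();
        private final Consumer<List<T>> downstream;

        BatchingRelay(Consumer<List<T>> downstream, long periodMillis) {
            this.downstream = downstream;
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(this::flush, periodMillis, periodMillis,
                                      TimeUnit.MILLISECONDS);
        }

        synchronized void push(T change) { buffer.add(change); }  // upstream pushes here

        private synchronized void flush() {
            if (!buffer.isEmpty()) {
                downstream.accept(new ArrayList<>(buffer));  // one composite event
                buffer.clear();
            }
        }
    }

A variant could flush when the incoming rate drops instead of on a fixed timer, which is one way to realize the "composite" events asked about above.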