Gamma formalism

We remember all too well a recent discussion on LtU in which the OP complained about PLs being too low level. If I understood correctly, the author's goal was a language that is essentially data-driven and has no notion of control flow. The control flow should somehow emerge from the "need" for data.

After a while this idea started working in my mind, and I remembered reading some papers in the late '90s about the "chemical reaction metaphor" in computing, which bears some resemblance. I'm not sure the work of Banatre et al. on the Gamma formalism was ever mentioned on LtU, so I refer to some papers about Gamma and related topics I found on the Web (a small sketch of the basic model follows the links):

The chemical reaction metaphor
Structured Gamma
Autonomic Computing
Higher Order Chemistry
Parallel Computing
AspectGamma
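
For those who don't want to chase the links right away: a Gamma program is essentially a pair of a reaction condition and an action, applied over and over to elements of a multiset until nothing reacts any more. Below is a toy rendering of that model in C# (my own sketch, not code from the papers). Note that no order of reactions is prescribed; whatever control flow there is emerges from the data itself.

using System;
using System.Collections.Generic;
using System.Linq;

// A toy rendering of Gamma-style multiset rewriting: a program is a
// reaction condition plus an action, applied to pairs drawn from a
// "soup" until no pair reacts any more. (A sketch, not code from the
// papers linked above.)
class GammaSketch
{
    static List<int> Gamma(List<int> soup,
                           Func<int, int, bool> condition,
                           Func<int, int, int> action)
    {
        bool reacted = true;
        while (reacted)
        {
            reacted = false;
            for (int i = 0; i < soup.Count && !reacted; i++)
                for (int j = 0; j < soup.Count && !reacted; j++)
                    if (i != j && condition(soup[i], soup[j]))
                    {
                        int product = action(soup[i], soup[j]);
                        // Consume both reactants, release the product.
                        soup = soup.Where((_, k) => k != i && k != j).ToList();
                        soup.Add(product);
                        reacted = true;
                    }
        }
        return soup;
    }

    static void Main()
    {
        // "Replace any x, y by x + y": the stable soup holds the sum.
        var result = Gamma(new List<int> { 1, 2, 3, 4 },
                           (x, y) => true,
                           (x, y) => x + y);
        Console.WriteLine(result.Single()); // 10
    }
}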


Another reference to related work ...

is Berry and Boudol's Chemical Abstract Machine.

Oh bother: that link is available only to ACM people; this retrospective paper (downloads a PS file) is from Boudol's page at Inria (where other relevant links may be found as well).

Dataflow Oriented Programming

Kay Schluehr: "If I understood correctly, the author's goal was a language that is essentially data-driven and has no notion of control flow."

This sounds a lot like the language I am starting on, Pipeline. A program in Pipeline is basically just a collection of nodes whose outputs feed into the input queues of other nodes. The only control flow (if you even want to call it that) is multiplexers and demultiplexers. I know that doesn't sound right at first glance, but it really does give you the control you need. Conditionals are pretty easy; loops are weird, but macros can hide that away well enough. Hopefully I can put some more information on my Pipeline blog and get a decent interpreter going, then post it here for praise/critique/discussion/etc. There is not much to show or explain yet, as I only started the blog less than 24 hours ago.
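
To make the mux/demux point a little more concrete, here is roughly what I have in mind for a demultiplexer node, sketched in C# (just a sketch; the real semantics aren't pinned down yet):

using System;
using System.Collections.Generic;

// A demultiplexer routes each data value to one of two output queues,
// chosen by a boolean arriving on a second input queue -- a conditional
// without any 'if' in the program graph.
class Demux<T>
{
    public Queue<T> DataIn = new Queue<T>();
    public Queue<bool> SelectIn = new Queue<bool>();
    public Queue<T> OutTrue = new Queue<T>();
    public Queue<T> OutFalse = new Queue<T>();

    // A node fires whenever all of its input queues have a value ready.
    public void Step()
    {
        while (DataIn.Count > 0 && SelectIn.Count > 0)
        {
            T value = DataIn.Dequeue();
            (SelectIn.Dequeue() ? OutTrue : OutFalse).Enqueue(value);
        }
    }
}

class Demo
{
    static void Main()
    {
        var d = new Demux<int>();
        d.DataIn.Enqueue(42);
        d.SelectIn.Enqueue(true);
        d.Step();
        Console.WriteLine(d.OutTrue.Dequeue()); // 42 took the 'true' branch
    }
}

A multiplexer is the mirror image, merging two queues into one under the control of a select queue; downstream nodes reading OutTrue and OutFalse play the roles of the two branches of a conditional.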

My point here is that I think data-oriented programming can and should be explored further than it has been so far. It has potential, and could rid us of a lot of common headaches in imperative and functional languages. Seriously, reading about so many problems with existing languages here, after thinking hard about a dataflow-oriented language, really shows it to me. Most problems with types, concurrency, state changes, etc. seem to either become simple or not apply in a dataflow-oriented language.

This is my first post here, but I will be sticking around. I will be starting a new topic about this specifically at some time in the future. I am not ready yet.

sounds similar to SuperGlue

I designed a language based on connections, objects, and FRP signals (not streams) as part of my dissertation. Here is a paper presented at this year's ECOOP about it.

Neat. This reminds me,

Neat. This reminds me, somewhat, of Cocoa bindings. Useful stuff, always.

However, it doesn't seem anywhere near the same as what I am trying to do with Pipeline. ... I'll just have to make a more complete specification of the semantics and bring it up later.

[Edit] Has anybody noticed how the MVC paradigm always seems to work more smoothly when each component is implemented in its own DSL? SQL databases and adapters tend to work perfectly for models, SuperGlue/bindings for controllers, and nibs/templates for views? (I'm bringing up random examples from Java/SuperGlue, Objective-C/Cocoa, and Ruby/Rails environments here.)

DSLs

Has anybody noticed how the MVC paradigm always seems to work more smoothly when each component is implemented in its own DSL? SQL databases and adapters tend to work perfectly for models, SuperGlue/bindings for controllers, and nibs/templates for views?

That is because programming languages force multiple values into an array or a stream, while DSLs often abstract multiple values away so you can treat them as one value. You don't need a loop in SQL or XPath. I'm toying with that idea in my toy programming language Moiell, so that you can, e.g., write quicksort like this:

?QSort :: ?first, ?rest => 
  QSort(rest < first)
  first
  QSort(rest >= first)

Just like most DSLs, programming this way tends to let you program closer to normal speech, e.g. QSort(rest < first) for "quicksort the rest smaller than the first".
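
For contrast, the same shape spelled out in a general-purpose language (C# with LINQ; my own transliteration, just for comparison) has to make all the plumbing around the multiple values explicit:

using System;
using System.Collections.Generic;
using System.Linq;

class QSortDemo
{
    // The same filter-based quicksort as the Moiell program above,
    // transliterated into C#/LINQ for comparison.
    static IEnumerable<int> QSort(IEnumerable<int> xs) =>
        !xs.Any()
            ? xs
            : QSort(xs.Skip(1).Where(x => x < xs.First()))
              .Append(xs.First())
              .Concat(QSort(xs.Skip(1).Where(x => x >= xs.First())));

    static void Main() =>
        Console.WriteLine(string.Join(", ", QSort(new[] { 3, 1, 4, 1, 5 })));
        // prints: 1, 1, 3, 4, 5
}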

Pipelines

Pipelines can also play a major role in mainstream PLs, where one might create architectural design patterns using them. My prime example is C#. Here a Connect attribute is defined which is parameterized with "in" and "out" and which can be attached to properties:

[Connect("in")]
bool hasUserPin
{
    get{ return _PIN }
    set{ _PIN = value}
}

This property is defined in an arbitrary class A that is derived from the base class Connectable. When an instance of A is created, the reflective framework (via a constructor call of Connectable) searches for a property with the same name and type as hasUserPin but equipped with a [Connect("out")] attribute, and sets the value of the in-connected property to that of the out-connected property.
The process requires a living object b of a class B that defines the out-connected property. If one is not yet available it might just be created; some method defined by Connectable, e.g. execute(), will be applied, and the value of hasUserPin in B will finally be fed into the in-connected hasUserPin property of A. The classes A and B don't need to know each other, i.e. A does not need to hold an instance of B or vice versa. It is only the reflective infrastructure that couples A and B, but it also doesn't know about A and B explicitly.
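
A bare-bones version of such a framework might look like the following (just a sketch of the mechanism described above; a real implementation would need object lifetime management and the execute() step):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
class ConnectAttribute : Attribute
{
    public readonly string Direction;
    public ConnectAttribute(string direction) { Direction = direction; }
}

class Connectable
{
    // Registry of "living" objects that may provide out-connected values.
    static readonly List<Connectable> Instances = new List<Connectable>();

    protected Connectable()
    {
        foreach (var pin in GetType().GetProperties()
            .Where(p => p.GetCustomAttribute<ConnectAttribute>()?.Direction == "in"))
        {
            // Look for a living object with an out-connected property of
            // the same name and type, and copy its value into ours.
            var match = Instances
                .Select(o => (obj: o, prop: o.GetType().GetProperty(pin.Name)))
                .FirstOrDefault(t => t.prop != null
                    && t.prop.PropertyType == pin.PropertyType
                    && t.prop.GetCustomAttribute<ConnectAttribute>()?.Direction == "out");
            if (match.prop != null)
                pin.SetValue(this, match.prop.GetValue(match.obj));
        }
        Instances.Add(this);
    }
}

class B : Connectable
{
    [Connect("out")]
    public bool hasUserPin { get; set; } = true;
}

class A : Connectable
{
    [Connect("in")]
    public bool hasUserPin { get; set; }
}

class WiringDemo
{
    static void Main()
    {
        var b = new B();                 // provides the out-connected value
        var a = new A();                 // picks it up on construction
        Console.WriteLine(a.hasUserPin); // True
    }
}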

Similar to Seam?

Judging by the description, the "Connectable" approach is similar to Seam bijection.
And, of course, to Qt's Signals and Slots.

This

This paper of mine, to be presented in August, explains the design of a small embedded language, hosted in OmniMark. You might find it interesting. The language is data-driven and contains conditionals and various loop-building combinators. Since the venue is Extreme Markup 2006, the primitive components shown in the paper are oriented towards XML processing, but the same combinators (i.e., language constructs) could be used in many other domains.

I'm thinking about reimplementing the same thing in Haskell and applying it to the shell-scripting domain.

If I understood correctly

If I understood correctly, the author's goal was a language that is essentially data-driven and has no notion of control flow. The control flow should somehow emerge from the "need" for data.

The "need for data" idea reminds me of goal oriented computing. Based on the goal the program finds what it needs through rules and facts. If we think in terms of backtracking there is also a "pipeline" interpretation; data flows from the deepest nodes down to the goal.

Using rules and facts has another benefit: it provides a built-in semantics.

Edit: An interesting perspective on this can be found in the history of Planner etc. Notice the evolution from backward chaining to the Actor model. In broad terms I see this as the more familiar forward/backward-chaining debate. Both points of view provide part of the answer. We have modern versions of the Actor model; what would happen if we brought backtracking into the 21st century?
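
To illustrate how control flow can fall out of the need for data, here is a tiny propositional backward chainer in C# (my own sketch, not Planner): proving a goal recursively demands exactly the facts and rules that can establish it, and nothing else runs.

using System;
using System.Collections.Generic;
using System.Linq;

// Proving a goal "pulls" on whatever rules can produce it, so the
// control flow emerges from the need for data.
class BackwardChainer
{
    static readonly HashSet<string> Facts =
        new HashSet<string> { "rain", "cold" };

    static readonly List<(string Head, string[] Body)> Rules =
        new List<(string, string[])>
        {
            ("wet", new[] { "rain" }),
            ("icy", new[] { "wet", "cold" })
        };

    // A goal holds if it is a fact, or if some rule with that head
    // has a body whose subgoals all hold.
    static bool Prove(string goal) =>
        Facts.Contains(goal)
        || Rules.Where(r => r.Head == goal).Any(r => r.Body.All(Prove));

    static void Main() =>
        Console.WriteLine(Prove("icy")); // True: icy <- wet, cold; wet <- rain
}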