Context Driven Scheme Objects

I have been quite intrigued by DCI (Data, Context, Interaction), an architecture by Jim Coplien and Trygve Reenskaug, the father of MVC.

I made up some conceptual code (in Scheme) for you to feast your eyes on. Maybe this makes sense to you too? I find it works quite well for thinking about real-world situations, which makes translating my mental model into code easier. There is much more to it from a technical point of view; however, I want you to grasp the structure and semantics of the code just by reading the source. So please go ahead and review :-)

A frogs world: http://pastebin.com/m3f71adbe

DCI reference: http://www.artima.com/articles/dci_visionP.html


Frogs

If you want to improve your frog model, you might want to look at the classic article "What the Frog's Eye Tells the Frog's Brain". Two of the authors, McCulloch and Pitts, are among the founders of neural networks and connectionism generally.

thanks. But I really don't

thanks. But I really don't want to improve upon it. I chose the frog arbitrarily; it could really be any other animal for all I care. I'm not sure if my concept is getting through to you :P

The code I posted is crap.

The code I posted is crap, in a technical sense at least.
What needs to be solved is how to propagate the context as the frog object is moved around in the world. Suggestions?

I can certainly provide

I can certainly provide suggestions. First, if you like Trygve's ideas (which, aside from that bad article you linked, are still some of the most cutting-edge in OO theory, even after 15 years), then check out his book on OOram. A lot of his points on interaction, though, are solved by UML-based toolkits today. The major problems are: (a) efficient implementation for systems that don't naturally map at the machine level to the OO problem-solving style [there are many where a hierarchy of descriptions or some other approach to describing the problem domain is more natural for compilers to consume]; (b) lack of education: it will take at least two more generations of programmers before these tools become mainstream.

The big problem, though, is that you are modeling a frog in isolation and not considering the problem domain. Roles only get you so far, although their incorporation into the object language's type system via traits is a very clever way to implement the GoF State pattern. In fact, it probably represents the best PLT approach to doing so safely today.

More precise feedback

Here is where you, and Trygve along with Jim Coplien (in the article you linked), mess up:

;; we issue a movement command on the object '#01'
(message-to-object #01
  (execute-trait (swim 3)))

You are creating controller DoIt() messages, mapping messages directly to procedures. Here, it is a DoSwimming() message. This basically implies the frog is not an autonomous entity, which follows directly from the fact that you are modeling the frog in isolation. Now, because traits are fairly cool, you can factor your code such that the message you are sending is actually one about the frog's context in the world. So you tell the world something just happened by announcing an event to the frog, and the frog says "given this information, now is a good time to swim". This simple model of reality allows you to build fairly robust simulations.

You actually don't even need to junk the swim method. You just remove hardwired sequences of collaboration by removing the direct link from the top-level interpreter to the frog object. Thus you can compose more robust interactions by allowing each object (or group of objects in a composition) to account for its own state machine. This makes what will happen next in the system very simple, and the interesting behavior derives from the complex interaction of these very simple state transitions.
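The decoupling described here might be sketched as follows (in Python rather than the thread's Scheme, and with hypothetical names like World, Frog, and on_event that are not from the original code): the top level never sends a DoSwimming() command; it announces an event, and the frog's own state machine decides to swim.

```python
class Frog:
    """An autonomous frog: it reacts to announced events rather than
    obeying direct DoIt()-style commands from the top level."""

    def __init__(self):
        self.position = 0

    def swim(self, distance):
        # The swim method survives; only the hardwired call to it goes away.
        self.position += distance

    def on_event(self, event):
        # The frog owns its state machine: given information about the
        # world, it decides that now is a good time to swim.
        if event == "water-rising":
            self.swim(3)


class World:
    """The top level announces events; it never calls swim() directly."""

    def __init__(self):
        self.inhabitants = []

    def announce(self, event):
        for obj in self.inhabitants:
            obj.on_event(event)


world = World()
frog = Frog()
world.inhabitants.append(frog)
world.announce("water-rising")  # the frog chooses its own reaction
print(frog.position)            # -> 3
```

The interesting point is that the interpreter-to-frog link is gone: composing more inhabitants with their own on_event handlers yields richer behavior without any change to the world's announcement loop.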

Very insightful

Eight bits forming a hug from me to you. Seriously, thanks a bunch :)

A paper request.

Maybe you could write a paper on this, elaborating more on this common misunderstanding?

"It's not my job"

There is a really simple explanation for why people misunderstand things: alternatives that look familiar.

People familiar with Structured Design and Structured Programming took advantage of the block-structured semantics of OOPLs to keep doing structured programming in languages like Java:

A = B.doProcessingOfTypeA()

This creates contextual call-site relationships between A and B, increasing the likelihood A will depend directly upon B's implementation. Methodologically, it also means that code is a direct implementation of a specific situation as opposed to a whole problem space.

Such constructs are really only there to give you assembly-language-style building blocks, not to compose systems. When you are always doing

A = B.doProcessingOfTypeA()

then you will naturally gravitate towards Presentation-Abstraction-Control-style architectures and the Frame-Oriented Programming paradigm, because the only way you can reuse A is to directly compute B.doProcessingOfTypeA(). This has the very direct effect of producing the "spaghetti code" phenomenon. It's literally the structured-programming recipe for spaghetti.
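The coupling being described can be made concrete with a small sketch (Python, with hypothetical class names standing in for A and B): in the first version, A's call site hard-wires B's implementation, so reusing A means recomputing B.doProcessingOfTypeA(); in the second, A depends only on a role.

```python
class B:
    """Some concrete collaborator."""
    def do_processing_of_type_a(self):
        return 42


class TightlyCoupled:
    """Structured-programming style: the call site names B directly,
    so this code encodes one specific situation, not a problem space."""
    def compute(self):
        return B().do_processing_of_type_a()  # reusable only via B


class LooselyCoupled:
    """Role-based style: the collaborator is bound by the surrounding
    context, so any object playing the 'processor' role will do."""
    def __init__(self, processor):
        self.processor = processor

    def compute(self):
        return self.processor.do_processing_of_type_a()


print(TightlyCoupled().compute())     # -> 42, but B is baked in
print(LooselyCoupled(B()).compute())  # -> 42, and B is swappable
```

Both versions produce the same answer; the difference is that only the second lets a context rebind the collaborator without editing A's source.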

Most early languages made it really easy to do this, so people did it. While doing so, they also spent little time on design and wrote no tests, so their designs were ultimately hard to refactor safely as they learned more about the problem domain. On top of this, languages like Java made it really easy to misuse OO constructs like exceptions for design (checked exceptions promote exceptions for critical failure into a general-purpose construct for all error handling). This is mainly because these languages lacked something, which encouraged using a feature for the wrong purpose; for example, Java lacked things such as fuzz-testing tools and design-by-contract. Fast-forward to today, and whole platforms like .NET 4 will have concolic testing backed by contracts. This encourages writing systems that are easy to test and to "change without changing", or change with minimal change. Designers of complex systems do not like huge changes.

So what you are seeing is the object-oriented approach growing up: thanks to a better ecosystem, techniques like model checking are being used more and more, and people have the right tools and the right know-how to catch design errors earlier and earlier, driving down the cost of software.

Now, as for a guess as to why people read about the DCI architecture, or even MVC, think "Great! Let me code that up", and write what you wrote in your example: it's as simple as people not wanting to change. They would rather write 20 more lines of code and create massive redundancy than take a simple model-driven approach. The key is simply focusing on responsibilities and collaborations, which requires knowing what problem you want to solve, asking questions, etc.

Does it seem to others that...

Does it seem to others that DCI is just another approach to try to attack the expression problem?

As Alexey Radul stated in

As Alexey Radul stated in his PhD thesis:


In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells.

ref. http://web.mit.edu/~axch/www/phd-thesis.pdf
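The quoted idea of "stateless machines interconnected with stateful storage cells" can be sketched in miniature (a Python toy, not Radul's actual system; the Cell and adder names are made up for this illustration): cells hold partial information, and propagators fire whenever their inputs become known.

```python
class Cell:
    """Stateful storage cell: holds at most one value and notifies
    its attached propagators whenever that value is filled in."""

    def __init__(self):
        self.content = None
        self.neighbors = []

    def add_content(self, value):
        if value is None or value == self.content:
            return  # nothing new to learn
        if self.content is not None:
            raise ValueError("contradiction")  # two different answers
        self.content = value
        for propagate in self.neighbors:
            propagate()


def adder(a, b, out):
    """Stateless machine: once both inputs are known, fill the output.
    It keeps no state of its own; all state lives in the cells."""

    def propagate():
        if a.content is not None and b.content is not None:
            out.add_content(a.content + b.content)

    for cell in (a, b):
        cell.neighbors.append(propagate)
    propagate()  # in case the inputs are already known


x, y, total = Cell(), Cell(), Cell()
adder(x, y, total)
x.add_content(3)
y.add_content(4)
print(total.content)  # -> 7, computed by propagation, not by a caller
```

Note there is no top-level "compute" call: the answer appears as a side effect of information arriving in the network, which is the shift away from the single-computer-with-global-memory picture the quote describes.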

What computer science has always been about is, first, data: "what it is", also called the domain description. And second, algorithms: "how to do things". When do these two kids meet in real life? That is called a context(!), and it is what makes up every programming paradigm, or every software architecture, I think. They all try to figure out when they meet, and how they meet. There is a lot of buzzwording about this, but at the end of the day, they are all trying to express *that* infamous silver bullet. So I think everyone is trying to *express* their mental models using their own bits and go-abouts, whether that be FRP, Propagation, or DCI. However, I think we are close to nailing the third pillar in computer science: the context pillar.

That's a neat paper (but

That's a neat paper (but very long)

The only downside I see to the thesis is its lack of rigor in disproving just about anything; it's jovially optimistic. Why is there no comparison to dataflow, etc.?

As for "the context pillar", I think there have been many approaches to solving this problem that just look at basic theory of computation. OO Analysis depends on the notion of finite state automata or petri nets to model systems.

The contribution Alexey seems to be offering is how to modularly do things like plug in a truth-maintenance system. It's a pretty neat methodology. It's also much more amenable to visualization than traditional approaches. And it has a lot more in common with UML-based modeling than any of the examples mentioned in the thesis, IMHO.

The Art of the Propagator, by Alexey Radul and Gerald Jay Sussman

You have a shorter and more digestible version here:

http://dspace.mit.edu/bitstream/handle/1721.1/44215/MIT-CSAIL-TR-2009-002.pdf

Context Driven Scheme Objects

DCI is for programming the behavior of systems of interacting objects.

The frog example looks like the behavior of a single (frog) object, and it is a borderline DCI example if its behavior is triggered from its environment. Self-triggering, autonomous frog objects sound like Alan Kay's Vivarium, the ecology-in-a-computer project. This is outside the scope of DCI, because DCI currently only deals with sequential programming.

DCI documentation, examples, and downloads at the DCI home page
http://heim.ifi.uio.no/~trygver/themes/babyide/babyide-index.html

Pros and Cons?

@TrygveR:
what would you say are the biggest selling points of DCI?
And where do you see it falling short (if ever)?

this is not dci

I should probably have been more explicit and said that this is not DCI by definition. Here is what I personally found exciting about DCI:

- Separation of DOMAIN DATA and BEHAVIOR (using traits)
- Creating better graphical interfaces, using ROLES/TRAITS.
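The first point, separating domain data from use-case behavior, can be sketched like this (a Python approximation of trait/role injection, with invented names; real DCI implementations in Smalltalk or Scheme traits do this more cleanly): the Frog class holds only data, a role class holds one use case's behavior, and a context binds the two for the duration of one interaction.

```python
class Frog:
    """Plain domain data: what the frog *is*, with no use-case logic."""

    def __init__(self, name):
        self.name = name
        self.position = 0


class SwimmerRole:
    """Behavior for one use case: what the frog *does* in a watery
    context. It assumes its host has a 'position' attribute."""

    def swim(self, distance):
        self.position += distance


class PondContext:
    """The context binds the role to the data object for one
    interaction, then strips it off again."""

    def __init__(self, frog):
        self.frog = frog

    def run(self):
        frog = self.frog
        # Inject the role by temporarily retyping the object.
        frog.__class__ = type("FrogAsSwimmer", (Frog, SwimmerRole), {})
        try:
            frog.swim(3)  # role method, available only inside the context
        finally:
            frog.__class__ = Frog  # the bare data object remains


kermit = Frog("Kermit")
PondContext(kermit).run()
print(kermit.position)  # -> 3, but kermit no longer has a swim method
```

Outside the context, kermit is pure domain data again, which is the separation the bullet points are after: the role's behavior never lives on the data class.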

DCI

biggest selling points:
¤ User's mental model conforms to programmer's mental model conforms to code.
¤ Readable code.

And where do you see it fall short:
¤ DCI doesn't provide a convenient home to capture business rules.

Otherwise, see the DCI home page
http://heim.ifi.uio.no/~trygver/themes/babyide/babyide-index.html