Evolution of mainstream programming language paradigms

I have been thinking recently about what makes a language mainstream, and I think that mainstream programming language paradigms generally evolve according to the following pattern:

  1. Complexity Pain: the complexity of reasoning about some aspect of the program grows fastest as the program grows. At some point, the effort required to reason about a piece of code becomes unreasonable.
  2. Virtual Structure: the program is organized according to a virtual structure that makes reasoning about that aspect of the program easier.
  3. Explicit Structure: a language is developed that makes the virtual structure explicit by introducing additional meta-level constructs that organize the constructs of the previous levels into that structure.

In the paradigms of mainstream languages we can clearly see a chain in which each level organizes the previous one:

  1. [Conditional] jumps, memory assignments, and register arithmetic (assembly languages).
  2. Statements (FORTRAN).
  3. Blocks, procedures, and data types (C, Pascal).
  4. Classes (abstraction over procedures and data types) (C++).
  5. Components (abstraction over classes and interfaces that differentiates technical and intentional aspects) (Java, C#, VB.NET, VB+COM, JavaScript [only halfway there]).

As we can see, each generation abstracted over the previous level, and it was exactly a one-level abstraction. The functional languages so far do not fit into this picture.

LISP was not a one-step jump; it was a many-step jump. Many features of LISP have made it into the mainstream (the latest being garbage collection and lambda expressions), but some things remain to be integrated (such as internal DSLs). So LISP was a kind of peak experience for programming language paradigms.

To me, Haskell and ML did not offer an appropriate one-step abstraction over structured programming. ML modules were only a half-step abstraction over data and procedures, since data and functions were still different concepts. There was also a halfway abstraction over data and behavior in the form of lambda expressions, but a lambda is a single-method object, and GUI toolkits (the pressing complexity challenge at that time) required many-method objects for appropriate abstraction.

If we look at the sequence, the next step should be some abstraction over components, and I think it will be component systems. We can already see early component systems in library form (Spring, GUI toolkits, EJB3, plugins in Eclipse and IDEA, UML2, etc.).

I have written in more detail why I think so in this document. It also contains an initial list of requirements for the possible next-generation mainstream language. I think it will be a component system programming language (CSPL), so the document is named CSPL Challenge. I think typing and type-checking component systems would be quite an interesting research topic. And maybe there are already some results out there that fit the requirements exactly.

I agree that languages

I agree that languages should evolve to make code easier to reason about, both formally and informally.

However, abstraction does not necessarily make things easier to reason about. Often, as with mutable state hidden by interfaces in OOP, it makes it harder.

OOP allows you to build abstract machines with potentially infinite observable states. While this is great for representing a small number of concepts such as IO devices, it is terrible for anything else. A potentially infinite machine is the hardest thing there is to reason about, and so is a terrible representation for most things. But OOP asks you to model every concept in your program in this way. Whenever you consider that OO programs are giant cyclic graphs of abstract machines whose effective state is recursively combinatorial with its dependencies', you realize the trouble you are in. This is so for reasoning about things informally and especially formally.

So OOP is a good example of how languages can use abstraction to evolve in the wrong way. It's as though imperative abstractions have been futile attempts to neutralize the complexity inherent in stateful mutation by wrapping it up. I think we're finally coming to realize that no amount of wrapping is going to fix the underlying problem; we need a fundamentally different approach. Thus ML, Haskell, et al. And thus perhaps their lesser reliance on piling up indirection.

Finally, to my knowledge components are merely OOP-style objects being used correctly in tandem with appropriate reflection constructs. Components, unlike typical objects, DO represent truly abstract machines that can be legitimately discovered / replaced at run-time. So long as components are used only where appropriate, they may have some raison d'etre in computer science. However, I don't see how much additional linguistic support is needed to use components this way. I've never needed any of these remote service frameworks, so I have no useful opinion on your proposal. Fortunately, as a game programmer, I can tell you that components are the least of my worries. I primarily care about whether or not code I write and maintain every day can be reasoned about without excruciating pain and hopeless error. For this, mainstream languages and techniques have been a 'pit of failure', whereas FP languages and techniques have been a 'pit of success'.

OOP vs Functional

These are very nice questions.

1. IMHO OOP enabled reasoning about infinite event-driven programs: daemons, IO, kernel operations, and GUIs. Games should also fall nicely here. In contrast, the previous generation worked well for batch programming (compilers, batch processors), but it was very hard to reason about event-driven programs using the reasoning concepts of structured programming.

The problem was that structured programming is about top-down control-flow composition. In an event-driven application we have several logical processes that happen at the same time and are affected by external events (showing a dialog window is a separate logical process; showing the OK button in it is a sub-process). We cannot reason about the entire program using procedure-based decomposition, since the next code to execute is selected not by a top-level procedure but by an incoming event (unless the program is really simple, like an echo service).

An IO request is made by an application program that invokes the kernel, rather than happening at the will of some procedure in the kernel. And if two programs are running at the same time, the order of incoming IO requests is undefined.

For an event-driven application we need to decompose the process of handling events into sub-processes of event handling. The object is a process of event handling: it has state and it reacts to events (method invocations). It also delegates event handling to other objects/processes. Note that Simula came from the simulation domain (so there was the task of process decomposition and abstraction). Does this make objects look more logical to you?
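The "object as a process of event handling" view can be sketched in a few lines of Java (hypothetical names, not any particular toolkit's API): the dialog is a stateful process that reacts to events and delegates part of the handling to a sub-process, its OK button.

```java
// Hedged sketch: an object as an event-handling process.
// Dialog holds state, reacts to events (method invocations),
// and delegates some handling to a sub-process (Button).
class Button {
    private int clicks = 0;
    void onClick() { clicks++; }               // react to an event
    int clickCount() { return clicks; }
}

class Dialog {
    private final Button okButton = new Button();
    private boolean visible = false;

    void onShow() { visible = true; }          // an event changes process state
    void onOkPressed() { okButton.onClick(); } // delegate to the sub-process
    void onClose() { visible = false; }

    boolean isVisible() { return visible; }
    Button okButton() { return okButton; }
}
```

Note that which handler runs next is decided by the incoming event, not by a top-level procedure - exactly the property that defeats top-down decomposition.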

Also note that ML and Haskell do not help much there. They are still focused on decomposing evaluations rather than processes (which was the real challenge). They treat stateful processes as second-class citizens (by the way, games are full of stateful processes, aren't they?). So what they offered did not match the actual needs of programmers in commercial environments (event-driven applications). However, ML and Haskell offer quite refined tools for batch programming over complex domains (for example, compilers), so they are used extensively there. And I guess that ML was designed by an academic community that felt different acute pains than the commercial community. Different pains led to the selection of different next steps that solve different problems.

Of course, event-driven applications have some evaluation and processing tasks in them, and this is where ML would work fine; but it would not have addressed the primary challenge: process decomposition rather than evaluation decomposition. On the other hand, OOP still supports evaluation decomposition, and beginner OOP programmers quite often stick to evaluation decomposition of the program until they learn to do process decomposition, and even apply it over-zealously. Scala is a nice compromise: process decomposition for the program, and evaluation decomposition where it is suitable.

2. There was also a meta-transition between OOP and components, so components transcend and integrate objects.

  1. A component instance is a coherent system of objects.
  2. A component instance's logical lifetime differs from the lifetimes of the objects of which it is composed. For example, the lifetime of an EJB Session Bean can be greater than the lifetime of the application that created it. Also, different parts of a single component can live on different virtual machines (remote proxies).
  3. A component has many facets (usually a facet for the consumer, a facet for the container, and an internal facet [sometimes even more, as in the case of EJB2]).
  4. A component usually has a separate type that assigns different roles to classes, interfaces, and methods, and these roles affect the behavior of the component.
  5. Component classes implement the intentional aspects of functionality.
  6. The container implements the technical aspects of functionality according to the component definition.

It is possible to do component-based programming in OOP, but it is a separate level of meta-abstraction. A component is about objects rather than just a sum of objects, just as objects are about procedures and state rather than just procedures and state.
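A minimal sketch of the facet split in Java (hypothetical names; call counting merely stands in for real technical aspects like transactions, pooling, or remoting): the component class carries only the intentional aspect, while a container wraps it to supply a technical aspect behind the consumer facet.

```java
// Hedged sketch of the component/container split.
interface Greeter {                        // consumer facet
    String greet(String name);
}

class GreeterBean implements Greeter {     // intentional aspect only
    public String greet(String name) { return "Hello, " + name; }
}

class Container {                          // supplies a technical aspect
    private int calls = 0;
    Greeter wrap(Greeter target) {
        // the consumer still sees a Greeter; the container intercepts each call
        return name -> { calls++; return target.greet(name); };
    }
    int callCount() { return calls; }
}
```

The consumer reasons only about `Greeter` (the intentional aspect); what the container does per call can change without touching the component class.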

3. I have not heard of major event-driven games written in functional languages, but I'm not a game developer. Games are full of stateful event-driven processes that need to be coordinated. Of course, there are significant evaluation parts, but they are usually invoked from the event-driven part. So to reason about a game we need to reason about the event-driven part before reaching the evaluation part. My only guess about your comments on objects is that some game engine is used that takes over the event-driven process part of the game; in that case, game development might consist mostly of the evaluation parts. If you mean that the event-driven process part should also be written in a functional language, I'm interested to see how you propose to do it. AFAIR GUIs and such were a traditional weakness of purely functional approaches; they usually use some binding and stick to a procedural style in those places.

You might be interested in

You might be interested in my work - I am attempting to show that OOP is not a good option for any of the above in the face of an appropriate declarative functional language here and here. I've learned over the past half decade that OOP is not the right approach for simulations, or most of what it's used for in the commercial game world.

Of course, there is much work left to do as I've only started on it a couple months ago. Plus my thesis could turn out to be fundamentally flawed - but we will see.

Complexity Estimates

Regarding your post about the complexity curve, I would like to see some justification for your complexity estimates - quadratic, n log n, linear, log n.

Feature interaction isn't merely quadratic; one must consider consequences of three or more features interacting.

These estimates are borne

These estimates are borne entirely from intuition and feeling. I'm afraid I could offer no rigor to back it up, in terms of hard data or useful math. If one were to disagree, I could not argue my case further.

I would love to hear rigorous approximations for the benefits of these if anyone could propose them. I've not got the background to do it myself without some help.


log n

How can n features be implemented in less than O(n) time or LoC?

Honestly, I don't know what

Honestly, I don't know what the hell I'm talking about when it comes to graphs. But basically, I'm trying to take exponential complexity and reduce it to either n log n or n. How I came up with log n, I do not know. I attempted to fix the post to remove this blunder.


You are assuming that each feature is completely independent of the others, so that their costs are additive. If features overlap in functionality then there may be some reuse in their implementation; in this case sub-linear complexities would result. In the somewhat weird and contrived case that each feature costs half as much time as its predecessor, the total work would converge to a constant amount of time.


Feature interaction tends to increase complexity, not reduce it. And the expression problem serves as a classic example of how introducing a new feature might have a linear cost or a cost proportional to the existing body of code depending on how it projects into an extension structure.

Sub-linear complexities are possible, as you say, if you carefully constrain the set of features you add. But I would not be optimistic about it happening in practice. Nobody has the foresight to build an optimal extension structure.
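The expression problem mentioned above fits in a few lines of Java (names here are illustrative). With a class-per-variant design, adding a new variant is one self-contained class, roughly constant cost; adding a new operation (say, perimeter) would mean editing every existing variant, a cost proportional to the existing body of code.

```java
// One dimension of extension is cheap, the other is not.
interface Shape {
    double area();                       // adding a method here forces
}                                        // edits in every implementation

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

// A new feature: one self-contained class, no existing code touched.
class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}
```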

I think the phenomenon of

I think the phenomenon of ever-increasing (usually exponentially so) complexity is mostly true for established languages where nothing can be taken away, whereas sublinear and even negative complexity increases are usually only found in experimental languages that stumble upon a simplifying generality.

E.g. initially a language offers features A and B. Then later it incorporates features C and D, and each of A, B, C, and D are special cases. Then finally the designers discover feature E, which actually subsumes all of A, B, C, and D in its generality. If E's implementation can replace all of A, B, C and D at a lower total complexity, then the language and its implementation actually become simpler.

Growing Pains

We should address change complexity rather than some metric of size. Large software systems are still simple if they remain easy to reason about, extend, modify, compose, reuse, and maintain.

Absolute size does contribute to complexity, of course. More code often means more code to peruse, grok, modify. But the hope is that the paradigm we choose will minimize the impact of code size with respect to a change. Of course, whether that actually happens depends on how the change projects onto our language and model.

Given this understanding, we cannot generally say that feature unification reduces complexity. We must account for how the change impacts reasoning, or shifts complexity elsewhere. Feature E that subsumes A, B, C, and D will be more `powerful` than any of them, and may consequently be more difficult to reason about. That said, simplifying unifications do exist. They're even common at the application layer. Applications experience relatively predictable growing pains as they mature, e.g. introducing persistence, concurrency, extensibility, configurability, etc..

We won't ever measure a negative `change complexity` no matter what the change. But if you take the derivative - the change in change complexity - I suppose you could measure negatives there whenever you find a way to simplify the system.

You are reasoning about

You are reasoning about first and second derivatives of complexity and it seems that the original paper is more concerned with the actual complexity value.

If I delete a line of code and replace it with nothing, you seem to be arguing that the change itself introduces complexity, since we might need to reason about how the deletion affects the system as a whole. That might be a fair point, but in the limit you are arguing that the entire history of a codebase affects its complexity now. This is only true if humans have perfect memory, which they do not. At some point the horizon of the past fades and it does not matter how the system came to be the way it is now.

I believe the above reasoning is more in line with the paper's spirit than what you are arguing.

Complexity Pain

Change complexity is not the same as the first derivative of complexity. For example, if we measure complexity by lines of code (a dubious practice) and if we delete 250 lines and add 200 lines, then the `change in actual complexity value` would be -50 whereas the `change complexity` would be +450 (or more, to account for reading and reasoning about code).

I would not say that change complexity `adds` to the complexity of the system; rather, it is a function of the system's current size and structure and of the change we wish to effect, and perhaps of the executor.

Constantine doesn't discuss complexity directly. He discusses `complexity pain` and `structure`. Complexity pain is the complexity of reasoning about some aspect of the program as it grows - and I freely substitute `growth` with `change`. It seems to me that `change complexity` metrics are where we'd want to start, and what we'd want to optimize.

My take is that you simply

My take is that you simply cannot optimize one factor of software to the exclusion of all others if you ever want someone else to make use of it. And if it is good software, someone inevitably will.

I think your choice of optimizing change complexity (in isolation, it appears) is a poor proxy for valuable properties. If you only make something easy to change, what have you? Does it do anything useful? What properties does it have now? Is a promise of a property better than a property? You are only making it easier to move the software towards some other goal that you might have later. And if you don't at least have an inkling of what those goals are, I'd posit that you're probably better off not writing anything just yet.

Some software goes down exactly the path you describe. The designers add in all kinds of mechanisms to offer new dimensions of extensibility so that the "change complexity" of future changes will be small. But most such dimensions never get exercised. The price exacted is a large increase of the complexity of the system _now_ for the vague promise that making a change later _might_ be easier.

I have personally seen this happen so many times that I've come to the belief that most dimensions of extensibility are ill-conceived, and of the remaining ones, the patterns and mechanisms we employ to achieve them are inadequate.

We just don't seem to anticipate change very well in software. Instead, we should choose some actual goals, not illusory ones. For example: correctness, performance, size, complexity, scalability, testability--all of which can be measured in the present.

Language Properties

I agree with most of what you say. I am not suggesting that we should optimize for future changes to the neglect of other properties.

However, I do suggest that `change complexity` is the more relevant complexity property. It is often better to have a complex and large software system than a small but complicated one.

What I would like to see is a language design that makes it difficult to tie my hands or paint myself into a corner by accident. That way, as a regular application or API developer, I don't need to think hard about the problem - the path of least resistance will allow later extension or other change.

Do you take "complex" and

Do you take "complex" and "complicated" to mean different things? Otherwise, I don't agree that a large complex software system is better than a small complex one.

I think that evolution of a codebase is 80% tools (particularly refactoring tools, version control, and bug-tracking) and only 20% language design.

Complex vs. Complicated

Eric Berlow explains the difference in less than four minutes in this TED talk.

I think that evolution of a codebase is 80% tools (particularly refactoring tools, version control, and bug-tracking) and only 20% language design.

Sure. But language design strongly affects tool design, so this really isn't a separate concern.

Mmmm curry

So complex is a baguette, and complicated is some kind of almond fish curry. Got it.

I can't say I understand the

I can't say I understand the finer details of your argument, but the thing that seems to bring me clarity on the wider issue of software develop-ability is this mental model -

The simpler the system, the easier it is to change and extend.

Perhaps that's an over-generalization though, so please let me know if I'm overlooking something.

I agree

As long as we're just worried about time and not LoC, it's easy to go sub-linear to log(n). You write one feature on the first day, two features on the second day, four features on the third day, etc. That's why it's important to hire productive programmers. And if we're talking about lines of code per feature, that looks trickier at first, since you might expect at least one character per feature, but I don't see where we specified a bound on characters per line.

I guess what I was trying to

I guess what I was trying to get at is this:

We might suppose an average initial feature requires 1000 LoC, so we'll say that a unit of code is 1000 lines. Then an additional feature could leverage a bit of the existing code and might require only 500 lines, or one-half unit. And so on, until each additional feature takes only a few dozen lines. I think this is what they are achieving in the STEPS project.

So if each unit of code is 1000 lines, then it could be reduced to something like log n code units per feature.

Sorry that this wasn't clear, or if it involves some sloppy reasoning.

Makes sense

I was actually wondering whether that's what you had in mind when I asked about log(n) -- so, O(n log n) code/time for n features. My response to Andrew Moss was flip (I can be pedantic, too! ;)), and not directed at you.


As the directed recipient, I liked your flip comment ;) I was only pointing out the limits of the behaviour, not what we really see. In the spirit of further pedantry, I have to point out that in the comment you are responding to, if the additional effort per feature settled to a constant (such as 12 LoC), then asymptotically feature addition is O(n).

Arguments without character

So your earlier scenario can hold only so long as we eventually start reducing features to half a character, a fourth of a character, etc.

But ample margins

Reduced, as the width of nested LtU comments.


I did wonder if a sub-linear complexity would imply that, but could not decide either way. There are some strange classes such as log* that made me unsure.

I have looked at your

I have looked at your language, and its applicability seems to be quite limited. It looks like it could handle only evaluation tasks, and I do not see how it could be used to handle, in a modular way, a situation where many logical processes come and go (spawning and killing monsters, for example).

Also, it looks like writing something even as simple as Notepad is out of the scope of the language. Is that correct? Am I missing something obvious?

You're correct it does

You're correct that it does nothing useful yet - it isn't ready for use. I started implementing it only a couple of months ago. But as to what it is designed to do, you must have missed it.

DOL, for example, will be a declarative replacement for simulation and UI programming in general, and will exist as a language module for AML. Game simulations were the reason AML and DOL were created, so if you don't understand how it will work yet, please wait until I get something to demonstrate it. Will probably take another month or so.

AML allows you to implement any semantic model with a language module. There is no limit to AML given the appropriate language module. For example:

Want to make AML, a purely functional language, into an imperative language with mutation, sequential statements, and loops? Write a language module. Maybe 200 lines of F# code.

Want AML to provide a decision engine like prolog? Write a language module. Maybe 400 lines of F#.

Want AML to provide a shell scripting language with side-effects that are transactional? Write a language module. Maybe 300 lines.

Literally any semantic model you can require can be done with a language module for AML. How semantic models will be combined in AML remains to be seen, but it seems straightforward.

AML's true power will become clearer as progress is made - I've only just implemented the language module mechanism, and only now started on DOL. Some patience will be needed. The one thing I can agree with is that AML is NOT a general-purpose language. It is more like a 'lifting' language: you write a language module to 'lift' a domain / semantic model into AML, then implement your program's features in AML.

But I don't want to further hijack this thread. I'll make a dedicated LTU post for further discussion and dissection when the implementation is ready.


Your components sound

Your components sound like...objects; you are just comparing your specific kinds of components to specific kinds of objects (maybe Java objects?). Go back and review Szyperski; his book remains the definitive discussion of the topics you are getting into. But to be honest, your proposal seems like a rehash of what we went through ten years ago (events, components, aspects), where we failed to reach much of any kind of clarity. Your focus on EJB2 further sends shivers down my spine :)

How about other connectors beyond dumb events? We have continuous frp signals, or perhaps brookian behaviors (used to do the former, now working with the latter). There is a whole lot of simplification that could also occur with "components" to get us out of our mess...actually looking at the current mess doesn't really help so much.

Components vs objects

EJB2 is a very interesting technology to analyze, since it looks like an attempt to do the "right thing" from a component-based point of view, to the point of disregarding all other aspects.

If you mean the book "Component Software – Beyond Object-Oriented Programming", then I read some edition of it a long time ago. Maybe it would be worth rereading. AFAIR it is focused on the varieties of fish (components), and I'm more interested in the evolution of the pond's ecology.

I think that components are a generation jump from objects. Components allow a new way of reasoning about software: they separate intentional and technical aspects, so we can reason about them separately. Note that even plain Java objects have technical aspects (garbage collection, finalization, synchronization) and intentional aspects (variables and methods). The technology is obviously based on OOP (which allowed a new way of reasoning by decomposing processes), but the component != the object. IMHO this trick of separating technical and intentional aspects is somewhat harder in functional programming.

We can see that it is a generational jump from the fact that components facilitated the mass appearance of a new class of applications: distributed applications (client/server (servlets), web services, etc.). It is not yet clear whether component systems will facilitate the mass appearance of a new class of applications, but they are certainly improving programming for distributed applications even in their current emerging form.

I value the achievements of component-based programming very highly. Any approach inevitably leads to solutions that are so complex that they are impossible to deal with within that approach (a kind of Peter principle for software). And if we count by function points, the component approach allows creating more complex systems than plain OO or structured programming. I also think that we should look into the current mess, since this mess contains implicit structures that could be discovered and made explicit. One such pattern that I see is component systems. They are not yet explicit: for example, Spring does not allow many meaningful operations on them. It is actually like the time when early FORTRAN did not allow meaningful operations over subroutines.

I think that the different component connectors should be part of a generic component algebra, and a component algebra implies component types. That makes me think the types and algebra of component systems would be just a closure over the types and algebra of components, rather than a new generation of languages.

BTW I guess that frp is "functional reactive programming". I have not been able to google anything related to this discussion for "brookian behaviors". Do you have a link?

Brookian Behavior

Sean is referring to Rodney Brooks - in particular his subsumption architecture, which has received recent discussion here: Elephants Don't Play Chess. Sean has worked with the model in YinYang (which is inspired from Kodu, which used subsumption).

I don't favor FRP or these subsumption behaviors, and instead have developed RDP behaviors. But the distinctions are perhaps too subtle to grasp if you're still chained to an assumption of message passing (or `dumb events`). The essential bit is that there is a lot of other material to explore, and that much of it seems better suited than messaging for concurrency, inversion of control, and the abstraction of frameworks or your `components`.

Component vs. Framework

When you said `component` earlier, you offered a long list of examples, and those examples would reasonably be called `frameworks` from the perspective of state-of-the-art OOP. Frameworks, today, are ad-hoc messes of classes, relationships, boiler-plate code, software design patterns, inversion of control. An individual object is not a framework, but might be said to `participate` in a framework or represent an `interface` to a framework. Even in the role of an interface, objects would often have difficulty capturing the static type models often associated with frameworks (though dependent typing might be flexible enough for that role).

So what would it mean to abstract a framework? Well, I think it means we'd want the ability to pass a framework as a parameter, store it to a variable, develop generic code to compose frameworks, process collections of frameworks, and so on. I would also want to compose frameworks to create larger frameworks, in general at runtime - i.e. first-class frameworks. Basically, I want frameworks to look a lot like objects today. In a rough but practical sense - perhaps one embraced by Sean - the `framework` of this generation becomes the `object` of the next.
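A rough sketch of the "framework as a first-class value" idea in Java (hypothetical interface, not a real library): once a framework is just a value, passing it as a parameter and writing generic composition become ordinary code.

```java
// Hedged sketch: frameworks as first-class, composable values.
interface Framework {
    String handle(String request);
}

class Frameworks {
    // Generic composition: route the request through f, then g.
    static Framework compose(Framework f, Framework g) {
        return request -> g.handle(f.handle(request));
    }
}
```

Composing two `Framework` values at runtime yields another `Framework`, which is exactly the first-class behavior the paragraph asks for.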

Abstracting frameworks will introduce a number of cross-cutting challenges - especially with respect to static types, resource management, and inversion of control. As we include `composition` of frameworks among our goals, the IoC issue will naturally introduce concurrency (since we'll be combining two or more frameworks that each want control, or concurrently commanding two or more frameworks to effect our influence).

There are many partial answers to these challenges. Static typing can be achieved through first-class constructors with abstract types (i.e. existential typing, objects as modules). Concurrency, consistency, and IoC challenges seem well met by monotonic or temporal models (reactive, event calculus, temporal logic, temporal concurrent constraint, etc.) - and, of those, reactive with bi-directional communication (for influence and observation) seems closest to modern OOP.

The resource management challenge can be simplified if we can manage some variation of RAII suitable for reactive observers across frameworks. It is pursuing this along with multi-cast optimizations that led me to develop RDP.

Szyperski's component book

Szyperski's component book is the best we have so far on this topic, even if I don't agree with it completely. You seem to be heading in the direction of this book, so definitely do a reread since you might not have had the same context on your initial reading.

Component is as vague a term as object; I don't see how the distinction helps us move forward in understanding. What you call a component, I can call an object, as my definition of object extends way beyond Java objects. But we are just playing word games at this point. It would help if you could rely less on the word "component" to explain your idea and instead look at the underlying problems/solutions (e.g., explicit external connections between software entities).

We studied/explored external linking languages in the early 00's (you can look at the related work section of my Jiazzi paper for a bibliography). It seems like you are proposing this again; nothing wrong with that, but you should have some idea of what went wrong with the assembly approach. Either the assembly language is too limited and you can't really express much, or it is too powerful and becomes a programming language in its own right... in which case, why do you really need two powerful programming languages in one project?

The other problem: we never found a way to make components meaningfully plug-compatible; assembly always entails a specific component rather than one from a class of components. In this case, assembly is just composition, and it is more fruitful to just make composition in our programming languages better. This is why I shifted my PhD to FRP signals to connect "objects" together, and why I'm now looking at Brooks' subsumption architecture to further increase modularity in reactive systems.

tl;dr The component lens has been a trap in the past that has masked deeper problems and more fruitful solutions. The problems you are attacking are very real, but you might be limiting yourself by promoting vague components as the solution.

Components and Functional

You say that Haskell focuses on decomposing `evaluations`, but I'd note that's not really idiomatic Haskell. Rather, most Haskell code is all about abstracting processes, and composing abstract processes. Those `processes` are modeled using monads or arrows or ADTs of many varieties - for reactive programming, for GPU programming, for imperative programming.
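The Haskell idiom being described — describe a process as data, compose descriptions, interpret later — can be sketched in Python (all names are illustrative; in Haskell this role is played by monads, arrows, or ADTs):

```python
# Rough sketch of "abstracting processes": a process is described as a data
# structure (an ADT), descriptions are composed, and an interpreter runs them.
# Different interpreters (logging, reactive, GPU) could reuse the same data.

from dataclasses import dataclass

@dataclass
class Emit:              # one constructor of the process ADT
    value: str

@dataclass
class Seq:               # sequential composition of two sub-processes
    first: object
    second: object

def interpret(process, out):
    """One possible interpreter: collect emitted values in order."""
    if isinstance(process, Emit):
        out.append(process.value)
    elif isinstance(process, Seq):
        interpret(process.first, out)
        interpret(process.second, out)

# Composing abstract processes before any of them runs:
program = Seq(Emit("init"), Seq(Emit("step"), Emit("done")))
trace = []
interpret(program, trace)
print(trace)  # ['init', 'step', 'done']
```

The key property is that `program` is an inert value until interpreted, which is what makes the process abstract and composable.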

In a sense, pure functional already achieves what you say you need: it is readily capable of abstracting many components. Dependent typing could go much further than Haskell - one clear weakness of Haskell is that it is difficult to abstract complex type-safe components without extensions like GADTs and a lot of hackery. But Haskell is not the whole of pure functional programming.

What we actually need is integration of independently developed components, which each have their own model. We also want a variety of compositional properties to simplify reasoning about the resulting system.

For Haskell, the de-facto point of composition between components is imperative IO. This is unfortunate because IO is difficult to reason about at the best of times and grows worse as we introduce concurrency, partial failure, reactions, distrust, distribution, resource accounting, etc..

I think pure functional will remain part of any good answer for abstracting components. But we need a more effective substrate than imperative IO. Such a model should do two things:

  1. Make it easy to integrate independently developed components.
  2. Make it easy to develop readily integrated components.

The first point, taken to its extreme, suggests that the toplevel should be some sort of orchestration language. The second point means ensuring the integration aspect is considered early and often in the development of each component, perhaps via the module system.

My work on RDP has been in pursuit of such a suitable programming substrate.

Let's consider the listed

Let's consider the listed examples.

The linked imperative examples are all related to the batch execution model, where we have a set of inputs and need to produce a set of outputs. The capability to do so is inherited from structured programming by both OOP and FP. I would even argue that FP is the ultimate development of structured programming: all features of structured programming are refined to their essence while keeping the same rules for reasoning about the program. Even the pattern of procedure pointer + void pointer is refined into the lambda expression.
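The refinement mentioned last can be made concrete. In C one passes a function pointer plus an opaque `void*` context; a closure fuses the two into one value (sketched in Python, names illustrative):

```python
# The C pattern "procedure pointer + void* context", refined as a closure:
# the captured environment plays the role of the context struct, and the
# inner function plays the role of the procedure pointer.

def make_counter(start):
    state = {"count": start}        # the role of the void* context
    def step(delta):                # the role of the procedure pointer
        state["count"] += delta
        return state["count"]
    return step                     # one value instead of a (fn, ctx) pair

counter = make_counter(10)
print(counter(1))   # 11
print(counter(5))   # 16
```

Note that reasoning stays the same as in structured programming: `counter` behaves like a procedure with private state, with no new decomposition dimension introduced.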

GPU programming looks somewhat unclear to me, since I'm not familiar with OpenGL. But the samples look like normal imperative programming to me. And the samples are too small to evaluate decomposition and ease of reasoning. For example, Java3D looks much more declarative and modular to me.

I have only basic knowledge of FRP concepts, and I have not seen any complex examples implemented using it. Some notepad example would be nice (loading/saving files, indicating in the title bar that the file has changed). The examples that I have seen were extremely incomplete, and this incompleteness seems to be caused by the fact that FRP is an incomplete model for cooperating processes. FRP seems to be focused only on processing event streams. Creating and modifying components seems to be out of scope of FRP. If I'm wrong, I would like to see a reference with examples.

Possibly the problem is with the name. Real GUI programming is not reactive; it is proactive. Before reacting to an event, a GUI program provides a place where the user could do something and generate an event as a result.

So neither of the examples demonstrates process-based decomposition. They all demonstrate the work-based (evaluation-based) decomposition that we have seen in structured programming. And I do not see how the mentioned approaches help to reason about the program in terms of interacting processes.

I have not seen a GUI program written in a functional language, in a functional style, that was easy to understand. Real GUI programs written in ML look like a real mess, and that mess feels the same as the mess of a typical GUI program written in C. If you have a good sample of a reasonably big functional GUI program, I would like to see it.

Also, functional programs look like closed gardens. I know of no functional program that allows a reasonably rich system of dynamically loaded plugins written in the same language as the original program. This is an interesting topic that is also likely related to program composition and reasoning rules.

BTW, if we look carefully at standard Prolog, it also has work-based composition rules, and it suffers from the same problems when things get to GUI (sometimes even worse, due to peculiar interactions with backtracking). Constraint programming somewhat changes the way the program is reasoned about, but these changes mostly affect local rather than global reasoning about the program.

I do not say that functional programming is useless. Functional programming is very useful for tasks that can be decomposed according to the work to be done or the evaluation order. And any sufficiently big task has a lot of such subtasks. However, it looks like classical FP consistently fails in the areas where we need to decompose the program according to interacting and dynamic processes. And in commercial settings there are a lot of tasks that require process-based decomposition. This is possibly a reason why the industry does not see functional languages as addressing its needs.

Open System Dynamic Processes

It seems you're conflating several concepts - process (may remember or communicate), dynamic (may change at runtime), open system (decentralized authority and maintenance).

I agree that FRP is an `incomplete` model. It can model processes. It can even model dynamic processes (via a switching or eval behavior, where a signal or event describes another behavior). But it isn't very suitable for modeling or interacting with an open system.

That isn't a property of FP, though. It's a property of FRP.

Many process models are suitable for open systems. The vat model I described to you six months ago is one, also used in E. Cloud Haskell is another, with a more Erlang style to it. My RDP model is close to FRP but much more suitable for open systems. Actors, threads, and event loops have also been modeled with functions.

Now, you might complain that these aren't functions. And, indeed, they aren't. But the modeling and composition of abstract processes is still idiomatic functional programming. Which is my point.

Big FP GUI programs tend to look like big imperative ones. This isn't a necessary property. But imperative composition (via monads) is the easiest way to integrate with native GUI libs (or adapters like Wx) and is the most common substrate for such integration.

GHC Haskell does support runtime plugins, and runtime compilation if you include the right libraries. They aren't features most people need or want, though.

The physics of reactive programming

Just a reminder that we've discussed something similar to this before...

Open Systems is the challenge, Processes is a solution

This discussion gets really interesting, since I now better understand generation transitions. And there were a few surprising insights for me about academia vs. industry collisions on programming languages.

I do not know how dynamic is related; I just noticed it when I was writing the answer. The component-based generation seems to facilitate dynamic composition of systems, and it heavily relies on integrated OOP to do so. So the dynamic challenge might be part of the challenge for the post-OOP generation.

Note that the open system is exactly the challenge that led to process-based decomposition. A GUI application is an open system that interacts with input devices and output devices. An operating system kernel is also an open system that interacts with hardware and applications. In both cases the top-level application is modeled as a process, and internally the application is decomposed into sub-processes. The OOP process-based decomposition is either explicit, using an OOP language, or implicit (as in the case of UI toolkits written in C).

OOP languages still support work-based decomposition, but they also add a new dimension: process. The work units are not just composed into a tree; they are also marked with the process to which they belong. This additional dimension is what makes OOP a new generation of structured programming. Component-based programming also adds a new dimension: intentional/technical.

I do not see how classical functional programming adds a new decomposition dimension to structured programming. The question is: why do big FP GUIs not find it economical to introduce their own abstraction of the entire GUI and use it? Why is imperative programming the simplest way to do so? IMHO this is because UI toolkits use the natural decomposition dimensions for the task. And if we add these decomposition dimensions to FP, we will get OOP in disguise (FP with objects).

Process-based decomposition assumes that a process has many entry points over common state. The collection of these entry points needs to be passed around, and that leads us to objects. And for compile-time verification we need to type objects and provide templates for their creation. This leads to interfaces and classes (which are specialized patterns of existential type usage). So when RDP-based systems complex enough to require decomposition are created (I guess even a notepad would be sufficient), the same pressures will lead to similar solutions.
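The contrast can be sketched in Python (names illustrative): a closure gives one entry point over captured state, while a process with several entry points over common state is, in effect, an object.

```python
# A closure: a single entry point over hidden state.
def make_sender():
    sent = {"n": 0}
    def send(msg):
        sent["n"] += 1
        return sent["n"]
    return send

# Many entry points over common state: the collection of entry points that
# must be passed around together is exactly what a class packages and types.
class Connection:
    def __init__(self):
        self._open = False
        self._sent = 0

    def open(self):
        self._open = True

    def send(self, msg):
        if not self._open:
            raise RuntimeError("connection closed")
        self._sent += 1

    def stats(self):
        return self._sent

conn = Connection()   # the class is the "template for creation"
conn.open()
conn.send("hello")
print(conn.stats())   # 1
```

Typing `Connection` as an interface is then the natural next step once callers should depend only on the entry points, not the representation.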

Decomposition rules are not something arbitrarily imposed on the program; they are actually the path of least resistance given the complexity pressures. New decomposition rules appear when we become aware of new complexity pressures. And we can feel these new complexity pressures only when we make programs big enough for them to be painful.

So if we look at the decomposition dimensions chain, we will see the following:

  1. Statement/step-oriented programming. Decomposition dimension: steps (statements).
  2. Structured programming. Decomposition dimensions: steps (statements) + work (blocks/procedures).
  3. OOP. Decomposition dimensions: steps (statements) + work (blocks/procedures) + processes (objects).
  4. Component-based programming: steps (statements) + work (blocks/procedures) + processes (objects) + intentional/technical (components/containers).

So each new generation adds exactly one dimension. Component systems do not look like a new dimension, unless I'm missing something. They look more like a closure over the intentional/technical dimension.

And I currently see classical functional programming as transitional between structured programming and OOP. There is a process-like construct in the form of closures, but a closure provides only a single facet of a multi-faceted construct. Possibly I need to dwell on this idea a bit more.

OOP vs. Functional

For an event-driven application we need to decompose the process of handling events into sub-processes of event handling. The object is a process of event handling: it has state and it reacts to events (method invocations). It also delegates event handling to other objects/processes. Note that Simula came from the simulation domain (so there was the task of process decomposition and abstraction). Does this make objects look more logical to you?
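A tiny sketch of this view in Python (names illustrative): an object as an event-handling process with state, reacting to events and delegating to sub-processes.

```python
# Object as event-handling process: Dialog is the parent process; it
# delegates "click" handling to a Button sub-process, which in turn
# delegates back to the dialog's own handler.

class Button:
    def __init__(self, on_click):
        self.on_click = on_click     # delegation target

    def handle(self, event):
        if event == "click":
            return self.on_click()   # delegate to another object/process

class Dialog:
    def __init__(self):
        self.closed = False          # the process's state
        self.ok = Button(self.close)

    def close(self):
        self.closed = True
        return "closed"

dialog = Dialog()
print(dialog.ok.handle("click"))     # closed
print(dialog.closed)                 # True
```

Each method invocation is an event, and each object carries exactly the state its sub-process needs — the decomposition is by process, not by work.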

Functional programming languages can model event-driven processes as coinductive systems. Bart Jacobs has a book draft discussing this topic and exploring connections to such fields as modal and temporal logic. But you may be right about the functional paradigm being excessively linked to algebraic/evaluation reasoning, as opposed to modeling processes using coalgebraic tools.

I have only have a first

I have only had a first look at the book. Note that the book models state as one big lump. This might be OK for formal reasoning, but it implies that we do not have boundaries for the proofs. We need to consider the entire state in proofs, since we do not have state isolation boundaries.

And the process decomposition model looks incomplete. I have not seen an acknowledgement of the fact that processes have different lifetimes, can be created/destroyed, affect each other in some way, or fail. The Actor model is somewhat better in this respect, since it at least models actor state changes in a more modular way, so the state of all processes is decomposed rather than being one big lump.

Objections to acting recursively

Process decomposition is better handled by Actors than by the objects that most people and languages understand, but Alan Kay meant the former anyway, and Joe Armstrong came to realize that, too. Bob Barton spoke of recursive design, and Gilad Bracha demonstrated with Newspeak that objects could provide recursive modularity. The question that haunts me is: when will we first discover, in the mainstream, what Barton, Kay and others already knew?

IMHO, if we look from

IMHO, if we look from a logical point of view, Actors are a subset of the OOP model. An Actor is an active object that accepts only asynchronous events. It is nice to have a formalization of such an interaction style, but it is an OOP style.

Also, it would be very useful to distinguish the big actor model from the subactor model (which almost no one does).

  1. In the big actor model, each actor has its own inbox (Erlang).
  2. In the subactor model, many objects share an inbox and collectively implement the behavior of a single big actor (E, GUI event loops).
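The distinction can be sketched in Python (illustrative names; real Erlang/E semantics are richer):

```python
from collections import deque

class BigActor:
    """Big actor model: each actor owns its inbox (Erlang style)."""
    def __init__(self, handler):
        self.inbox = deque()
        self.handler = handler

    def send(self, msg):
        self.inbox.append(msg)

    def run(self):
        out = []
        while self.inbox:
            out.append(self.handler(self.inbox.popleft()))
        return out

class EventLoop:
    """Subactor model: many objects share one inbox (GUI loop / E-style vat)
    and collectively behave as a single big actor."""
    def __init__(self):
        self.inbox = deque()

    def post(self, subactor, msg):
        self.inbox.append((subactor, msg))

    def run(self):
        out = []
        while self.inbox:
            subactor, msg = self.inbox.popleft()
            out.append(subactor(msg))
        return out

big = BigActor(lambda m: f"big:{m}")
big.send("ping")
loop = EventLoop()
loop.post(lambda m: f"a:{m}", 1)     # two subactors sharing one inbox
loop.post(lambda m: f"b:{m}", 2)
print(big.run())    # ['big:ping']
print(loop.run())   # ['a:1', 'b:2']
```

Note that from the outside the event loop is indistinguishable from one big actor: all messages are serialized through the shared inbox.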

The subactor model is possibly the most widely deployed, since GUI event loops are used everywhere. Note that a GUI event loop, with all the components that live in it, behaves as an actor. It is even used to synchronize access to resources (for example, the VFS API in IntelliJ IDEA).

The subactor model is quite trivially implemented with OOP: GUI component libraries, the E programming language, AsyncScala, etc.

The big actor model is more complex from the point of view of implementing components, since its internal decomposition rules are less clear.

There is no decomposition within an actor, only delegation to other actors.

It is easy to implement actors and events in the big actor model. But it is more difficult to do explicit decomposition of asynchronous operations, since only delegation-style decomposition is available. Promises allow cheap decomposition of asynchronous operations, and they are more usable in the subactor model than in the big actor model (see E and AsyncScala; also, GUI programmers have done a lot of useful applications with bearable pain).
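The point about promises can be illustrated with a minimal promise in Python (a toy sketch, not E's or AsyncScala's actual API): each `then` step reads like a line of a top-down procedure, even though resolution is event-driven.

```python
# Minimal promise sketch: promises restore top-down, procedure-like
# decomposition of an asynchronous operation, where raw callbacks would
# force delegation-style decomposition.

class Promise:
    def __init__(self):
        self._value = None
        self._callbacks = []

    def then(self, fn):
        nxt = Promise()
        def run(value):
            nxt.resolve(fn(value))
        if self._value is not None:
            run(self._value)
        else:
            self._callbacks.append(run)
        return nxt

    def resolve(self, value):
        self._value = value
        for cb in self._callbacks:
            cb(value)

# Top-down decomposition of one async operation: trim, then normalize,
# then report — reads like sequential procedural code.
result = []
p = Promise()
p.then(lambda raw: raw.strip()) \
 .then(lambda s: s.upper()) \
 .then(result.append)
p.resolve("  done  ")
print(result)  # ['DONE']
```

Without `then`-chaining, each step would have to be a separate handler wired up by delegation, obscuring the sequential intent.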

So after implementing AsyncScala, I do not believe that Actors are a generation jump from OOP. They follow OOP ideology and they follow OOP decomposition rules (both subactors and big actors). Promises also do not add much to the decomposition rules, since they simply restore the top-down procedural decomposition (but over event streams) that was absorbed rather than denied by OOP. Without promises, actor-based OOP is simply incomplete.

Previously on LtU: Tim Sweeney

Relevant, especially with respect to games:


I have seen this

I have seen this presentation before. The presentation is very nice, and I actually like many of the ideas. However, these ideas are more about enhancing the concepts of the existing generation (with the possible exception of dependent types, which require a second thought, since I have used them only in Coq), rather than about a generation jump.

IMHO the generation jump should feature the following:

  1. Provide a new way to decompose the program that transcends and integrates the previous way.
  2. Simplify reasoning about program behavior along the decomposition rules.

What is described by Tim Sweeney does not seem to match these criteria. The described features look more like language enhancements rather than a generation jump.

I shared many of Tim's

I shared many of Tim's concerns before I read these slides, and AML is my attempt to address several of them. Ultimately, I want to go even further than what Tim is looking for in simulations. With a purely functional declarative object language (DOL), it seems we should be able not only to arbitrarily parallelize simulation code, but also to place much of it on the GPU automatically. So AML and DOL are ultimately very ambitious, though the first release won't have any automatic parallelization, since I just need to ship the smallest useful thing as soon as possible. That, and the first release may be too slow to be practical for big simulations :)

the next step

the next step should be some abstraction over components, and I think it would be component systems. We can already see the early component systems as libraries (Spring, GUI toolkits, EJB3, plugins in Eclipse and IDEA, UML2, etc.).

As far as GUI toolkits go, I'm certain that the NeXTSTEP has already been taken. ;)

Anyhow, you're effectively suggesting that the next step is abstraction over frameworks. This introduces many challenges - inversion of control, composition of frameworks, and eventually the same extensibility and modularity problems we have with objects. I've always felt that reactive programming will be essential for abstracting frameworks. Reactive systems offer a formal notion for inversion of control, and would also support sane composition and interaction of multiple frameworks.

The goal of abstracting frameworks has contributed greatly to my own efforts with RDP, but I'll grant I'm also aiming for a many-step `jump` that also covers issues such as concurrency, distribution, and resilience.

halfway abstraction of data and behavior in the form of lambda expressions, but it was a single-method object, but GUI toolkits (current complexity challenge at that time) required many-method objects for appropriate abstraction.

We have a notion of collections that can be abstracted by functions (e.g. records), and we can therefore have a collection of what you call `single-method objects`. Why insist on conflating the two otherwise orthogonal abstractions? Keeping them separate certainly has many advantages: recombination, attenuation, view transforms (lenses). I favor the separation to better support object capability model patterns and security principles.
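A small sketch of this separation in Python (names illustrative): a plain record of single-method closures, rather than a class, with attenuation coming for free by building a smaller record.

```python
# Record of single-method objects sharing state, instead of a class.
# Attenuation: a restricted view is just a smaller record — no subclassing
# or wrapper objects needed.

def make_counter():
    state = {"n": 0}
    def increment():
        state["n"] += 1
    def read():
        return state["n"]
    # the record: a collection of independent single-method objects
    return {"increment": increment, "read": read}

counter = make_counter()
counter["increment"]()
counter["increment"]()
print(counter["read"]())   # 2

# Attenuated, read-only capability handed to less-trusted code:
readonly = {"read": counter["read"]}
print(readonly["read"]())  # 2
```

The object-capability appeal is visible here: whoever holds only `readonly` can observe the counter but cannot mutate it, and no access-control machinery beyond record construction was required.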