DRAKON-Erlang: Visual Functional Programming

Functional programming is based on useful and practical ideas. Unfortunately, there is a problem with how functional programs look. They are often hard to read.

Improving the visual appearance of functional programs can make them easier to understand. One way to do this is to combine an existing functional language with a graphic notation.

DRAKON-Erlang is an attempt to do so. This hybrid language can be described as Erlang that uses DRAKON for flow control.

Each of these two technologies has been successful on its own.

DRAKON was developed within the Russian space program. It was used in Buran, Sea Launch and other space projects. The strong point of DRAKON is that it makes algorithms easier to understand by relying on ergonomics.

Erlang is one of the most widely used functional languages. It started in telecom and later spread to many other industries. Erlang is well known for its simplicity, reliability, and built-in support for concurrent programming.

Read more: http://drakon-editor.sourceforge.net/drakon-erlang/intro.html


Your visual formula for OR

Your visual formula for OR seems to have the labels swapped http://drakon-editor.sourceforge.net/drakon-erlang/bool.html ?

Your visual formula for OR

Nice catch! Thanks. Fixed that.


Assuming it is a good idea, there are things about it that could perhaps be improved, of course :-) For example, the salad/meat/potatoes example makes me wish the left-to-rightness was enforced. Is DRAKON alive enough that people are pushing on with the idea, to try to refine it along such lines?

1. Enforcing

1. Enforcing left-to-rightness is a good idea, thanks!
2. DRAKON is alive enough and gaining momentum. Any ideas on improvement are welcome!

I find the mix of implicit

I find the mix of implicit data flow with visual control flow awkward, and perhaps even backwards. (The `first_to_upper` example has such data flow via the FirstUpper and RestLower variables.) I believe the visual emphasis should be on the data and data flow (cf. learnable programming).

Also, it isn't entirely clear how such implicit data flow interacts with your switching constructs.

Speaking of the first_to_upper example: code that implicitly lower-cases the rest of the string, without clear documentation or naming, is perhaps not the best way to advertise "clarity above all".
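For reference, here is the presumed semantics of that example as a plain function (a Python sketch, not the original Erlang/DRAKON code; the function name and variable names follow the article's first_to_upper, FirstUpper, and RestLower):

```python
def first_to_upper(s: str) -> str:
    """Presumed behavior of the first_to_upper diagram:
    upper-case the first character and lower-case the rest."""
    if not s:
        return s
    first_upper = s[0].upper()   # the FirstUpper variable in the diagram
    rest_lower = s[1:].lower()   # the RestLower variable in the diagram
    return first_upper + rest_lower
```

Written this way, the implicit lower-casing of the rest of the string is at least visible in the code.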

There are at least two major

There are at least two major dimensions to an algorithm:
1. Control flow.
2. Data flow.
They are tricky to show at the same time. But both are important.
I'd like to defend the rights of flow control. Flow control is the decisions made by the algorithm. Decisions are critical.
And DRAKON is excellent at expressing decisions.
At the same time, I also want to see the data flow in a graphical way.
Some researchers (e.g. Dmitry Dagaev) suggest combining DRAKON with FBD.

Control flow is an artifact

Control flow is an artifact of imperative expression of algorithms. While the ability to express decisions is essential for solving many problems, there are ways to express decisions in data (e.g. sum types). A highly declarative language may only need data flow.

It is not clear to me that you could combine DRAKON with another diagram model without violating the invariants proposed for clarity, e.g. you mentioned "no crossing lines".

Control flow is basically

Control flow is basically any sort of time-based decision. Control flow exists because programs execute over time...the question is then whether control flow can be adequately abstracted away. You can always encode control flow with data flow (just add "control lines"), but this is not necessarily cleaner or better. There are some abstractions that can encode various operations that would otherwise require explicit looping (mapping, folding), but these are not general. In a language like Haskell, you basically wind up re-introducing explicit control flow via monads where it is needed (though it is obviously discouraged).
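The mapping and folding abstractions mentioned here can be sketched in Python: the same computation written with an explicit loop and with combinators that hide the iteration.

```python
from functools import reduce

xs = [1, 2, 3, 4]

# Explicit control flow: a loop with mutable accumulator state.
total = 0
for x in xs:
    total += x * x

# The same computation with map and fold (reduce);
# the iteration is abstracted away by the combinators.
total_folded = reduce(lambda acc, x: acc + x,
                      map(lambda x: x * x, xs),
                      0)

assert total == total_folded == 30
```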

The "no crossing lines" constraint is interesting: it imposes an upper bound on program complexity based on aesthetics. I would be very interested in hearing about what those programs look like, and if you have to subvert your principles to express the programs that programmers want to write.

It is very important to note

It is very important to note that there is no upper bound on program complexity with DRAKON.
You can express ANY algorithm without line crossings.
See the silhouette construct.

I'm not referring to Turing

I'm not referring to Turing completeness, but there must be an upper bound on what can be reasonably expressed before the program becomes too hard to write/maintain, as there is basically in any programming language. In other words, how does your language scale with respect to complex programs that programmers might want to write? Could you (or have you) feasibly write the Drakon compiler/run-time/UI in Drakon itself?

Visual/graphical languages are more susceptible to low feasible expressiveness ceilings given their spatial syntax/connectors (the problem of literal spaghetti code), and (often) lack of very good abstraction capabilities; especially if the language lacks names and scopes, though DRAKON seems OK here. So you have this restriction to prevent spaghetti, but the consequence of this is not really obvious.

DRAKON is not "my" language.

DRAKON is not "my" language. It was developed within the Russian space program.
So it withstood the test of time and complexity.
And I only wrote an editor in DRAKON-Tcl. The editor compiles itself, by the way.

Does the Russian space

Does the Russian space program still use DRAKON? What do you mean by "it withstood the test of time"?

Has DRAKON been used for many projects outside of space programs? What do you mean by "it withstood the test of complexity"?

DRAKON is still used in

DRAKON is still used in space projects. For example, here: http://npcap.ru/en/.

DRAKON gained recognition in many other fields:
- ERPs.
- Microcontroller programming.
- Nuclear power plant control systems.

DRAKON is also used outside programming:
- For specification of medical procedures.
- For explaining laws and tax regulations.

Unfortunately, most of the available information is only in Russian.

As far as I know, the author of the language has plans to release a book in English.

About "the test of time". The first version of DRAKON appeared in the mid 80s.
About "the test of complexity". The first application of DRAKON was the BURAN spacecraft. Space shuttles are very complex systems.

An interesting note. All stages of flight of BURAN were fully automatic. Those included an aircraft-style landing. For a spacecraft of that size, it required complex avionics and AI. NASA was also able to do that, but some 18 years later.

Control flow is not a

Control flow is not a consequence of programs executing over time. Rather, the converse is true. Programs executing over implicit logical time is a consequence of control flow abstraction. Control-flow expresses a stateful model: you have implicit state (e.g. a program counter, maybe a stack) indicating which part of the program is in control. The change of that implicit state introduces time in control flow abstractions.

There are many ways to express execution over time that do not need control flow; cf. Lustre, Dedalus.

While replacing control flow with dataflow isn't "necessarily" better (one could intentionally do worse!), dataflow has the potential to solve many problems with considerably less state, resulting in simpler and safer programs with nicer concurrency and partial failure properties. Further, when developers decide control state is necessary (e.g. to model a business workflow), necessary state can always be introduced and controlled explicitly.

In my experience, declarative models tend to be very expressive and flexible for explicit control flow. The state is closer to essential. Less effort is expended working around the dominant control flow model. (Control flow is about control within a program, independent of the problem domain.)

Alas, dataflow expressions of control flow tend not to be as performant on modern hardware, and quite difficult to optimize. Many programmers never see past that... not even when they find themselves explicitly reproducing entire control flow models in order to achieve persistence or distribution (at which point declarative expression would have been equivalent or better).

In a language like Haskell, you basically wind up re-introducing explicit control flow via monads where it is needed (though it is obviously discouraged).

Similarly, tail call optimization can be understood as a means to implicitly support control flow abstractions. I do not consider Haskell a highly declarative language... at least not as it is used in practice today. Haskell does embed several FRP models, and my own RDP, which are considerably more declarative.

Our definition of control

Our definitions of control flow are different, then. In my case, control flow exists somewhere even if the programmer is not explicitly exposed to it: someone executes the rules or gates the electrons. "Make me a cake" is a fairly declarative instruction for a task that involves a recipe with fairly involved control flow.

I am tired of seeing control flow transcoded with declarative abstractions and people calling the result still declarative because they have defined their own flip-flops and instruction counter. Enable lines in a data flow language are a bit better, but still deceptive: if you have to work with too many of those, you would have been better off in a language with stronger control flow abstractions.

Make me a lie

Taken one fine-grained instruction at a time, an imperative program may appear declarative. E.g. if I say `x := x + 1`, I haven't specified how it happens. I have only commanded what happens. By your definition, is this therefore a "declarative" instruction? I think it would be. But programming is not about individual instructions. (If it were, we could trivially write every program.) Programming involves composition, decomposition, arrangement.

Where does "Make me a cake" fit into a larger program? How is it decomposed into smaller subprograms? You suggest it would decompose into an imperative recipe, but that seems to be an assumption you've made rather than an essential property.

Similarly, "declarative" and "imperative" programming styles must not be about individual instructions. They are also about composition, decomposition, and arrangement of expressions.

When I scrutinized the popular "what not how" notion of declarative, I found it ridiculously naive and inconsistent. Declarative programming, if it is to exist at all, MUST be about structure of expression - structure, not content. Declare the what. Declare the how. Declare the why. But what does it mean, structurally, to declare? My reasoning led me to focus on properties of ideal declarations: idempotence, commutativity, associativity, and continuity.
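As a toy illustration of those properties (not the author's formalism), declarations can be modeled as a set of facts, where union is idempotent, commutative, and associative, so order and repetition don't matter:

```python
# Declarations modeled as sets of facts; set union exhibits
# the properties of ideal declarations named above.
a = {"lamp is on"}
b = {"door is open"}
c = {"cat is fed"}

assert a | a == a                  # idempotent: declaring twice changes nothing
assert a | b == b | a              # commutative: declaration order is irrelevant
assert (a | b) | c == a | (b | c)  # associative: grouping is irrelevant
```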

In attempting to refine informal concepts to something more precise, of course I'll run against inertia of popular definitions (no matter how poorly conceived). Descriptors such as "object oriented" or "declarative" have positive connotations and marketing power. It seems everyone wants a piece of that. I'm a member of the everyone club.

I ask that you please reconsider use of the inconsistent "what not how" definition of declarative. Use my definition. Or, if you wish, develop another consistent meaning, and share it. Or just avoid the word.

That aside, I agree with you that some people misuse "declarative" to describe scenarios where they've built their own flip-flops or program counter and are now developing atop that. Or, more commonly, some silly Haskell fanboy is insisting his monadically composed programs are "purely functional" because he's confused programming with staged evaluation.

Programming is something programmers do, not something machines execute. It's about the abstractions, not the implementation.

If we program by imperative composition implemented atop a declarative model, we're doing imperative programming. But, unless you're trying to have your cake and eat it, the converse is also true. If we program by declarative composition, we're doing declarative programming, regardless of whether it is implemented above imperative control flow abstractions.

To say "the control flow exists somewhere" is at best an observation of modern hardware, and not an essential property of declarative programming.

The cake is a lie.

Declarative is not black or

Declarative is not black or white; it's actually a property of all encodings: the question is not "is this declarative?" but rather "how declarative is this?" It's about detail: how much do I need to deal with to do a task? "Make me a cake" is very declarative, while the recipe for a real cake is obviously less so, and becomes even less so if we have to deal with the chemistry of the ingredients or the construction of the cooking oven. Composability and such does not constitute "being declarative," but allows you to preserve declarative properties on reuse, while formal properties help hide details that would otherwise need to be known on reuse.

But this is a digression, let's go back to control flow.

Explicitly dealing with control flow means being exposed to a big detail, which contributes to being less declarative. The control flow never goes away: someone wrote your rule interpreter, your compiler; someone wrote the VHDL code for the CPU that is executing your code. The implementation is always programmed (or at least constructed)! You can program at many different levels as you see fit, or you can even program multiple layers of abstraction on your own. Or you can choose not to deal with control flow yourself; it's up to you.

That control flow exists somewhere is more of an observation about the universe and natural physics than about modern hardware. We will never be able to transcend this unless we are somehow able to transform the time dimension into a spatial one.

The extent to which a

The extent to which a program is declarative is certainly not binary, given the ability of programmers to switch between modes of expression and levels of abstraction as convenient.

[Declarative is] about detail

Not at all. To say an expression is declarative implies nothing about its level of detail. (cf. costume porn [trope].)

"Make me a cake" is very declarative

When read as an English sentence, "Make me a cake" is clearly imperative. In which language is it declarative?

the recipe for a real cake is obviously less [declarative]

"The" recipe? Which recipe? Why do you assume there is only one? Have you never even tried writing a declarative recipe?

The control flow never goes away [...] more of an observation on the universe and natural physics rather than modern hardware.

I disagree. Natural physics is very declarative by nature. Physical things "are". They shape the world around them, and are shaped in turn, just by their coexistence. Some of the things shaped happen to be dynamic in nature, such as the flow of water or wind... or electrons. There is no fundamental need for control flow concepts at the bottom level.

DSPs, FPGAs, memristors, and other technologies (including analog) have potential - if they vastly reduce in price and increase in programmer accessibility - to offer effective mechanisms for embedding declarative expressions or data flows all the way down to the metal.

We must address time. It can be convenient to distribute many traditionally temporal elements along spatial dimensions. But we don't need to transform time dimensions into spatial ones. To avoid control flow at the lowest levels, it is sufficient to transform spatial abstractions into spatial implementations.

When read as an English

When read as an English sentence, "Make me a cake" is clearly imperative. In which language is it declarative?

See, this is where we disagree. Obviously, "make me a cake" is a command, and commands should be imperatives, right? So the king says a declarative "I'm hungry" which the minister understands as meaning "I want to eat cake", who then instructs the eunuchs to "make me a cake" who then follow a recipe to make a cake, relying on the oven that the craftsmen built and the ingredients that farmers obtained from labor + biological inputs, which were transported to the palace via couriers and such, etc... In the context of what goes into a cake, "make me a cake" is very much closer to "I'm hungry" than anything else downstream!

Your definition focuses on the command aspect (make me a cake now); my definition focuses on the detail aspect (what actually goes into making a cake at each level?).

Natural physics is not declarative at the bottom, but there are some nice declarative properties that emerge from the bottom chaos of lots of stateful interactions (say, the law of gravity), just like a declarative abstraction (I'm hungry) can cleanly gloss over the complexity of imperative operations (cake making).

I'm claiming that it is impossible to be "declarative" all the way down to the metal. That DSP must deal with flip-flops eventually, and have you ever programmed an FPGA without resorting to flip-flop constructions? Even data flow requires buffering, and managing that buffering involves...control flow that someone has to encode. Hopefully, this control flow can often be abstracted away and forgotten, but it's still going to be there.

The declarative expression

The declarative expression of "make me a cake" isn't "I'm hungry". It is closer to "There will be cake, in my possession, baked by your hand."

There is no limit on declarative detail. I could declare the ingredients, and the designs in the icing. You seem to believe that there is a "detail aspect" that distinguishes declarative programming. I'm going to insist that you are simply, deeply wrong.

Yes, we need more detail (to provide it or generate it) at lower levels. That's just the nature of abstraction, orthogonal to declarative vs. imperative. (Are you sure you're not confusing "declarative" with "abstract"?)

Declarative does not mean "stateless". Declarative does not mean "deterministic". Declarative does not mean "simplified". If you stop assuming declarative is brain-dead by definition, you may discover there is much more to it.

Let's pick bones at the

Let's pick bones at the definition. So we have procedural and declarative memory, where declarative memory is about facts, figures, equations, and so on, while procedural memory is about learned unconscious skills that are pretty much reflexive.

Now, for programming, the opposite of declarative is no longer procedural, but for some reason it is imperative. But of course, this is not what Lloyd really meant, just how the definition grew to be rationalized. Whatever, the definition I'm working with is the what vs. how, which is definitely, as you call it, the nature of abstraction. "the recipe that starts on page 42" is an abstraction for an actual recipe.

If there are no limits on declarative detail, then any procedural/imperative program is a declarative one, since don't those instructions declare what the program does? This is how we get into trouble in Haskell, as you begin to express ordered computation using whatever transcoding is available...but at least the instructions were "declared."

Invariably, as you go down into lower levels of abstractions, there is code that declares something that involves control flow and time, even if it's just "check the cake in the oven after 40 minutes, remove if a golden brown color." How would you specify this otherwise? "Don't burn the cake"? Apply some equation based on the size of the cake being baked to determine the length of time spent in the oven?

My definition is useful for avoiding what I call the abyss of declarative programming: the belief that the world is all nice and expressible with nice math, a view that severely limits what sorts of problems we can solve.

If there are no limits on

If there are no limits on declarative detail, then any procedural/imperative program is a declarative one, since don't those instructions declare what the program does?

Not necessarily. A program might declare intermediate and final results in excruciating detail, without ever saying what a program does. One can express constraint systems that have only one solution.

Though, declaring what a program "does" is not actually a problem. Declaration is about structure of expression, not content. "What not how" is hand-waving nonsense for people who don't scrutinize too closely.

This is how we get into trouble in Haskell, as you begin to express ordered computation using whatever transcoding is available...but at least the instructions were "declared."

Haskell is many excellent imperative languages, and a fine example that we don't need much built-in control flow. The trouble is a rare few fools who insist on calling a subprogram 'declarative' (or pure) when it is clear they have expressed it (and must reason about it) imperatively.

Invariably, as you go down into lower levels of abstractions, there is code that declares something that involves control flow and time

Yes to time and state, no to control flow. Conflating them is incorrect. Control flow is a feature of the programming abstraction, conceptually independent of time or state in the problem domain.

My definition is useful for avoiding what I call the abyss of declarative programming.

Indeed, your straw man definitions, examples, and arguments paint a bleak future for anyone who develops declarative programming conformant with your definitions. It's a good thing nobody is doing that.

I don't believe in

I don't believe in declarative programming as an absolute, so I simply redefined my fears away. I've worked on 4 languages in this area so far, and have progressively become more pragmatic each time.

We really are just arguing about technical terminology. I have declarative vs. procedural to your declarative vs. imperative, while your control flow is strongly tied to imperative programming while my control flow is more like flow control. I grant that your definitions are probably more popular, but I still don't think they are very useful.

You referred to declarative

You referred to declarative vs. procedural memory, in the biological sense, which I found interesting. The words in programming, however, refer to grammatical moods or modes of speech and expression (i.e. imperative, declarative, interrogative). Procedural programming describes a particular structured form of imperative programming. Declarative has the same, sweeping nature as imperative; particular declarative programming models include constraint-logic and synchronous reactive.

Even if you focus on memory, declarative vs. procedural is about structure not content. For example, we can have procedural memory of riding a bike, and declarative knowledge of how to ride a bike, and the two are very different in their integration and application but not their content.

Given your predictable reactions to discussion of "declarative" programming, I think you've failed to "redefine your fears away". While you've embraced a broad notion of imperative in your own work (and I infer you've battled some obsession with "declarative" that has left you a little bitter), I've noticed you spend more than a little time exclaiming about temporal bogeymen in everyone else's declarative models. :p

I guess I'm lucky I came from the other side. I never had intentions of developing a "declarative" programming model for its own sake. My efforts prior to RDP involved actors, kell calculus, mobile agents. But I found that properties such as idempotence and commutativity were useful for mirroring and scale, that properties such as continuity were valuable for revocation and resource management. Ever so slowly, as I began to understand why and how to handle state and time declaratively (especially without use of `new`), my semi-imperative models drifted towards fully declarative.

FWIW, I think that your

FWIW, I think that your definition of declarative is indeed more useful than the other definitions I've seen. Everyone seems to agree that defining declarative as purely functional is useless because you can easily write a pure program that is for all practical purposes imperative, e.g. using the ST monad. Defining a language or programming paradigm as declarative if it is idempotent, commutative, concurrent and reactive suffers from exactly the same problem.

Therefore we should not apply the term to programming languages, but to programs. Some programs written in Haskell have a declarative flavor, some do not. The same applies to virtually every language. It is true that some paradigms allow you to express a specific class of programs in a declarative way more easily, but with certain kitchen appliances it is easier to bake a tasty cake, yet we don't call those appliances tasty.

Defining the declarativeness of a program as how close it is to its informal specification seems more useful and more in line with common usage of the term than the other definitions. For example if the goal is to make a cake, then saying "make a cake!" to one of your servants is more declarative than "mix 2 ounces of sugar, 2 ounces of butter, 2 ounces of flour, ..." (even though `mix` is commutative ;)). At least this is a property we actually care about, rather than a shallow syntactic property (like "purely functional" or "commutative") that is trying to approximate the symptoms of something we care about.

We don't need another synonym for 'good.'

I favor David's definition. Declarative should be about making declarations. Yes, you can do imperative programming in a declarative style ("You shall have done step 1, and next step 2, and next step 3."), but there are still useful properties associated with the declarative version: order and repetition don't matter. This is analogous to what David repeatedly misses about the importance of being purely functional and why Haskell's approach has advantages over a more overtly imperative language.

Structural Reasoning

I don't miss the fact that Haskell's approach has advantages over most imperative languages: constraining effects to the "spine" of the monad greatly simplifies reasoning, maintenance, and control of effects. Haskell uses pure functions and its rich type system to achieve this. But, having seen other means to achieve similar structural reasoning, I simply do not attribute this benefit to "the importance of being purely functional". That strikes me as confusing a means with an ends.

That little rib against me aside, I agree with your statement.

My rib

My rib was a hastily worded reference to our recent discussion about whether Haskell is purely functional, because I think the situations are analogous. Haskell is purely functional because the semantics are about immutable values. That's useful even though some of those values are essentially imperative programs and the language can be used to build them in a way that is essentially imperative programming. Similarly in a declarative language you can be declaratively describing imperative steps and can be programming in an imperative style.

Abstraction is language

Abstraction is language manipulation. Programming is abstraction manipulation. I'd prefer to characterize certain abstractions or modes of expression as purely functional, or declarative, or imperative. Not full languages.

The base does matter, of course, having an enormous impact on the paths of lesser or greater resistance.

Keep this in mind: the reason Haskell is better for imperative is not because it has a purely functional abstraction type, but rather because it is too difficult to syntactically abstract `liftMk` and inject appropriate `return` operations across every expression in the `do` notation... and the many people who tried thus failed to make this mistake.

(Also note: there are many languages that lack pervasively mutable state, that emphasize immutable values, yet that also lack pure functions. Those are orthogonal concepts.)

It's not a synonym for

It's not a synonym for 'good'. Good can mean a variety of things in various contexts: fast to execute, cheap to build, free of bugs, etc. "similar to informal specification" is just one of them.

Deep Syntactic Properties

Achieving those "shallow syntactic properties" has a very deep and pervasive impact on semantics. Don't underestimate the power of syntax; it shapes everything programmers create.

It is not simulation that results in cargo cult programming; rather, it is a failure to understand or appreciate depth and context.

even though `mix` is commutative ;)

Indeed it is! And if the ingredients were instead subprograms or subcomponents, `mix` could make an excellent declarative operator.

I have a similar operator in Sirea, with the symbol `|*|`. This represents concurrent composition of behaviors while dropping the response signals, i.e. such that the behaviors are expressed only for their effects. It's excellent for setting multiple agents on a case: `fred |*| velma |*| scooby`.
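A loose Python analogy of such an effects-only composition (an illustration only, not Sirea's actual Haskell API; it also runs the behaviors sequentially, whereas `|*|` composes them concurrently):

```python
def par_effects(*behaviors):
    """Compose behaviors to be run only for their effects,
    discarding any return values (loosely analogous to |*|)."""
    def composed(x):
        for b in behaviors:
            b(x)          # result intentionally dropped
        return None
    return composed

log = []
fred   = lambda clue: log.append(("fred", clue))
velma  = lambda clue: log.append(("velma", clue))
scooby = lambda clue: log.append(("scooby", clue))

# Set multiple agents on a case:
team = par_effects(fred, velma, scooby)
team("footprints")
```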

"...which the minister understands as..."

I call shenanigans. Isn't that sentence the essence of conflating syntax and semantics?

For that matter, isn't what the king says also connected to how he says it, which you are expecting us to understand as human beings, but are also expecting us to ignore in your interpretation of the king's declaration as being a command to assuage said hunger?

Just to muddy the waters some more, if the king were playing chess, and made that "declaration," it would not have been a command, except in the form of warning his opponent that the king intended to play more aggressively.

My point being, that you are taking "a declaration" of state, and conflating the declaration with resultant behavior set up to respond to the declaration.

tl;dr - "I call shenanigans."

Enable lines in a data flow

Enable lines in a data flow language [...] if you have to work with too many of those, you would have been better off in a language with stronger control flow abstractions.

I disagree for a few reasons.

First, injecting control data is quite trivial in a bidirectional data flow model such as RDP. Developers don't need to fight the system to introduce "enable lines" even late in development. Similarly, control data is relatively trivial to introduce in temporal logics or rules systems (which are not the same as dataflows, but can be declarative).

Second, the workflows encountered in many problem domains tend to require very ad-hoc states, waits, and conditions. It is rare that a control-flow model is immediately well suited to external problem domains. If you build in control flow abstractions, that becomes another thing for developers to work around. You can see this common practice today: event loops, IoC frameworks, ORMs, etc.

Better to save your users many headaches and build in only a declarative model and access to state resources. Developers can support ad-hoc workflows without the abstraction inversions and accidental complexity of directly integrating diverse control-flow models.
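A toy sketch of the "enable line" idea as just another data input (illustrative Python, not any particular dataflow language): the control data flows alongside the payload instead of being a separate control-flow construct.

```python
def gate(enable_stream, value_stream):
    """A dataflow-style gate: the enable line is just another input
    stream; a value passes through only when its enable is True."""
    return [v for en, v in zip(enable_stream, value_stream) if en]

enables = [True, False, True, True]
values  = [10, 20, 30, 40]
passed = gate(enables, values)   # only the enabled values flow through
```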

Crossing Lines

The "no crossing lines" constraint is interesting: it imposes an upper bound on program complexity based on aesthetics.

I agree that DRAKON's constraint against crossing lines is interesting.

Though, I'm not sure it would work for data flow. At least for RDP, I make considerable use of `swap` and `mirror` behaviors to reorganize parallel data pipelines (wires) into a proper format for a stage of processing that takes more than one input. It seems to me that a "no crossing lines" constraint would easily become a global constraint, difficult to modularize.

But perhaps it would be more acceptable if crossing lines were an explicit behavior:

   |   |     
 ++|+++|++     x y
 |  \ /  |     | |
 |   X   |    bswap
 |  / \  |     | |
 ++|+++|++     y x
   |   |
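In an arrowized Haskell setting, the explicit crossing behavior pictured above could be a first-class combinator. A minimal sketch using plain `Control.Arrow`; `bswap` is simply the name taken from the diagram:

```haskell
import Control.Arrow (Arrow, arr, (>>>), (***))

-- An explicit "crossing" behavior: swap two parallel wires.
-- For the plain function instance of Arrow this is tuple swap.
bswap :: Arrow a => a (x, y) (y, x)
bswap = arr (\(x, y) -> (y, x))

-- Example pipeline: run two independent stages, then cross the
-- wires once so results reach the next stage in the expected order.
crossed :: (Int, Int) -> (Int, Int)
crossed = ((+ 1) *** (* 2)) >>> bswap
```

Making the crossing a deliberate, named step keeps the rest of the diagram planar, so the "no crossing lines" rule is violated only at explicitly marked points.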

I've been considering approaches to avoid crossing by trading it for auto-wiring based on matching labels or roles (types), perhaps scoped by location or proximity. Traits-based composition of declarative objects would be a good fit for my envisioned programming aesthetic. At the moment, I'm at a loss for how to model auto-wiring effectively in Haskell without sacrificing Haskell's compile-time safety. I know I can achieve staged safety (using Data.Typeable). But I'm not fully satisfied with that.

Data flow and control flow

there are ways to express decisions in data (e.g. sum types). A highly declarative language will only need data flow.

Pattern matching over sum types (and inductive types, more generally) is control flow, by any standard. Data flow and control flow are dual to each other[1]; arguably, neither is superior to the other. One could also argue that "combining" data flow and control flow on the same diagram cannot really be done in a satisfactory way, i.e. that any such attempt will necessarily introduce inelegancies and kludges in the visual language. (That said, it seems that DRAKON-FBD uses a workable approach, in that some action and condition boxes are expressly identified as "subdiagrams" of the other type.)

[1] E. S. Bainbridge. Feedback and generalized logic. Information and Control, 31:75--96, 1976.
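To make the claim concrete, here is the same two-way decision written both ways in Haskell, a sketch assuming the standard `ArrowChoice` instance for functions: pattern matching (the control-flow reading) versus routing an `Either` through two merged wires with `(|||)` (the data-flow reading):

```haskell
import Control.Arrow ((|||))

-- A shape is a sum: a circle's radius, or a rectangle's sides.
type Shape = Either Double (Double, Double)

-- Control-flow view: pattern matching decides which branch runs.
area :: Shape -> Double
area (Left r)       = pi * r * r
area (Right (w, h)) = w * h

-- Data-flow view: the same decision expressed as routing, merging
-- two function "wires" over the Either input with (|||).
area' :: Shape -> Double
area' = (\r -> pi * r * r) ||| (\(w, h) -> w * h)
```

The two definitions are extensionally identical; the dispute in this thread is about which reading is primitive, not about what they compute.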

Data flow is not dual to

Data flow is not dual to control flow.

The duality Bainbridge observes requires an assumption of pervasive state (expressed through closed feedback loops with unit delay). This pervasive state is not essential for data flow or pattern matching, yet is essential for control flow. Also, the flow charts and networks Bainbridge uses are not prototypical of control flow or data flow.

Data flow is most strongly characterized by continuous propagation of data through a relatively static computation. That continuity typically represents streams of values (e.g. streaming a file) or changes in a value over time (e.g. whole file as time-varying string). Leveraging sum types, one can model switching networks in a data flow system. (Note: "pattern matching" is a different concept, and isn't required.)
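A tiny Haskell sketch of such a switching network: the network itself stays static (one `map` over a stream), and each sum-typed value is routed down one of two wires by the `either` eliminator rather than by pattern-matching syntax:

```haskell
-- A static network over a stream of sum-typed values.
-- The `either` eliminator does the switching at each value;
-- no pattern-matching syntax appears at the use site.
switch :: [Either Int Int] -> [Int]
switch = map (either negate (* 2))

-- switch [Left 3, Right 4, Left 1] == [-3, 8, -1]
```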

Control flow is characterized by describing which parts of a program are 'in control' over time. This concept is stateful by nature, and also is rather awkward for concurrency. Control flow has no built-in notion of continuous data streams or updates, nor even of input (other than receipt of control).

Control flow and data flow are informal concepts. It is difficult to make formal claims about them. Duality, in the category theory sense, is a very simple, important, and formal relationship between models. While the idea that such a relationship should exist is appealing, I do not think you will find such a simple relationship between control flow and data flow systems. At least, you won't find such a relationship without a lot of stretching and questionable assumptions.

Data-control flow

Pattern matching over sum types (and inductive types, more generally) is control flow, by any standard. Data flow and control flow are dual to each other; arguably, neither is superior to the other.

It seems that the duality you're talking about is basically the duality of sum types and product types, and the diagramming problem is the same as the one for MALL proof nets.

While I'd like to think arrowized operations on sum types are part of the "data" flow, I've been confused about the meaning of "control flow" for a while, and your explanation finally gives me a usable definition.

Data-control flow might be a good term to use for the generalized graphs David Barbour and I are thinking of.

One could also argue that "combining" data flow and control flow on the same diagram cannot really be done in a satisfactory way, i.e. that any such attempt will necessarily introduce inelegancies and kludges in the visual language.

Well, it's a tall order to ask for a programming language with no visual kludges. ;)

I'm finding data-control flow diagrams are a nice mental model for me to align my programming abstractions with concepts of category theory and linear logic. At least a couple of times over the last few months, I've been able to change a master-slave relationship into a symmetrical relationship, and this has revealed opportunities for simplification and code reuse. The modeling technique is all in my head for now, but a graphical view of my code might come in handy.

Visually speaking, I think of a data-control flow diagram as a two-dimensional graph that sinks into a third dimension with a many-worlds interpretation. Some edges are absent in certain worlds, and the possible worlds are arranged in such a way that they minimize the edges of the resulting 2-manifold. This manifold is essentially made of toilet paper: Products are multi-ply, while sums have perforations.

Many worlds

Any ideas on how to visualize many worlds (probabilistic) data-control flows?

Few worlds

That regrettably adds another dimension of possibilities to the diagram, making it four-dimensional.

The only reason I get away with three dimensions is that I imagine a small number of sum-typed values overall--as in, maybe two. If we encoded three bits as a product of sums, giving eight possibilities, they'd probably need to be arranged by Gray coding in order for the diagram to be comprehensibly continuous.
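For reference, the standard reflected construction of the Gray code, sketched in Haskell; for three bits it produces the eight possibilities in an order where adjacent codes differ in exactly one bit:

```haskell
-- Reflected binary Gray code: prefix 0 to the (n-1)-bit codes,
-- then prefix 1 to the same codes in reverse order, so every
-- adjacent pair of codes differs in exactly one bit.
gray :: Int -> [[Int]]
gray 0 = [[]]
gray n = map (0 :) g ++ map (1 :) (reverse g)
  where g = gray (n - 1)

-- gray 3 == [[0,0,0],[0,0,1],[0,1,1],[0,1,0],
--            [1,1,0],[1,1,1],[1,0,1],[1,0,0]]
```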

In practice, I might try to use opacity as a substitute for dimension. A sum type would correspond to a perforated plane with a different opacity on each side:

                   ,;########/  A
A + B   /=======:
       /_______  \
               \  `-------,
                \________/   B