Graphical languages of the Russian space program

I found this very interesting: What software programming languages were used by the Soviet Union's space program?

The languages DRAGON and Grafit-Floks appear to be graphical. You can see some videos of working with DRAGON.


I want to like graphical languages

I wish graphical programming languages were as dense as text languages. (Or if that can't be done, maybe we can show text languages in a graphical form?) Apologies if folks have heard part of the following story before.

I worked on one in 1993, on NeXT workstations, intended for use in building financial instruments on Wall Street, especially derivatives. (This was before the derivatives blow-up, at a company whose main selling product was screen-based derivatives software.) I changed the engine to make it object-oriented, by making it possible to conform to abstract interfaces, if you explicitly mapped method correspondences in the graphical UI. It was only necessary to get covariance and contravariance right in suitable places.

At the time I had been fascinated by the idea of graphical languages for about ten years -- you can insert whatever mysticism you want here to explain my hope that this was a great idea. So I was excited to work on one, until I saw it used in practice and realized the UI had a low density of relationship display per area of pixels.

I tried a few examples of writing someone's graphic code in text form to see what happened, and the text was always far smaller. More importantly, the graph was still there, just implicit in how name bindings work. That's when I had an epiphany of sorts. I decided programmers good at text languages could see the graph in the text. And folks mystified by programming could not see the graph, and that was the main problem. Of course I'm simplifying a bit.

My story's done after the last paragraph, but I still believe a related hypothesis that's hard to test, so I don't know. Occasionally I work with a less skilled programmer who can't see the graph and their work sucks, as if they attribute magical qualities to cutting and pasting text, without themselves being responsible for effects on the graph.

This is a great realization.

This is a great realization. Perhaps textual languages could better be supported by graph viewers! I know this really helps compiler development (graphviz is indispensable during development).

To describe my own related epiphany that I had while working on my dissertation: there is a big leap in cognitive load when the graph becomes higher order, where there are edges in the graph that you can't see in the code because they are established via assignment or function arguments. This is when I realized that doing tricky things with functions wasn't necessarily a good thing, and that we should strive for language constructs that keep the graph as first-order as possible.


we should strive for language constructs that keep the graph as first-order as possible

I believe there is a strong relationship between this constraint and point-free programming.

You mean an

You mean an anti-relationship, right? ;-)

I've always found the point

I've always found the point-free syntactic style very clever in languages that chose it. But to be honest, while it's equipped with semantically attractive simplicity (i.e. minimalist, a single implicit function parameter assumed, at most a pair of terminal tokens needed to enclose all kinds of terms, etc.), I eventually have a hard time past a certain nesting depth. Especially when the computation context is very homogeneous (the same basic types over and over again, or lists of lists, etc.).

I think heavily nested point-free style is less problematic in more heterogeneous computations (in terms of the number of kinds of types at play) and/or with more namespace clues to help the brain get around it.

With point-free programming,

With point-free programming, abstractions are simple subgraphs to be installed. So I meant a positive relationship.

Point-free doesn't need to be functional, though. It can be any form of composition by proximity or juxtaposition. The sort of mixin/traits approaches pursued by Sean seem to lean towards point-free.
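As a rough illustration of "abstractions are simple subgraphs to be installed" (a minimal Python sketch; the helper names are made up): the point-free pipeline names no intermediate values, so the dataflow graph is just its stages placed next to one another.

```python
from functools import reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Pointful: every intermediate value in the graph gets a name.
def total_pointful(xs):
    positives = [x for x in xs if x > 0]
    doubled = [2 * x for x in positives]
    return sum(doubled)

# Point-free: each stage is a subgraph, and the pipeline is just
# the stages installed adjacent to one another.
keep_positive = lambda xs: [x for x in xs if x > 0]
double_all = lambda xs: [2 * x for x in xs]
total_pointfree = compose(sum, double_all, keep_positive)

print(total_pointful([3, -1, 4]))   # 14
print(total_pointfree([3, -1, 4]))  # 14
```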

Yeah, there is a sort of

Yeah, there is a sort of intrinsic beauty in concatenative languages:

concatenation of their words may either mean (imperative) sequential composition of effects on the environment's state (i.e., over time), say, a la ALGOL's compound statements in blocks, or pure functional composition (with respect to lexical scopes, in their non-concatenative language counterparts).

Or even both: parallelization, over time and space, of two or more computation contexts or tasks to perform.

(At the language designer's discretion)

Maybe concatenation would be the very essence of computation after all.

But then, what would the initial seed be?

(speculative/rhetorical question, doesn't really require an answer ;-)
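The "concatenation of words = composition of effects on the environment's state" reading above can be sketched with a toy stack machine (an illustrative Python sketch, not any particular concatenative language; the word set is made up):

```python
def run(program, stack=None):
    """Interpret a toy concatenative program against a stack."""
    stack = list(stack or [])
    words = {
        "dup": lambda s: s.append(s[-1]),
        "add": lambda s: s.append(s.pop() + s.pop()),
        "mul": lambda s: s.append(s.pop() * s.pop()),
    }
    for w in program:
        if isinstance(w, int):
            stack.append(w)   # literals push themselves
        else:
            words[w](stack)   # words are effects on the shared stack
    return stack

square = ["dup", "mul"]
increment = [1, "add"]

# Concatenating programs composes their effects over time:
print(run([5] + square + increment))  # [26]
```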

I believe there is a strong

I believe there is a strong relationship between this constraint and point-free programming.

I don't understand your feedback. My point was to keep the graph as explicit as possible. So for example, if your language supports auto-lifting, people.Age is a list of ages and you don't have to write people.Select(p => p.Age). Signals and constraints are other great examples of how we can keep our graph explicit, but eventually we have to abstract over that graph. The nature of that abstraction is very important: is it higher-order functions, or matching on existing graph structure?

Edit: Ah, I get it now; I'm slow :)
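For readers unfamiliar with auto-lifting, here is a rough sketch of the people.Age idea in Python (the Lifted and Person classes are hypothetical, just to show attribute access lifted over a collection so the projection edge in the graph stays explicit):

```python
class Lifted(list):
    """A collection whose attribute access is auto-lifted over its elements."""
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails,
        # so list's own attributes are unaffected.
        return Lifted(getattr(item, name) for item in self)

class Person:
    def __init__(self, name, age):
        self.name, self.age = name, age

people = Lifted([Person("Ada", 36), Person("Alan", 41)])

# people.age behaves like [p.age for p in people], without an explicit lambda.
print(people.age)  # [36, 41]
```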

select ~~ fmap

If the graphical language is of the "wiring boxes together to expose dataflow" type, then there is a nice representation of monoidal functors which can probably be carried over to non-monoidal ones by modifying it slightly and adding some restrictions.

In the same spirit, John Baez and Mike Stay came up with a nice way of representing "hom objects" in a closed category (see their Rosetta Stone paper), which could be used to represent first-class functions, at least in some contexts. (In all fairness, their representation was criticized for being ad hoc and relying on unclear correctness considerations.)
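On the title's point that select ~~ fmap: fmap applies an ordinary function inside a container without opening it, and LINQ's Select is exactly this operation for sequences. A minimal sketch for two example functors (plain Python; the function names are made up):

```python
def fmap_list(f, xs):
    """The list functor: LINQ's Select, Haskell's fmap for lists."""
    return [f(x) for x in xs]

def fmap_maybe(f, x):
    """A Maybe/Optional-like functor: None propagates unchanged."""
    return None if x is None else f(x)

print(fmap_list(len, ["wiring", "boxes"]))  # [6, 5]
print(fmap_maybe(len, "wiring"))            # 6
print(fmap_maybe(len, None))                # None
```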

view oriented programming?

My several comments might seem disjointed, since I try to move on to new bits quickly without much follow-through. It's easier to expand parts folks find interesting. (I'm mulling over David's point-free comment and don't have anything yet.)

Sean McDirmid: Perhaps textual languages could better be supported by graph viewers! ... there is a big leap in cognitive load when the graph becomes higher order

Turning a realization into an actionable plan seems hard when it requires grasping a model in someone's mind: undefined specifics. Managing cognitive load is interesting but deceptively complex. It would be nice if there were some scalable abstraction that allowed changing the granularity of view depending on the scope chosen for some sub-problem being analyzed. But sometimes leaving out detail amounts to lying, when it steers you to invalid ideas.

I like the idea of graph viewers. More than one way of looking at things ought to help. But you might need a developer to specify how each sort of view is organized, since a machine might not be able to infer intended structure that's an emergent effect of another view. Then a compiler could say if two views were consistent. (A new class of compiler error -- "make your two views consistent" -- might antagonize developers.)

Though I work in C lately, an important part of the docs is (ascii) sequence diagrams that express how a distributed conversation works, since otherwise the code is impenetrably obtuse when no local cause-and-effect is evident. I think it's too bad sequence diagrams are not part of the code, so the compiler can't help, and there's no automated tool to trace, audit, and visualize them.

If programming were like a video game (here's where we have a William Gibson flashback), resolution of detail would vary as you moved about in code space. Coding in text often feels like needing to know everything at all times, modulo abstraction here and there, and slowly getting overwhelmed as scale increases and old code must be teased apart and reworked. Somewhere you eventually have an experience like a one panel cartoon, where two ends of a bridge approach each other misaligned and a guy in a hardhat standing on the edge says, "Oh, wow."

There used to be schemes to systematically avoid things like that -- DeMarco's structured analysis comes to mind -- but I'm not sure why we bailed on them. There was no consensus that it can't work. Maybe internet time sent us all on a drunken binge, and the party has since moved on to web apps before dawn the next morning. Or maybe UML staked out that niche and asks folks for spare change if they make the mistake of looking in that direction. :-)

There is a problem that adding detail gets overwhelming, even if it's graphs. Progressive disclosure seems a good fit for how our minds work.

Programming and User Interface

If programming were like a video game (here's where we have a William Gibson flashback), resolution of detail would vary as you moved about in code space.

The further I pursue Live Programming, the more I find there is no strong distinction between programming and user interface... and hence no strong distinction between PL design and UI design.

Live Programming marries the fields of PL and UI into Programming Experience Design (PXD, a term recently coined, I think, by Sean). UIs are just domain-specific live programming tools. Every checkbox or textbox carries a subprogram that influences the system (though sometimes a trivial one, interpreted primarily by humans).

We should be adopting UI principles when we design our PLs and programming models, especially if we desire the improved productivity and immediate feedback achievable with live programming. Progressive disclosure. Zoomable user interfaces. Multiple and bi-directional views. Secure Interaction Design. Undo, and avoiding commitment. Support for ambiguous specification (i.e. ambiguous user input), and dialogs for clarification. Multi-agent systems, where agents include users and applications. Conversely, our UIs should be more like programs - mashable, composable, usable by software agents as easily as by humans; e.g. naked objects with simple transforms provided by a user-agent.

This relationship between PL and UI is something Sean McDirmid has explored more extensively and concretely than I have, e.g. in coding with touch, but it is hardly a new thought. Alan Kay and Smalltalk are well known for the marriage of UI and PL (even entering 3D space with the Croquet Project). Roly Perera, who explores interactive programming, pointed me to an excellent video of ARK (the Alternate Reality Kit) from 1987.

I envision a world where I can live program (= command and control, with arbitrary precision) my robot army on the fly with my terminator glasses and a gesture-based interface. I've been developing an EyeAndHand concept based around object browsers and ZUI features (navigation, automatic layout), with virtual lenses (which can provide HUD extensions, world overlays or annotations, X-ray vision for certain virtual objects, view transforms, etc.), and virtual hand tools (brushes, hammers, keys, bags, even an `inventory` concept like a game - much richer than any clipboard) to support composition and manipulation of virtual objects. These aren't new ideas, but there is much to gain by formalizing them. And glasses offer a new, huge virtual space for our objects and visual programming - much larger than any monitor.

My RDP paradigm is designed to support my vision.

And RDP happens to be point-free - i.e. based on Arrow composition, with multi-agent shared state models for parallel composition of effects, and I'm even pursuing constraint-based linkers as an orthogonal form of point-free composition, which achieves much of what Sean aims for with prescriptive traits.

The reason I made it point-free is in part to address some concerns like progressive disclosure and avoiding discontinuity spikes in the Programming Experience. Rather than having structure and abstraction be two separate things, they should be the same thing - abstraction is structure.


As a passing insight, not meant to pit anything against anything else, but merely to observe an interesting commonality between seemingly diverse topics — I've observed elsewhere that Lisp gets a key part of its nature by collapsing the distinction between syntax and data. It seems reasonable to view syntax as in the realm of UI, and data as in the realm of programming — so that Lisp's equating of syntax with data is (in its way) an equating of UI and programming.

Lisp is only doing this

Lisp is only doing this partially.

First of all, sexprs are more structured than lists of unicode characters, but still less structured than actual Lisp syntax. Actual syntax like `cond` has to go through an encoding/parsing step to/from sexprs, instead of being a structured object itself. This is just like conventional syntax has to go through an encoding/parsing step to/from flat text (although the step here is of course much bigger).

Second, most Lisps are still using lists of unicode characters to encode sexprs instead of using a structured editor. So the actual UI that programmers see is still the flat text.

So while Lisp is close, it's not quite there yet.

this and that

I would say Lisp isn't doing 'this' at all; it's doing that, a different thing that has parallels to 'this'. I don't think either is 'more' or 'less' than the other. Lisp is, as it always has been, off in a different direction from everything else.

Though I don't agree about the encoding/parsing step to/from sexprs, probably because Lisp in my world uses fexprs. All data is first-class, and all first-class objects have the right to be evaluated.

I don't understand what you

I don't understand what you mean by your first paragraph. Can you make it more precise?

Encoding/parsing: take Tcl. Tcl is quite similar to Lisp with fexprs in many ways. A major difference is that instead of using sexprs to represent code, it uses strings. So you could argue that in Tcl code is data, just like in Lisp. C code is also represented by strings, and C supports strings, so you could argue that in C code is data too. The important difference is that strings are a very unstructured and difficult-to-handle format. For example, if you want to write a compiler or interpreter for C, it's unlikely that you'll represent code as strings: you'll first parse the code to an AST, and then work with that.

The same applies to Lisp, to a lesser degree. If you write a Lisp interpreter it would be possible to work directly on sexprs, and relative to working with strings that would be quite convenient. But what you'd probably do is "parse" the sexprs into a more structured format. Instead of representing a defun as (list 'defun (list ...) ), you transform it into an object of type Defun with a name property, a params property, and a body property.

Even with fexprs you still have to take the code represented as sexprs apart to evaluate it, although you might not build a more structured intermediate representation, as it gets discarded right away anyway. But the (small) decoding step is still there.
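The "small decoding step" described above, from a defun sexpr to a structured object, might look like this (an illustrative Python sketch, with sexprs modeled as nested lists; the Defun class and parse_defun function are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Defun:
    name: str
    params: list
    body: list

def parse_defun(sexpr):
    """Decode a defun sexpr into a structured AST object."""
    head, name, params, body = sexpr
    if head != "defun":
        raise ValueError("not a defun form")
    return Defun(name, params, body)

# (defun foo (a) (+ a 1)), with nested Python lists standing in for sexprs:
ast = parse_defun(["defun", "foo", ["a"], ["+", "a", 1]])
print(ast.name, ast.params, ast.body)  # foo ['a'] ['+', 'a', 1]
```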

Perhaps we are speaking from

Perhaps we are speaking from such disparate mindsets that communication is simply failing to spark. To perhaps clarify somewhat —

C code is also represented by strings, and C supports strings, so you could argue that in C code is data too.

I, for my part, don't see any useful similarity there to what Lisp does; it lacks basically everything that makes (that facet of) Lisp of interest. This may be related to the difficulty over the earlier paragraph. One wouldn't (I would think) compare a semigroup to a division ring by saying the division ring is just not quite there.

I don't know how else to put

I don't know how else to put this so that the communication will spark. What makes sexprs more of interest than strings for representing code? That they are more structured. Is there something more structured than sexprs? Yes: ASTs.

I'm just saying that Lisp is not on the extreme end of the spectrum of "code=data". It's somewhere in the middle between the unstructured representation called strings and the structured representation called ASTs. For your math analogy, you could say that semigroups are far from fields, while division rings are almost there ;)

I should know better than to

I should know better than to attempt an analogy when there's already likely to be a mindset problem; analogies eventually break down even when everyone's on the same page.

I suspect I don't know what you mean by "sexprs". I'd disagree, on the face of it, with the claim that Lisp's data is less structured than ASTs; Lisp's data structures are ASTs. Key to the nature of Lisp is that evaluation of data structures is integral (in fact, core) to the computation model (this would, of course, be the fexpr-oriented version of Lisp computation as in Kernel). I also don't really buy that there is a spectrum of "code=data"; the relationship between code and data is at least multidimensional (and one might not be able to name the dimensions, in practice; it's likely some odd fractional number of dimensions without readily identifiable orthogonal axes).

Hmm... Maybe I'm just

Hmm... Maybe I'm just stating the obvious, but I think code and data are essentially the same specimen of a thing from, and only from, the perspective of a tangible Turing machine you have built beforehand in some hardware, be it mechanical, or electric, or electronic, or whatnot that is faithful to Turing's machine model.

I believe only the human eye can decide to make a distinction and delimit it, between code and data, likely much arbitrarily and varying from one person to the other.

Otherwise, code vs. data are just two binary strings you feed the aforementioned tangible TM, which down the road produces a third one (a base-2 string), and that's computation.

It is more interesting to wonder about what makes our ways to look at languages in either roles (code role vs. data role) I think.

Probably, that's where you and Jules have different points of view to do so (?)

Take Common Lisp's

Take Common Lisp's defun.

Structure of array of characters:

(, d, e, f, u, n, _, f, o, o, _, (, a, ), _, (, +, a, 1, ), )

Structure of sexpr:

[defun, foo,  ,  ]
            /   \ 
           /     \
         [a]    [+, a, 1]

Structure of AST:

      defun
     /  |  \
    /   |   \
  foo   a    +
            / \
           a   1

Where the first line coming from defun is labeled "name", the second line "args" and the third line "body" (was too hard to do in ASCII art).

Each of the above is representing code as data. The difference is how much the structure is directly represented, and how much structure is encoded in some way. Of course the situation is complicated, but if you want to distinguish between Lisp and C and say that in Lisp code is data but in C it isn't, then you're probably going to make a distinction between how sexprs represent the structure of code while strings do not. If so, then you can make the same distinction between sexprs and ASTs that are represented as structured objects (e.g. a defun object with name, args and body properties).

Quoting from YShap:

I believe only the human eye can decide to make a distinction and delimit it, between code and data, likely much arbitrarily and varying from one person to the other. [...] It is more interesting to wonder about what makes our ways to look at languages in either roles (code role vs. data role) I think.

Exactly. I think that the most important human distinction is how structured the representation is, i.e. how tailored the data representation is to the code that we want to represent. Hence C strings don't really qualify but sexprs do, and structured ASTs even more so (at least from the practical point of view of how easy it is to manipulate code as data programmatically).

That does give me a clearer

That does give me a clearer notion of what you have in mind when you say "sexpr"; thanks. Though I've a feeling the difference between sexpr and AST above is mainly in how you're depicting them. This however strikes me as not too far afield from discussing how many angels can dance on the head of a pin. We can, perhaps, agree that a key concept here is structure; a character string has no intrinsic syntactic structure to speak of, and very nearly all character strings would be considered illegal if one attempted to treat them as code. Lisp is closer to making all data values legal code, and Kernel is closer to that than most Lisps — as I've remarked, I see the right to be evaluated as part of first-class-ness.

It's a solid point, noting YSharp's remarks, that the difference between code and data is ultimately one of intent. Lisp structurally precludes a great deal that wouldn't be cogent as code, but even if all Lisp values are evaluable, some would result straightforwardly in errors (an unbound symbol, say, or a combination whose car doesn't evaluate to a combiner); and whether evaluating a value would cause an error is not always a predictor, either way, of whether it's meant to be evaluated.

It's a solid point, [...]

It's a solid point, [...] that the difference between code and data is ultimately one of intent.

Nice, both of you got me right. :) Indeed, I love these debates as I can often relate them to this (maybe crazy) idea I have had for a while, now.

I'd like to make a closed-world assumption and think that we can maybe find another use for URIs, reifying such intents in language designs, and see how that could translate into more friendly, cooperative implementations.

My intuition is that under this assumption, keeping our feet on the ground, there is no fundamental reason why we couldn't do better at lowering software chaos by reifying such notions, now that we've equipped ourselves with a global network infrastructure and naming system.

Hence I told myself, four years ago: why not just start thinking about it, since we can easily show that a term rewriting system whose constants are language names and previously reified intents (in URIs) has the nice property of being strongly normalizing (hence, of course, NOT Turing-complete, but that's OK and desirable, since under this assumption humans and their chosen naming authorities are to remain in control of defining what computation to delegate to the machine; the idea is to bring more order to language phrases, NOT to devise the "ultimate" machine, which would be silly I think).

E.g., via the flattened T-diagrams traditionally used to denote compiler bootstraps:

(L I M) (I T J) |- (L J M)

Exercise for whomever this isn't nonsensical: find (a) where to put the language and intent URIs, and (b) where/what to recognize as resolvable ones (URLs) in order to acquire the corresponding computational resources.
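One way to read the rule above, assuming the triple (L I M) means "a translator from L to M, implemented in I" (my assumption, not a convention stated here): the second diagram translates the first one's implementation language, yielding the same translator implemented in J. A toy sketch:

```python
def step(d1, d2):
    """One rewrite: translate d1's implementation language using d2.

    Reads (src, impl, tgt) as "a translator from src to tgt written in impl".
    """
    src, impl, tgt = d1
    src2, _impl2, tgt2 = d2
    if impl == src2:              # d2 can translate d1's implementation
        return (src, tgt2, tgt)   # (L I M) (I T J) |- (L J M)
    return None

print(step(("L", "I", "M"), ("I", "T", "J")))  # ('L', 'J', 'M')
```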


Code in C

Functions in C are not objects, and are not represented by byte sequences.

Which makes it a problem to argue that code in C is data.

You make a number of very

You make a number of very fair points.

Let me contribute in the same vein with something I've figured for a good while now. To be pedantic, it's about yet another form of impedance mismatch between, say, 2D shape-based software artifacts (representations thereof, anyway) and textual ones (mostly one-dimensional).

Have you ever wondered how it is that we can by now so easily zoom in and out of pretty complex graphical instances and yet still not quite achieve, as you mention, the same density as found in text?

Doesn't that seem somewhat paradoxical at first sight?

I think regularly about the same metaphor, and of course still haven't found the most satisfying answer to it. And that's where I think the "impedance mismatch" between graphical and textual representation capabilities for software is.

I see large scale software design and implementation issues' complexity (be it accidental or essential or a mix thereof) as a very heavy muddy heap that is our shared burden. We try hard to somehow dissolve or get rid of that heap while also trying to satisfy our users. That's the heap of the applications' complexity, essentially. That ugly heap sits on some sort of a large board that's laying under. Call that board our few allies, our "elements of certainty", in PLT and/or software engineering if you wish. Say, the foundations. The most important research results and/or the most well proven empirically built platforms and components.

But it's not just about the ugly heap and the foundations, which are both legacies: the former is most often seen as the "negative" one, while the latter is most often very welcome to be reminded of, rediscovered, re-studied.

There is also an important pole and its pivot:

Thus, we ought to think also of all these important positive principles: referential transparency, separation of concerns, side-effect-free computations, loosely coupled components, etc.

All these are often rather abstract, although they can also often be explained with a short and nice example. They are powerful because they have a low barrier to entry: they can be understood even by junior programmers, after a while of suffering enough in the debugger because of code that completely ignored them.

So, that's how I see abstraction:

it is like that lever and its pivot: a raw, bare, simple, more or less long, but always robust and resilient pole we can use to lift the muddy heap off the ground and start kicking hard into it (the heap) to break it down into chunks and smaller bits, to get rid of them (when we're lucky) and make our burden lighter.

The longer the part of the lever on the opposite side from the heap (relative to the pivot), the luckier we get when using abstraction(s).

It seems to me that, for some strange reason, abstraction via text-based artifacts still manages to get us farther out from the pivot than graphical ones do, for the same heap of complexity to deal with (on the opposite side of the pivot).

How come?

Yes, abstraction is like that lever, I think.

And what does the pivot represent? I feel it's similar to the relevance of applying our problem-solving force through one particular abstraction (or a few) we know of, to reduce one of the complexities and the harm it causes.

Where by "force" I mean of course:

force = time = money and/or "sweat"

as if they attribute magical

as if they attribute magical qualities to cutting and pasting text, without themselves being responsible for effects on the graph

Ah! I can sooo much relate to this experience of yours.


What's a non-graphical programming language, anyway?

I find it interesting that "graphical" languages are intimately associated with boxes-and-arrows diagrams, and when one brings them up, the discussion often shifts to the question "can boxes-and-arrows really be better than text?"

I'd like to propose a different point of view: all programming languages (that I know of) are graphical, and text-based languages are just a subset of them, restricted to the sequential arrangement of a fixed set of symbols.

Graphical language then no longer means abandoning the proven technique of text-based syntax, but rather freeing ourselves from its apparent limitations, and asking: "if text is already graphical, how can we augment or combine it with other forms of graphics?"

Just one example is the family of recent block- or tile-based languages: Scratch, Waterbear, and Blockly. I think they have, or could have, tremendous benefits for new or not-natural-born programmers, because they eliminate all syntax errors. But they are not boxes-and-arrows languages; rather, they are very close to text-based languages.

What I want: the ability to use text and/or other, richer graphics, as it best fits the program. Imagine if the mathematicians of old had been restricted to the use of ASCII!

Just a subset

all programming languages (that I know of) are graphical, and text-based languages are just a subset of them, restricted to the sequential arrangement of a fixed set of symbols

You present a value judgement there. Some restrictions are liberating. Is text one of them? Would graphics be better modeled as a bi-directional view on text, provided by a user-agent?

But you bring up another valuable point: "graphical" languages are intimately associated with boxes-and-arrows diagrams. There are strong connotations that graphical programming = structured editing. But with physics-inspired programming abstractions, there isn't any reason this needs to be true. Proximity and locality in a multi-dimensional space can easily guide interaction between elements, i.e. more like a game or MOO.

Destroy preconceptions and prejudices. And don't forget to kill your darlings. (Some keep rising from the ashes, stronger. Be sure to kill them many times.)

I like to use "graphical

I like to use "graphical programming language" to mean a graphical encoding of what are basically vanilla text-based abstractions. But then we can further divide that into block/tile, diagrammatic, spreadsheet, and dataflow paradigms. Iconic would be an alternative to graphical, and then some languages are actually based on animated metaphors (ToonTalk).

What you probably want is a rich text programming language like Fortress (at least for reading). If you want rich text input, pen-based languages offer the most hope but that field is kind of dead.

Applying visual computing to advance software engineering

I may be able to contribute insight to your community as to the possible future of graphical programming, and generally of applying visual computing to advance software engineering into a true systems engineering discipline. Much of the industry's effort over the years has gone into replacing text syntax with graphical syntax, thereby polluting the possibilities for advancing language semantics toward a true paradigm shift for modeling and executing general systems in software.

Also, most graphical language research has ignored the shift in computing that now includes both parallel and concurrent interactions across both cores and nodes, as we transition into manycore processors and an Internet of Things (IoT). Any practical graphical approach needs to take this functional integration and global scale into account. For a list of such requirements, I offer up the link. I'd be glad to offer insights into an advanced graphical programming architecture, having spent many years in first-principles research down this science rabbit hole.

any interesting tidbits to offer?

Insight is generally welcome. Thanks for the link, but I am often a lazy man. :-) I would rather pick a single topic and unravel a thread of ideas. In your work, is there any problem you see? Those are often good places to start a discussion: problems are interesting.

The system I altered slightly in '93 was functional with immutable dataflow, and had a goal of parallel and concurrent execution (specifically aiming to settle bank books: they were thinking big). Actually, you could update values to provide new input, which invalidated anything downstream that depended on them. It was lazy and evaluated only the parts needed to yield the outputs requested. So sometimes folks do think about the issues you noted. Its designers (I was not one) had previously worked on hardware CAD systems, with a dataflow signal-propagation perspective I admired.

(I don't think they recall me fondly though. I left after a year to land a job at Apple that had vanished earlier in a late 92 hiring freeze. Made me a villain.)

Drakon Editor

I see this link was not posted yet -

In particular, check the PDF detailing the rules of using Drakon. Nice.

This is great

I particularly love pages 13 and 25, but more seriously, it's cool to see the amount of attention they paid to details and keeping the layout unambiguous.


Looks like a minor revision of IBM flowcharts, although a lot of attention was paid to establishing good stylistic/pragmatic rules.

Graphical perspectives of software systems

Single topic? How about: what are the fundamental perspectives for expressing software systems graphically? There have been countless versions of data-flow languages over the years, starting with IBM "flowcharts," with fewer attempts to express control-flow, such as DRAKON. But if one stood back really far, could there be a fundamental set of graphical schemata that could interact to express all software? If so, how would these 'expressions' be 'represented' as source?

maybe a narrower single topic?

Sandy Klausner: How about what are the fundamental perspectives for expressing software systems graphically?

Scope here seems too broad. By analogy, in the novel Zen and the Art of Motorcycle Maintenance, a rhetoric teacher aims to help students write essays by narrowing topics. Where they would wallow if writing about "America", students had no problem writing about a thumb on one of their own hands. The teacher assigns one student the task of writing about one brick (a specific one) on the face of the opera building. The narrower the scope, the more free we are to say exactly what we see, and to do so accurately, without needing to agree with what others have said before.

In Von Neumann architectures we use now, there's something graph-like happening in circuits implementing semantics of our code. If an application also has a graph-like structure, it's not totally alien to use a graph oriented language because some of the structure is preserved across levels, and doesn't become totally wrong somewhere. But if an app had very different semantics -- say a fancy mathematical schema -- there might be a poor fit with graphs, and the impedance mismatch might lose compared to another approach. So I guess expressing software systems graphically is sometimes an uphill battle. Plans with universal scope get into trouble.

In the early 80's, I started spending a lot of time in library stacks researching what it meant to represent information graphically -- suppose you wanted to invent an artificial graphical language mediated by computers? -- and I stopped going to classes until I flunked out, because classes were dull in comparison. (I went back and finished a CS degree a few years later.) When folks asked me what I was doing and I explained, they screwed up their faces and asked, "Are you a grad student?" No, that would have been better. Immediately applicable research, close to what happens now, is a safer course to pursue.

Beyond Von Neumann architecture

The Von Neumann architecture has carried the industry for over 65 years now, but it is increasingly challenged to effectively program multicore and manycore technologies. We urgently need to collectively understand core semantics above the level of an individual processing unit, and to question whether text programming languages can effectively adapt to meet these exponentially growing parallelism and concurrency complexities. Perhaps a graphical language approach could better coordinate the necessary multiple programming paradigms through a finite set of system perspectives (schemata)?

As you experienced early in your career, taking an unbeaten path that does not produce immediately applicable research results is certainly risky, but perhaps the industry just hasn't invested sufficient resources into alternative ways to bind our 'minds' to 'machines' that could possibly take us beyond the Von Neumann architecture. I would be glad to turn this 'abstract' topic into a 'concrete' architectural discussion, where universal scope of applicability might just not "get into trouble."


The opposite occurred for logic design for ASICs and FPGAs.

In the '80s - early '90s, designs were often described as electric circuits with gates, wires, blocks, using graphical editors.

Nowadays, the prevalent method is to use text-based languages (VHDL, Verilog, SystemC...) where every assignment or "paragraph" (= process) executes concurrently.

The transition must have been difficult for people used to connecting actual hardware components on boards, but text-based languages eventually won for many reasons, among which I could cite:
- Faster entry, code reuse, copy/pasting
- Information density
- Easier manipulation of variable-width and repeated constructs (busses, vectors...)
- Higher-level abstractions, for example IF/ELSE constructs
- Portable, non-proprietary file formats

To compete effectively against text, the graphical methods needed to offer several representations: connected blocks for high-level connections, state diagrams for state machines, separate tables for constant arrays... Much effort just to catch up with plain text.

Text entry can be used even for _extremely_ concurrent programs.
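The HDL semantics mentioned above — every assignment "executes concurrently" — can be illustrated in a few lines of ordinary text code. This is a toy sketch of synchronous two-phase update, not real VHDL/Verilog; all names are invented:

```python
# Toy model of concurrent HDL-style signal assignment: every
# right-hand side reads the PREVIOUS cycle's values, and all
# signals update together at the clock edge (two-phase commit).

def clock_step(signals, assignments):
    """One synchronous step: evaluate all RHSs on the old values,
    then commit all new values at once."""
    new = {name: fn(signals) for name, fn in assignments.items()}
    signals.update(new)
    return signals

# A two-signal swap: a and b exchange values every cycle, which
# only works because the assignments are truly concurrent --
# sequential execution would overwrite one value before reading it.
sigs = {"a": 0, "b": 1}
rules = {"a": lambda s: s["b"], "b": lambda s: s["a"]}
clock_step(sigs, rules)
print(sigs)  # {'a': 1, 'b': 0}
```

The swap example is the classic demonstration: in a sequential language you need a temporary variable, while under concurrent-assignment semantics the order of the assignments is irrelevant.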

My own experience...

Graphical programming languages were one of my professors' pet research areas in college, so for a semester he had us doing projects in one "graphical language" after another.

My professor drew a different conclusion, but my own was that the visual systems in our brains, while capable of taking in large amounts of detail, are very concrete and not well suited to dealing with abstractions, whereas the language parts of our brains are well suited to abstractions. That is probably why our successful programming systems have been language-based rather than chart-based.

I remember just two of these "languages" that were marginally successful.

There was a system equivalent-by-design to Pascal, that basically rendered code as a flow-chart, with different parts of the flow-chart having a different background color to represent variable scopes. Procedure calls were represented by lines decorated with argument values, and returns represented by lines decorated with return (& var) values.

I found it usable, but it presented no cognitive advantage over Pascal. Indeed, in order to use it, I found myself "thinking" the Pascal code, using the language parts of my brain rather than the visual.

There also was a game wherein you programmed a software "robot" to navigate a hazardous virtual environment by laying out grids of tiles and drawing lines between them. It had two kinds of lines joining tiles, one representing flowchart-like flow of control and the other representing functional flow of information. Whenever a "value box" (variable) changed, the transitive closure of all the information lines that flowed from it would change instantly. The point-of-control followed the control lines and moved by only one tile per step. There was an abstraction mechanism where you could use separate grids of tiles as the definition for a new tile type, effectively making subroutine calls.

It worked reasonably well and was entertaining, although the problem domain was specialized. It was the only language we encountered that semester where the non-programmers (a "control group" of psych 104 students) did nearly as well as the programmers - although that might have been because the game was engaging. Watching the little robot on the screen run the mazes and deal with the traps and creatures was entertaining and gave people instant feedback about how the programs were working or not working, so that was a huge interface win.

Ordinary people doing well with it, though, got me to look at this latter system really hard. I derived a toy programming language from it and compared it to existing languages. And found it very, very limited by comparison, but it had one interesting feature. The cascade of information when a variable changed was pure-functional. So the "toy" programming language derived from it wound up having "functions" and "procedures" that were very fundamentally different. "Functions" were pure-and-simple expressions in terms of inputs, with no notion of flow-of-control or sequence. "Procedures" on the other hand had flow-of-control and sequence and possibly side-effects. Functions could be called by procedures or functions, whereas procedures could only be called by other procedures.
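The function/procedure split described above can be made concrete with a small sketch. This is my own illustration under stated assumptions, not the toy language itself; the decorators and names are invented. "Functions" are pure expressions of their inputs, "procedures" may sequence side effects, and calling a procedure from inside a function is rejected:

```python
# Sketch of the rule above: functions may be called by functions
# or procedures, but procedures may only be called by procedures.
# A module-level counter tracks whether a pure function is on the
# call stack, so a procedure call from inside one can be caught.

import functools

_in_function = 0  # depth of pure-function calls in progress

def function(fn):
    """Mark fn as a pure 'function': an expression of its inputs."""
    @functools.wraps(fn)
    def wrapper(*args):
        global _in_function
        _in_function += 1
        try:
            return fn(*args)
        finally:
            _in_function -= 1
    return wrapper

def procedure(fn):
    """Mark fn as a 'procedure': sequenced, possibly side-effecting."""
    @functools.wraps(fn)
    def wrapper(*args):
        if _in_function:
            raise RuntimeError(
                f"procedure {fn.__name__} called from a function")
        return fn(*args)
    return wrapper

@function
def area(w, h):          # pure: no flow-of-control, no side effects
    return w * h

log = []

@procedure
def record(x):           # side-effecting: appends to a log
    log.append(x)

record(area(3, 4))       # fine: a procedure calling a function
print(log)               # [12]

@function
def bad(x):
    record(x)            # illegal: a function calling a procedure

try:
    bad(1)
except RuntimeError as e:
    print(e)
```

Here the rule is enforced at run time; the original toy language could presumably enforce it statically, since the function/procedure distinction was part of each definition.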

Anyway, that's what I remember about visual languages.


Beyond Von Neumann architecture

Ray, regarding effectively dealing with abstractions, one could argue that words, concatenated into acronyms, may not be the optimal method to capture semantics of a set of concepts, especially shared across natural languages. In other words, system expression challenges go well beyond binding our mental concepts to machines, but must also bind knowledge between people on a global basis. This is especially true for achieving much higher levels of software reuse. With this line of reasoning, perhaps a ‘systems language’ could augment text to enhance our collective ability to directly capture, map and execute human intent represented in a machine through multiple interconnected graphical perspectives. By fully applying visual computing to software engineering itself, I contend that virtually all systems might be captured across seven fundamental schemata as follows:

Topic Map - Graph substrate composed of topics (reified concepts) and their referenced resources

Collaboration - Directory of contextualized software components and their dependencies with other communities

Genealogy - Ontology schema of abstract concepts and concrete templates

Composition - Parts-of-a-whole schema between collections of objects/records

Network - Links between objects that carry messages

Image - Graphical user interface

Behavior - Process-flow, control-flow, and finite state machine (FSM)

A system compiled from such knowledge capture could be processed under two models of computation (MoC): Transaction, for dealing with the complexity of parallel programming, and Real Time, for dealing with the complexity of preemptive and concurrent programming.

Functions and procedures


As an aside, the manner in which functions and procedures interacted in the 'toy' programming language should be a fundamental constraint built into any systems language. However, there are special cases where abstract state changes could be declared in a function; the development environment should automatically detect such constructs and make them explicit to the developer(s). Bertrand Meyer captured this intrinsic constraint in his Eiffel programming language. Too bad it wasn't picked up by mainstream language inventors.
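The constraint being discussed is in the spirit of Eiffel's command-query separation: queries (functions) answer questions without changing state, commands (procedures) change state without returning results. A development tool could flag a "function" that mutates state, as sketched below. This is a hypothetical illustration, not Eiffel; the decorator and class names are invented.

```python
# Sketch of automatic detection of state changes in a declared
# query: snapshot the object's attributes before the call and
# compare afterwards, flagging any mutation to the developer.

import copy
import functools

def query(method):
    """Mark a method as a side-effect-free query and verify it."""
    @functools.wraps(method)
    def wrapper(self, *args):
        before = copy.deepcopy(self.__dict__)
        result = method(self, *args)
        if self.__dict__ != before:
            raise RuntimeError(f"query {method.__name__} changed state")
        return result
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance

    @query
    def interest(self, rate):    # honest query: reads state only
        return self.balance * rate

    @query
    def withdraw(self, amount):  # mislabeled: actually a command
        self.balance -= amount
        return self.balance

acct = Account(100.0)
print(acct.interest(0.05))       # 5.0
try:
    acct.withdraw(10.0)          # detected and reported
except RuntimeError as e:
    print(e)
```

Note this run-time check reports the violation after the mutation has happened; a real development environment would want the static analysis the comment above calls for, so the construct is surfaced before the code ever runs.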