Giant list of visual PLs

Visual Programming Languages is a huge collection of VPLs.

Personally, the ones I find most interesting are those closest to text as we know it, but which extend it beyond a plain list of characters. Charles Simonyi's projectional editor is an example of this.


Artificial languages

Hmm, I'm not sure. There seems to be a kind of "uncanny valley" with artificial textual languages - not just programming languages but also traditional math: algebra, logic, etc. All of these seem to look gruesome once you have more than a couple of lines. I can read hundreds of pages of natural language fairly effortlessly, but it's tedious and error-prone to interpret even one page of artificial language.

Showcases a computer developed in Minecraft

Sweetness.

I used to think that visual programming

I used to think that visual programming would be really neat, but I don't think so any more. This list demonstrated why.

True, a few of them applied in specific domains that are so connection-heavy or flow-heavy that they offered an advantage (e.g., TextIt). Those seemed interesting.

I won't pick on the ones that were meant to teach kids programming, either.

But most of these languages seem to trace a life path from "Wouldn't it be neat to program without writing code?" to "Are we missing X? We'll add another icon (or another visual idiom) for it" to "Rats, writing visual code is just like writing code, except we're missing the usual text tools."

Still, an interesting collection, and thank you for posting it.

neat but big UI footprint

I used to think that visual programming would be really neat, but

When I helped with one in the early '90s (not among those listed), I felt that way too when I started. Later I felt code in 2D grids used too much UI real estate, compared to text, for folks facile at using text-based programming languages. I converted examples from the graphical version to a plausible text encoding, which was invariably quite small in comparison.

I have said the following many times since. If the graph of relationships in code and data is its meaning, text-based programmers (good ones) can see the implicit graph despite the text format - at least until sheer volume of detail overwhelms. Graphical languages are irritatingly not very dense. But they make the graph explicit for folks who otherwise have trouble seeing how data flows.

Note that the graph is quite hard to see in some apps written in text-based languages, and there diagrams help, even if just in docs. In message-based protocols with async behavior, the sequence diagram is typically invisible at the programming-language level in text (because usually no one generates the code from a version where you can see the graph). Lately I've been reverse-engineering sequence diagrams from IPsec key-exchange logs of pfkey messages passed between multiple processes, just to get a holistic view of cause and effect, as context for new changes. The interaction of parts from static config and dynamic negotiation gets obfuscated when no single process sees all of a chain of events.

When a queued message doesn't say who sent it, you get control flow laundering that can hide distributed behavior. (Yes, pfkey messages have a pid in sadb_msg headers, but this is not enough when there are multiple embedded actors per OS process.) This is something for PL devs to think about when supporting async and/or concurrent language mechanisms.
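
As a minimal sketch of the remedy (the envelope format and all names here are hypothetical illustrations, not pfkey's actual layout): give every queued message an explicit sender identity below the process level, plus the id of the message that caused it, so a sequence diagram can be reconstructed from logs alone.

    import itertools, queue

    _ids = itertools.count(1)

    class Envelope:
        """Message wrapper carrying enough identity to rebuild causality."""
        def __init__(self, sender_pid, sender_actor, parent_id, payload):
            self.msg_id = next(_ids)          # unique id for this message
            self.sender_pid = sender_pid      # OS process (pfkey does carry this)
            self.sender_actor = sender_actor  # actor within the process (it doesn't)
            self.parent_id = parent_id        # message that caused this one
            self.payload = payload

    def log_edge(env):
        # One log line per message = one arrow in the reconstructed diagram.
        print(f"{env.parent_id} -> {env.msg_id} "
              f"[{env.sender_pid}/{env.sender_actor}] {env.payload}")

    q = queue.Queue()
    q.put(Envelope(101, "negotiator", parent_id=0, payload="ACQUIRE"))
    req = q.get()
    log_edge(req)
    # The reply names req.msg_id as its parent, so cause and effect survive
    # the queue even though no single process sees the whole chain.
    q.put(Envelope(202, "key-manager", parent_id=req.msg_id, payload="ADD"))
    log_edge(q.get())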

UI real estate in the future

I agree that graphical languages are not very 'dense', and this is a problem with modern UI. It often seems wasteful to me to use my little monitors (or even my pair of big ones) on managing graphical code.

But I also believe we're on the precipice of a UI revolution, involving augmented reality (via devices such as Meta's glasses and Thalmic's Myo).

Some time soon, screen real-estate will simply not be an issue. Every surface will be available. Further, at the very same time, keyboards will become much less convenient as a control interface. Virtual keyboards won't have the same haptics. We can emulate a chorded keyboard easily enough with a sensitive device like a Myo (e.g. tapping out against a surface). We can also use simple pen and paper as an input format to aid in the creation of a virtual artifact, though that's equally good for diagrams and ad-hoc notations. But physical or gameplay-like programming metaphors - wires, constructs, inventories, equipment (tools, lenses), explicit navigation, training robots (programming by example), etc. - will be a lot more natural.

I think that visual programming, and presenting users with inventories of visually distinguished reusable components, will become a very significant aspect of programming. One might quickly filter through a library of components using an ad-hoc mix of gestures, words/speech, contextual search (matching interfaces on existing components), and visual search of virtual shelves or bins.

Speech/NLP and machine

Speech/NLP and machine learning are going to play a larger role long before that happens. It could be that programming becomes just another conversation you have with your computer.

We'll see. I imagine speech

We'll see. I imagine speech will be used to help access resources and components (as I mentioned). But I doubt speech and NLP will play a much larger role until we have (efficient) strong AI with a large context that can make sense of the half-baked, ill-considered, and imprecise concepts we speak naturally. And I don't believe that's just around the corner.

I do believe machine learning will play a role, e.g. in programming by example and in optimizing and stabilizing constrained forms of program search.

We don't need strong AI to

We don't need strong AI to build slightly stronger dialogue systems. We are seeing it with Siri, Google Now, and Cortana... it feels like only a matter of time before it is applied to programming.

Hmmm

Some time soon, screen real-estate will simply not be an issue. Every surface will be available.

Do you really think screen real-estate is a significant part of the problem of low density graphical languages? Aren't we mostly to the point of diminishing returns already with screen real-estate?

Do you really think screen

Do you really think screen real-estate is a significant part of the problem of low density graphical languages?

Yes. I've explained this in greater detail at the augmented-programming Google group. Visual programming is hindered both by the challenge of keeping component inventories (or libraries) and a sufficient workspace on screen at the same time, and by various low-bandwidth user-input and navigation issues (point-and-click isn't very efficient).

Aren't we mostly to the point of diminishing returns already with screen real-estate?

No, not at all.

Quantitative differences add up to qualitative differences. But it is very typical for this to be nonlinear, with many plateaus and activation thresholds.

We might be reaching a point of diminishing returns for conventional uses of screens (such as display of text or boxed video). But an activation threshold creates opportunities for new uses that weren't very viable before - e.g. virtual widgets projected onto surfaces, heads-up displays, labeling and highlighting, automatic translation, immersion into multi-perspective video and games as an experience. And, I believe, visual programming.

The returns will change qualitatively, but they won't diminish for quite some time. After we have AR, it will become very valuable to project smaller, further, more precisely, and wider (peripheral vision), and with ever lower latencies and longer battery life.

Visual programming

When I think about the kind of visual programming you're describing, I can only ever think of this. I just can't help it.

Simulink and Labview

In control engineering, visual languages work out. I never really figured out why; probably it is because the components replace analog building blocks and people are comfortable with that mode of creating stuff. But I never picked up control engineering myself, lacking the EE background.

There may be other application domains where visual programming works, I wouldn't know, but I doubt you're going to build a C compiler this way.

The control/signal-processing DSL

The "analog building blocks" really aren't. The classic building blocks of control system design and signal processing are visual abstractions of operations that could be implemented in analog, digitally, or as a mechanical system of some kind. You can think of the block diagram language as an implementation-independent DSL for describing any kind of system that manipulates signals:

  • Primitives: Signals (time-ordered streams of values) + Blocks (Addition, Scaling, Integration, Differentiation)
  • Means of Combination: using a Signal to connect a Block output to a Block input
  • Means of Abstraction: aggregating a network of Blocks into a single Block characterized by a transfer function (e.g. a feedback loop becomes the single Block with transfer function G/(1+GH)).
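
To make the three layers concrete, here is a minimal sketch in Python (my own illustration, not Simulink's or any listed tool's API): a Block is just a rational transfer function in s, stored as numerator/denominator coefficient lists, and the combinators implement series composition and the G/(1+GH) feedback rule from the list above.

    def poly_mul(a, b):
        """Multiply polynomials given as coefficient lists (highest degree first)."""
        out = [0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                out[i + j] += x * y
        return out

    def poly_add(a, b):
        n = max(len(a), len(b))
        a = [0] * (n - len(a)) + a
        b = [0] * (n - len(b)) + b
        return [x + y for x, y in zip(a, b)]

    class Block:
        """A block characterized by the transfer function num(s)/den(s)."""
        def __init__(self, num, den):
            self.num, self.den = num, den

        def series(self, other):
            # Means of combination: this block's output feeds other's input.
            return Block(poly_mul(self.num, other.num),
                         poly_mul(self.den, other.den))

        def feedback(self, h):
            # Means of abstraction: a G/H loop collapses to one block, G/(1+GH).
            num = poly_mul(self.num, h.den)
            den = poly_add(poly_mul(self.den, h.den),
                           poly_mul(self.num, h.num))
            return Block(num, den)

    # An integrator 1/s with unity negative feedback becomes 1/(s + 1).
    integrator = Block([1], [1, 0])
    unity = Block([1], [1])
    closed = integrator.feedback(unity)
    print(closed.num, closed.den)   # -> [1] [1, 1]

The textual fallback described below fits the same mold: a problem-specific primitive is just a new Block built from a textually specified transfer function.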

That particular DSL has stuck around a long time because it turns out to be a convenient way to think about how signals are manipulated as they flow through the system: both the designed control system and the real-world system being controlled. While you can express the same ideas by just writing out a few lines of Laplace-domain equations, it's a lot easier to see a feedback loop (or nested feedback loops), or to understand all of the outputs affected by a particular input, by looking at a diagram.

Having said all of that, it's not uncommon to fall back on text for describing complex input/output transformations rather than trying to translate them into collections of block diagram primitives. This amounts to adding new, problem-specific primitives to the language. The simplest form of this is to create a new block with a textually-defined transfer function. More complex operations might be defined algorithmically (Simulink provides the capability to create "S-function" blocks that have a behavior defined by an internal Matlab function).

That much I know

I get the theory up to that point too. You should understand my remark as: "It remains remarkable that visual languages have worked out in control theory and not in other domains."

Except for the fact that they started with analog building blocks, I just don't really see why. And you would expect them to move past historical control theory at some point. But, as far as I know, they (usually?) don't. It solves their problem.

Not much control flow

A suggestion would be that this kind of signal processing does not have much control flow; it's all filter this, sum that, etc.

A category-theoretical treatment of signal-flow graphs

John Baez and Brendan Fong have been working on a category-theoretical/compositional treatment of the signal-processing visual language, based on the category FinRel_k, viewed as symmetric monoidal with direct sum as the tensor product. Among other things, this means that the graphical language of signal-flow graphs is recovered as the usual sort of string diagrams[1] for this monoidal category. See the following blog posts for links to videos, slide decks and papers: Overview and 'Network Theory I'.
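
For readers who don't know the category, a quick gloss (my summary of the standard definition, not quoted from the posts): the objects of FinRel_k are finite-dimensional vector spaces over a field k, a morphism is a linear relation (composed as ordinary relations), and the monoidal product is the direct sum:

    \[
      \mathrm{FinRel}_k(U, V) \;=\; \{\, R \subseteq U \oplus V \mid R \ \text{a linear subspace} \,\},
      \qquad
      U \otimes V \;:=\; U \oplus V .
    \]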

Personally, I agree with Keean Schupke that the lack of control flow makes visual treatment quite a bit easier.

[1] John Baez's 'Physics, Topology, Logic, and Computation: a Rosetta Stone' is an accessible introduction to monoidal categories and string diagrams, including applications to logic and computation.

Partial ordering

Though I've thought about the visual route, it never occurred to me to think of it as not writing code - just as a syntactic feature that might be useful for expressing smallish partial orderings without forcing them into strictly linear text.

I think visual programming

I think visual programming will be most useful in providing metaphors highlighting important connections in symbolic/textual code. For instance, visually lining up yield points in symmetric coroutines could be pretty useful, particularly for discovery.
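
As a rough illustration of the yield-point idea (using Python generators, which are asymmetric coroutines, as a stand-in for the symmetric ones mentioned above): each yield below is a transfer point, and the pairing between the producer's yield and the consumer's resume is exactly the connection a visual overlay could line up and highlight.

    def producer():
        for item in ("a", "b", "c"):
            yield item                  # transfer point: hand an item over

    def consumer(src):
        for item in src:                # matching resume point
            print("got", item)

    consumer(producer())                # -> got a / got b / got c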