The future of live programming

I'm at a crossroads in my research...my APX demo at Strange Loop went well, and I learned a lot about live programming in the process. Hancock's metaphor of hitting a target with a water hose, and his steady frame (the visual information that guides one's aim), are very much the key and must be the focus of this work. Bret Victor's ideas in his learnable programming essay are nice and useful, but Chris Hancock provides the underlying reason why they are so. In particular, scrubbing is a huge complement to live feedback, but so is direct manipulation when it can be managed, as is the ability to abstract from a concrete implementation with concrete values into a general implementation. It's all about guided aiming.

On the one hand, what we have now is already kind of useful: just having access to the execution while editing, so you can inspect anything you want at any time, is a big advance over dead programming. But on the other hand, the more powerful features, like scrubbing, are limited to constants and simple abstractions that are easily placed on a continuum. They are not useful in general: most abstractions are not easily placed on a continuum, and anyway have effects and outputs that are not intrinsically visual. If programming is to be revolutionized, we have to break out of that "2D box" into general usefulness.

So there is a lot of work that can provide some inspiration on where to go now:

  • Alexander Repenning's Conversational Programming. Basically, what if code completion could be augmented by execution context? Then you could more easily go from the "program you have" to the "program you want." It is still very visual, but it doesn't seem to be a very big stretch to make it general, leading us to...
  • Joel Galenson's CodeHint system, which provides the results of a Java method call in the code completion menu, allowing for more informed selection.
  • Gilad Bracha also argues that methods should be made live during programming; that is, development should be informed by live executions. I used to think that this wouldn't be good enough, but I'm now coming around to this view, at least partially (incremental re-execution can make for better feedback loops, and managed time makes reactivity easier and allows for time travel/strobing, but none of that is strictly necessary...). Heck, if you have values, you don't even need types to get good code completion; you can even get better code completion, since you can inspect a specific run-time state! (A small sketch of this follows the list.)
  • Conal Elliott's tangible functional programming, which automatically creates tangible interfaces for values (including functions) while retaining their abstractness.
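
To make value-informed completion concrete, here is a minimal sketch in Scala, with candidate members modeled as plain name/function pairs; this is not CodeHint's or Bracha's actual machinery, just the enumerate-and-evaluate core of the idea:

    object LiveCompletion {
      // Hypothetical completion candidates on Char, stood in for by real
      // Scala Char methods.
      val charMembers: List[(String, Char => Any)] = List(
        "isAlpha"        -> ((c: Char) => c.isLetter),
        "isAlphaNumeric" -> ((c: Char) => c.isLetterOrDigit),
        "isDigit"        -> ((c: Char) => c.isDigit),
        "toUpper"        -> ((c: Char) => c.toUpper)
      )

      // Evaluate each candidate on the live value so the completion menu
      // can display "name = result" instead of just "name".
      def withResults[A](liveValue: A, members: List[(String, A => Any)]): List[String] =
        members.map { case (name, f) => s"$name = ${f(liveValue)}" }

      def main(args: Array[String]): Unit =
        // x is 'f' in the current execution context.
        withResults('f', charMembers).foreach(println)
    }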

One aspect of APX that is kind of cool is the ability to manipulate certain values in the visual view, and to use that to create abstraction-specific graphical interfaces that are used during programming; e.g. adjusting the velocity of a ball as a vector directly in the editor. Nice, but I think this notion of graphical programmer interfaces (GPIs) can be applied more generally, with the caveat that these interfaces feed on specific execution contexts. Text doesn't go away, since text is abstract and concise, which is necessary for building scalable programs. But having graphical assists while editing and debugging code is quite useful.

So here are some driving examples I've thought about so far to guide the next step:

  • Consider writing a lexer. Given a variable x whose value is 'f', you want to build a condition that is true on 'f'. You bring up the code completion menu for members on x, and specify that you want only those members that evaluate to 'true', leaving you with a smaller set of members to choose from, including "isNonSpace", "isAlphaNumeric", and "isAlpha".
  • Still writing the lexer, you have the string str whose value is "foo+ bar"; you bring up code completion on str and select "count". Now you go to fill in count's argument, a predicate. Depending on which predicate you choose, count returns a different value; if you scrub the desired value of count to 3, you get fewer choices for this predicate, including "isAlphaNumeric" and "isAlpha". (Alternatively, we could figure out how to do this in one action, since you might not want to commit to "count" until you found out it had an argument you could use.) This narrowing idea is sketched after the list.
  • Consider accessing an element of a dictionary: code completion for the key can show the literal values of the keys (and only those keys actually in the map), along with the values they evaluate to. After selecting a desired key, you can then "pick" some abstract expression that evaluates to that key's value, thus making the code more abstract (we are programming against specific execution contexts, but we are still programming for the abstract!).
  • Substring can let you directly select the part of an input string that you want. You can then abstract those indices, if necessary, by finding expressions that compute them.
  • As a simplification of the typestate problem, code completion on a file shows "open" if the file is not open in the current execution state, or "close" if it is. This is of course not "safe" for all execution contexts, but it does allow one execution context to narrow the options (and it has to be safe for at least that context).
  • When opening a file from a string, a file picker can come up to specify that string (this can be abstracted later as desired).
  • Transforming a text file into something the program can use can be done with the aid of a GPI that knows what the text file is. So say you get:
    1.23,  45.6, 7.0
    5.734, 56.1, 0.66
    10,    0.8,  1.2
    

    The GPI can allow you to split the text into rows via newlines (just select a visible NL character) and the rows into columns via the comma (select the comma), and it knows that the toDouble API will work for any column (and that other string conversions won't). This leads to a very efficient and safe file-transformation process, with the caveat that it applies to only one execution context; this check is also sketched below.
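
Here is a minimal sketch of the narrowing idea from the first two lexer examples, assuming candidates can be modeled as name/function pairs and evaluated against the live context (member names like isAlphaNumeric above belong to a hypothetical API; real Scala Char methods stand in for them here, so the scrubbed count target below is 6 rather than the example's 3):

    object NarrowingCompletion {
      val predicates: List[(String, Char => Boolean)] = List(
        "isAlpha"        -> ((c: Char) => c.isLetter),
        "isAlphaNumeric" -> ((c: Char) => c.isLetterOrDigit),
        "isDigit"        -> ((c: Char) => c.isDigit),
        "isWhitespace"   -> ((c: Char) => c.isWhitespace)
      )

      // Keep only the candidates whose evaluation in the live context
      // equals the value the programmer scrubbed to.
      def narrow[A, B](liveValue: A, target: B, cands: List[(String, A => B)]): List[String] =
        cands.collect { case (name, f) if f(liveValue) == target => name }

      def main(args: Array[String]): Unit = {
        // x is 'f': which predicates evaluate to true on it?
        println(narrow('f', true, predicates))  // List(isAlpha, isAlphaNumeric)

        // str is "foo+ bar": which predicates make str.count(_) equal 6?
        val str = "foo+ bar"
        val countCands = predicates.map { case (n, p) => n -> ((s: String) => s.count(p)) }
        println(narrow(str, 6, countCands))     // List(isAlpha, isAlphaNumeric)
      }
    }

And a sketch of the file-transformation GPI's safety check, under the assumption that a conversion "works" when it succeeds on every cell of a column in this one execution context:

    import scala.util.Try

    object CsvGpi {
      val text = "1.23,  45.6, 7.0\n5.734, 56.1, 0.66\n10,    0.8,  1.2"

      // Split into rows on the selected newline, and rows into columns on
      // the selected comma.
      val rows: List[List[String]] =
        text.split('\n').toList.map(_.split(',').map(_.trim).toList)
      val columns: List[List[String]] = rows.transpose

      // A conversion is offered for a column only if it succeeds on every cell.
      def worksOn(cells: List[String])(convert: String => Any): Boolean =
        cells.forall(c => Try(convert(c)).isSuccess)

      def main(args: Array[String]): Unit = {
        val conversions = List("toDouble" -> ((s: String) => s.toDouble),
                               "toInt"    -> ((s: String) => s.toInt))
        for ((name, f) <- conversions)
          println(s"$name works on all columns: ${columns.forall(col => worksOn(col)(f))}")
        // toDouble succeeds everywhere; toInt fails on "1.23", so it isn't offered.
      }
    }

Both are just enumerate-evaluate-filter loops over one execution context; the interesting design work is in the interfaces that drive them.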

This really needs to work in two ways. First, a GPI can show you where you can go, and provide feedback about the consequences of going there. Second, we often need to work backwards from a concrete value to come up with an abstract expression that produces it, e.g. x - 30 or x / 2 where x is 60. The latter is much more difficult since it leaves the API open (the former has committed to an API), and there could be multiple APIs involved! For example, say I know I need the number 42 for the specific execution context, but to get there, I need to plug "foo" into dictionary "d" and access the bar field of the result. Ugh. Even worse if we need to come up with a general arithmetic expression over very continuous inputs! Still, the connections needed to abstract a concrete value will often be simple (e.g. I need "42", x is just "42", use x), so perhaps it is something that gets used where it works but isn't always usable. Also, there is not a good story yet for defining abstractions in a live programming context...e.g. new kinds of objects, ones that can remember some state to be used somewhere remote, or a new method. Perhaps this is also part of the create-by-abstracting process, I'm not sure.
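
Here is a minimal sketch of that backward search: enumerate a small space of candidate expressions over the variables in scope and keep the ones that evaluate to the target in this one execution context. The candidate grammar (a variable, one hard-coded arithmetic step, one dictionary lookup) is purely illustrative; a real search would compose steps and enumerate constants, which is where it gets hard:

    object BackwardSearch {
      def candidates(scope: Map[String, Any]): List[(String, Any)] = {
        val vars: List[(String, Any)] = scope.toList
        // One illustrative arithmetic step over each integer in scope.
        val arith: List[(String, Any)] =
          vars.collect { case (n, v: Int) => (n, v) }
              .flatMap { case (n, v) => List((s"$n - 30", v - 30), (s"$n / 2", v / 2)) }
        // One lookup step into each dictionary in scope.
        val lookups: List[(String, Any)] = vars.flatMap {
          case (n, m: Map[_, _]) => m.toList.map { case (k, v) => (s"""$n("$k")""", v) }
          case _                 => Nil
        }
        vars ++ arith ++ lookups
      }

      // All enumerated expressions that produce the target in this context.
      def expressionsFor(target: Any, scope: Map[String, Any]): List[String] =
        candidates(scope).collect { case (expr, v) if v == target => expr }

      def main(args: Array[String]): Unit = {
        val scope = Map("x" -> 60, "d" -> Map("foo" -> 42))
        println(expressionsFor(30, scope)) // List(x - 30, x / 2)
        println(expressionsFor(42, scope)) // List(d("foo"))
      }
    }

Note that the ambiguity problem shows up immediately: both x - 30 and x / 2 produce 30 in this context, and only the programmer knows which abstraction was meant.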

(Ok, I wrote much more than expected, apologies!)


Is it too low level?

I suspect that the tools being examined here are too low level: examining values live, for example, is useful and an extension of old debugging technology, but it seems to me to miss the point.

Today we have strong static type systems. If we need any "liveness", it needs to be much more abstract than applying methods to values: for example, seeing how the types fit together. Something like this was done for OCaml exception handling in Emacs, if I recall: one could visualise the "scope" of an exception to see how far it could be thrown.

Or let's suppose you have a C++ template and try to apply it to some class type which fails to provide all the required methods: one could visually display the methods both required and supplied in green, those missing from the argument in red, and those provided by the argument but not used in yellow.
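
For what it's worth, that classification is just set arithmetic over method names; a tiny hypothetical sketch (the names are invented for illustration, and the hard part is of course extracting the required set from the template body, not the coloring):

    object TemplateCheck {
      // required: what the template needs; supplied: what the argument class has.
      def classify(required: Set[String], supplied: Set[String]): Unit = {
        (required & supplied).foreach(m => println(s"green:  $m"))  // required and present
        (required -- supplied).foreach(m => println(s"red:    $m")) // required but missing
        (supplied -- required).foreach(m => println(s"yellow: $m")) // present but unused
      }

      def main(args: Array[String]): Unit =
        classify(required = Set("begin", "end", "size"),
                 supplied = Set("begin", "end", "push_back"))
    }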

So perhaps one way forward is to use the "value level" liveness tools inside a compiler, where the higher-level abstractions of the language, such as types, are represented as values. I would note that basic support for this is already available in, for example, Microsoft's Visual Studio.

Also note that your comment about live development of abstractions may have missed a tool I have heard about, in which one starts with a live categorical construction and then factors it. Perhaps someone else can provide a reference.

Finally, I should comment that my Felix system was designed to provide a "chips and wires" programming model. Some constructions are intuitive, such as pipelines. But more complex constructions are very laborious, because you have to construct "chips" and "wires" and then plug them together with active sequential code, using names for the connection points (the ends of the wires and the pins on the chips that the wires connect to).

At least in that model, a "live" programming method would be natural. You would be able to graphically pick up components, plug them together, plug in a data source, and watch the output.

So at least part of the problem with "live programming" is probably that trying to make crap like Java "live" is not going to work, because the programming model is just not suitable.

Programming is hard mentally

Programming is hard mentally because it is too abstract, which is also why it is so powerful. So programming with concrete executions can do a lot to lessen that burden, even if we run the risk of creating non-abstract solutions (but we can provide tools for abstracting as well, so...).

I'm wary of augmenting abstract reasoning in some tangible way, just because it isn't immediately clear how that would work. What does it mean to "tangibly reason about type compatibility"? It sounds like we might be flinging Venn diagrams around or something :) Specific execution contexts already give you a lot.

Useful live programming requires reconsidering the programming model, and at least API design, to work. Patch-and-wire dataflow is a great example of a programming model that has been easy to make live but hasn't been very capable abstraction-wise. By augmenting a language that has full abstraction capabilities with live feedback, we can maybe get the best of both worlds.

walk before you run?

I've been appreciating the difficulty of visual analytics, and wonder if it's a prerequisite for generalized live programming. Basically, we spend a lot of time building interfaces for understanding and manipulating fairly shallow relationships (~indexed tables) and simple operators over them (SQL, maybe eventually Datalog). However, despite our love for abstraction, composition, first-class values, and control operators, we have to be incredibly careful about adding them in -- and often haven't.

So, I'm wondering in terms of say tangible functional programming / lambda calculus, what are some of the most basic forms that need to be truly solved first. E.g., even before arrows and loops, ADTs and conditionals?

So, I'm wondering in terms

So, I'm wondering in terms of say tangible functional programming / lambda calculus, what are some of the most basic forms that need to be truly solved first. E.g., even before arrows and loops, ADTs and conditionals?

It is always possible to come up with a core language specifically for this purpose, right? Why bother with the lambda calculus when you can design a new language specifically to support visual analytics or live programming? In my case, is it possible to design a language of knobs where a small change in the program's code = a small change in the program's output? Of course, there are things like neural nets, but we typically want user-defined abstractions...

I guess it really depends on what level you want to start at. By walking first, you can do it right, but you are creating a new world. By looking at higher levels of abstraction (e.g. procedures in existing paradigms), you can make some impact and gain some experience without creating a new world that seems utterly bizarre to everyone.

why lambda calculus

Yes, what I mean is that the future will benefit from trying to do it in a principled and incremental way. I'm positing that, before we can do something as complicated as lambdas, there are a lot of foundational questions about (1) "simple" features in isolation, like nested data types and fixed (second-class) operators such as filters, and (2) how they compose. Basically, most features we take for granted for building most useful things seem... hard.

Leaping (as many are) to modern languages where the operational semantics are exposed and there are all sorts of operators makes sense because that's a good chunk of the code out there. However, I'm increasingly viewing this as a random walk with lots of dead ends. (That's fine -- Darwinism wins in practice -- but eyes wide open and all that.)

Perceptrons provide a nice

Perceptrons provide a nice analogy to the low-level case. They appear at first as new kinds of NAND gates, just another model of computing, but what makes them special is that their functions can be learned via training. We might not ever be able to do lambdas in that case; they might just not work! But there are an infinite number of computational models out there...one might provide something like procedural abstraction while still being trainable (for the ML case).

For my problem, is it possible to come up with a computational model that is designed for the human rather than the computer or logic? My hypothesis is that continuity in the encoding is key (small change in code = small change in result; basically, can we make something that is friendly to humans doing gradient descent manually?), but this is probably not achievable in practice (the use of an abstraction is never gradual). So rather than searching for a solution at a low level, perhaps we start from a high level, make modifications to the abstractions that are useful, and then generalize down to a low level when we can. This is a general problem-solving technique: we can start from the bottom and construct our way up, or we can start from the top, make changes, and destruct our way down; usually we wind up doing both!

Design is often a random walk; there is no science to really guide you. You just try things, make observations, try new things based on those observations, and repeat, and hopefully you come across something that works.

Programming by example

I think Programming by Example (PbE) is an important keyword, given the kind of interactions you describe.

One thing that is not clear from reading your descriptions alone (I'm generally not familiar with the works mentioned) is: what is the user interface for producing and retrieving information about the runtime execution contexts you are discussing? A first natural idea is to write unit tests on the side and have them continuously executed to guide code inference as you suggest, but you also seem interested in other approaches to constraining the code design space (e.g. the typestate example) that I suppose would have a different presentation. Contracts? Predicates on execution traces?

Should the information about the runtime context be presented separately from the code (even if fairly close), as unit tests usually are, or be woven inside the code, as contracts/assertions usually are? I don't quite see how you could reliably weave information about specific execution contexts into the code (e.g. "assume (x = 'f')", "expect (out = 'F')"); usually we weave in information covering the whole desired input domain. Talking about specific runs inside the code itself might (1) scale poorly to talking about several runs at once and (2) impose an additional reasoning burden on the programmer (while editing the code I have to think about what the code does *in general*, but also about how it specializes to this specific case I'm adding annotations about).

It is kind of programming by

It is kind of programming by example, and it is definitely related to Jonathan's work on "example centric programming". I see it more as a process of starting concrete and then abstracting, in Bret Victor's style. So the unit test is the example, and you write code that computes a specific Z from a specific X and Y, where the goal is to have it work for non-specific bindings.

I'm not sure how much types really help. I mean, I need more context about 10 than just that it is 10...we need to know how 10 will be used and how it was produced. Types can give us some of that, but they have limitations. We can also just look at the code and see how 10 was produced and where 10 is going. So we want something richer than types that can work with specific values...like contracts. I guess in the future I'd like to look at something more powerful than types that can augment values with more semantic information.

Should the information about the runtime context be presented separately from the code (even if fairly close), as unit tests usually are, or be woven inside the code, as contracts/assertions usually are?

Runtime context can be projected onto abstract code via the IDE; it is impossible to think about such systems without substantial tooling. In APX right now, code can be dynamically associated with a specific execution context...but we need to do better at indicating which execution context is currently being projected onto the code.