Continuous feedback in PL

One of the tricks to creating a good user interface is to provide a continuous feedback loop between the user and the system. This can be done in many ways...live scrolling, live "scrubbing" of parameters via a slider, combo list, or whatever. Bringing this to PL is possible to an extent, but is limited by code that is basically non-continuous (to be described later!).

In math, we have the notion of a continuous function: a function whose outputs change a little bit when its inputs change a little bit. It is a prerequisite that the inputs also be able to "change a little bit"; i.e. are themselves continuous. This is a bit harsh, actually, and it is interesting to also consider discrete values that have some kind of order in a space (e.g. 1 ... 2) where going to an adjacent value is considered as the "small change." In any case, a continuous function can be augmented with excellent continuous feedback if the user is viewing its output while scrubbing its input (e.g. see Conal Elliott's Eros).

So we could treat code as a value "x" that is interpreted by a function "f", where "f(x)" gives us the output of the program defined by "x". If "x" were somehow a continuous value (or a discrete value in an ordered continuum), and "f" were a continuous function, then we'd have the ultimate live programming environment: programmers would simply move "x" around in its space according to how "f(x)" approaches or diverges from the desired output.
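
Roughly, as a toy sketch (hypothetical names, not any real system): treat the program as a value x and the interpreter as f, then "scrub" x while watching f(x). A continuous f rewards scrubbing; a branchy one does not.

    def f(x):
        """A continuous 'interpretation': the area of a circle of radius x."""
        return 3.14159 * x * x                 # drifts gently as x is scrubbed

    def g(x):
        """A discontinuous 'interpretation': which branch of a conditional runs."""
        return "big circle" if x > 10 else "small circle"

    for x in [9.8, 9.9, 10.0, 10.1, 10.2]:     # simulate dragging a slider
        print(x, round(f(x), 1), g(x))         # f changes a little; g flips at the threshold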

Of course, this isn't possible in general: code is hardly continuous in its value or interpretation. For small fragments of code, we can totally do this: e.g. scrubbing the position or color of a shape in a program. We can even scrub functions to a certain extent (e.g. shapes or colors in APX), but it is all quite limited to simple graphical/physical logic. Accordingly, most of the successful demos of live programming are simple UI examples. I wonder also if this is the real bottleneck that visual programming runs into: that it is unable to provide reasonable feedback for many kinds of programming tasks, not because of (or at least, not just because of) real estate issues, but because the abstractions used are not amenable to continuous feedback.

The way forward is clear but perhaps difficult: we need to design programming languages and abstractions that are more continuous. Not just for graphical/physical computations, but for everything. One vague idea here is to distinguish between abstraction selection, a discrete non-continuous process, and abstraction configuration, which should be made as continuous as possible. Any other thoughts?


"instrumentability" not continuity

"Continuity" seems implausible. Circumstantial evidence: (a) We build digital computation out of non-linear circuit components; (b) interesting cellular automata (and their "chaotic" responses to minor perturbations).

Perhaps what you are reaching for is closer to a concept of "instrumentability": the idea that at all scales (ideally) a program should be built from components that are easily and usefully connected to a real time, interactive "scope"?

Continuity is implausible at

Continuity is implausible at the macro level but is sometimes plausible at the micro level; I wonder if we can just push that level up a bit more than it is today, given that we know the property is very useful.

Instrumentability seems to be a new word, but I get what you mean. There is obviously huge value in being able to add (and remove!) pieces of a program without breaking it, allowing it to continue executing. Functional programming would appear to have an advantage in this regard (functions compose, imperative code requires setup).

We've done a bunch of work

We've done a bunch of work in Eve recently on resilient compiling/executing. Rather than blow up on compiler errors, we freeze the state of views which have errors and continue executing everything else. Once we have the IO controls fleshed out we will probably suspend IO on errors too, to avoid accidentally firing the missiles. I can link to code/docs tomorrow, but there isn't actually much to it - it's much easier to repair data-flow than control-flow.
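
The general shape of it, as a loose sketch (this isn't our actual code, just the idea of freezing erroring views while the rest of the flow keeps running):

    def evaluate(views, inputs, last_good):
        """views: dict of name -> function over the inputs; erroring views are
        frozen at their last good value instead of taking down the whole run."""
        results, errors = {}, {}
        for name, compute in views.items():
            try:
                results[name] = compute(inputs)
                last_good[name] = results[name]
            except Exception as e:
                errors[name] = e
                results[name] = last_good.get(name)   # frozen, not propagated
        return results, errors

    views = {
        "total":   lambda env: sum(env["xs"]),
        "average": lambda env: sum(env["xs"]) / len(env["xs"]),  # fails on []
    }
    cache = {}
    print(evaluate(views, {"xs": [1, 2, 3]}, cache))
    print(evaluate(views, {"xs": []}, cache))   # "average" freezes, "total" keeps going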

I've been doing this since

I've been doing this since 2007, at least. You are right: data-flow is easier, which is why I did it for SuperGlue first (see this video from 2007). But control flow is not impossible to deal with, nor is it particularly hard. APX is a good example of that: it is resilient to parse, type, and execution errors; it does not roll back side effects when an error state is detected; and it handles IO relatively well. We can compare features at FpW StrangeLoop :)

But technology isn't really the problem; it requires work for sure, but it is not our blocker. To quote Hancock again:

And yet, if making a live programming environment were simply a matter of adding “continuous feedback,” there would surely be many more live programming languages than we now have. As a thought experiment, imagine taking Logo (or Java, or BASIC, or LISP) as is, and attempting to build a live programming environment around it. Some parts of the problem are hard but solvable, e.g. keeping track of which pieces of code have been successfully parsed and are ready to run, and which ones haven’t. Others just don’t make sense: what does it mean for a line of Java or C, say a = a + 1, to start working as soon as I put it in? Perhaps I could test the line right away—many programming environments allow that. But the line doesn’t make much sense in isolation, and in the context of the whole program, it probably isn’t time to run it. In a sense, even if the programming environment is live, the code itself is not.

Your language's semantics have to support continuous feedback, which is why data-flow languages are commonly used: it's not really a technical problem...there are plenty of smart programmers out there...it is a design problem. So live environments are almost always data-flow based, but that brings problems with expressiveness. The hacky abstractions that we often come up with to make data flow more expressive often ruin its live programmability: they allow you to encode more, but you lose the feedback, the ability to water-hose your code toward the desired result, that comes with live programming.

Continuous ~= Small Deltas

I see this problem a lot when I'm writing transformations for program graphs, and most days I would sign a blank cheque for a live programming environment. This is a domain (as I'm sure you're familiar with) in which values are discrete, but quite complex. Interpreting a value generally means mapping it through some opaque and non-continuous g(f(x)). Normally this is a tool like Graphviz that attempts to visualise the value.

A normal approach to debugging a transformation is to generate a series of program graphs, preferably as close to each other as possible: i.e. applying the smallest step transformation between them. The idea here is to minimise the change in x in the hope that the change in f(x) is small enough to recognise. My standard approach is to compute a delta/diff between the two graphs and then use colour in the output to highlight that structure within the output graph.
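
As a rough sketch of that last step (made-up names, not my actual tooling): diff the edge sets of two program graphs and colour the changed edges in the Graphviz output.

    def graph_diff_dot(old_edges, new_edges):
        """old_edges/new_edges: sets of (src, dst) pairs; emit DOT with the delta coloured."""
        added   = new_edges - old_edges
        removed = old_edges - new_edges
        kept    = new_edges & old_edges
        lines = ["digraph delta {"]
        for a, b in sorted(kept):
            lines.append('  "%s" -> "%s";' % (a, b))
        for a, b in sorted(added):
            lines.append('  "%s" -> "%s" [color=green];' % (a, b))              # new structure
        for a, b in sorted(removed):
            lines.append('  "%s" -> "%s" [color=red, style=dashed];' % (a, b))  # removed structure
        lines.append("}")
        return "\n".join(lines)

    before = {("main", "parse"), ("parse", "eval")}
    after  = {("main", "parse"), ("parse", "typecheck"), ("typecheck", "eval")}
    print(graph_diff_dot(before, after))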

I would read your comments overall that "continuous" is a property that is close to "the thing that makes live programming work", but not an exact match. It is certainly true that if f(x) is continuous then it is easier to gain an understanding of how it behaves by interacting with it. I would hazard a guess that when f(x) is defined over a discrete and non-continuous (non-numerical?) domain the relevant property is something like "we can map f(x) through a non-monotonic measure g()", where g does not necessarily produce an output that behaves like a size, but rather the deltas between adjacent measurements are "small". Here it seems intuitive that "small" matches "easy to understand for the programmer", although demonstrating that experimentally would appear to be the trick.

and most days I would sign a

and most days I would sign a blank cheque for a live programming environment.

I should start a kickstarter then :)

I would read your comments overall that "continuous" is a property that is close to "the thing that makes live programming work", but not an exact match.

True, it is not an exact match, but a useful approximation. It is sort of the "ah-ha" moment when figuring out how to generalize live programming to a wider set of problems. Right now it is very narrow.

"x" doesn't have to be continuous. It just has to be scrubbable. So if there are only 10 choices for "x", even if they are very discrete, that is fine; I can just try them all! But if there are a 100 or 1000 choices for x, you want some kind of ordering for them based on behavior that is reflected in the output.

At the end of the day, I just want to figure out how to build a type checker via scrubbing. I know it's cliché, but it's totally non-obvious how that could be done, yet it seems achievable (since, as you mention, we can visualize graphs at least; we just can't write them very well).

Quite a juicy example

I like the goal of building a type checker via scrubbing; it seems to be quite a juicy and tempting fruit, although maybe not one that hangs very low.

What kind of dials and knobs do you see the user twisting? One that seems kind of easy in this case is time: judgements as a series of reductions that the user can play back and forth in. There seems to be at least one more difficult dial that would explore "bits of source code like this one". The "like this one" part is the property that needs to be quite smooth: small deltas on the input parse tree.

I'm not sure what the dials

I'm not sure what the dials are yet. Obviously they aren't numbers and are more like selected functions.

findability

Perhaps another way of phrasing it is that a hypothetical search would need to examine no more than 10 values to find an input close enough to the one we want? Ordered "x"s can be binary searched and continuous ones have a range of values that are close enough.
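
A small sketch of that search (everything here is hypothetical), assuming the candidates are ordered so that "close enough to the goal" flips from false to true somewhere along them:

    def scrub_search(candidates, interpret, close_enough):
        """Binary search an ordered candidate list for the first x whose output is close enough."""
        lo, hi = 0, len(candidates) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if close_enough(interpret(candidates[mid])):
                hi = mid          # close enough; look for an earlier hit
            else:
                lo = mid + 1
        return candidates[lo]

    # Made-up use: the smallest font size whose rendered width reaches 200px,
    # found in ~7 probes rather than by trying all 67 sizes.
    sizes = list(range(6, 73))
    print(scrub_search(sizes,
                       interpret=lambda s: s * 7,               # stand-in "renderer"
                       close_enough=lambda width: width >= 200))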

This sounds right.

This sounds right.

Expanding on this

After reading "Real-Time Programming and the Big Ideas of Computational Literacy" several months ago I got excited to the point where I pondered how dead programming is possible.

Naur's idea of programming as theory building is that when someone wishes to modify a program they need a theory of how it will respond to various changes.

If programming is theory building, then anything that makes it easier to develop a theory makes it faster to develop the program. To me the central cool part of live programming is learnability. Qualities that enhance learnability are continuity, lack of delay and simplicity of relationship between input and output. By pairing a programmer's action immediately with its result, the programmer can develop a theory of changes much more quickly. With time a person learns the shape of the output-space and how to move through it with their actions, reducing the need for feedback. (I think "steady frames" are the same idea as more learnable relationships)

So much of my time programming is spent trying to understand what is going on, that a language optimized for learnability would be a godsend. Maybe computational learning theory has a way to measure learning effort.

"Learnability" is a tricky

"Learnability" is a tricky word, as recent hackernews discussions (on Come Alive, I think) can attest to. I think timely feedback benefits learnability, but it benefits other things also. Bret tweeted something about wanting to reinvent the programming experience, I definitely can agree with that.

Theory is also one of those words I can't grok very well. How do people develop and manipulate theories in their head? Do we really mean hypotheses? And really, I have no idea what goes on in people's heads; I'm just looking for an experience that increases efficiency and creates that good adrenaline rush in your head (the blue happy fluid feeling vs. the red stressful lost feeling). There are definitely experimental cognitive psychologists who understand this better than me...I would love to frame this in terms of a law like Fitts' or Hick's law.

With time, a driver can learn how to navigate a city without GPS, but it is sure useful to still have it! I see live feedback as a fundamental augmentation of the programming experience that goes beyond learning, it shouldn't just be used as training wheels, though I guess that is a perfectly valid use case. But really, you are just training your head to do what the computer does for you during the learning process; i.e. you are learning how to do mental gymnastics when we really should be getting rid of that.

Bad analogy, I think. With

Bad analogy, I think.

With time, a driver can learn how to navigate a city without GPS, but it is sure useful to still have it!

Maybe it's just me, but I find GPS an active block to learning anything. With a map, one discovers where things are in relation to one another, with a GPS one follows instructions. Turn Left. Turn Right. Bear Left (Grrrr). Turn Around, You've Done It Wrong.

For me, the GPS is the "Wizard" of navigation. Fine if you only want to do it once, or you can't be bothered to think about it, but ultimately counter-productive and more expensive in terms of time spent learning to use the tool.

Live programming, to me, is more like fiddling with routes in Google Maps; an aid to visualisation.

Not meant as an analogy. It

Not meant as an analogy. It was just an example...new technologies are never seen as training wheels to perfect the old way; they change the old way forever.

I agree that feedback isn't

I agree that feedback isn't just training wheels and the goal isn't to do without it. The goal is to understand the structure of the state-space so that I can move through it intelligently.

As an example of what I mean by learning, suppose that I am designing an ecosystem for a simulationist game like Dwarf Fortress. I would like to understand the choices enough to make an interesting one. I start by making a plant and letting it spread across the landscape. Scrubbing the propagation rate variable lets me see how different rates of spread look. I can add an herbivore and now I can vary consumption rate, birth rate, starvation rate. I can scrub through them until I find a setting that is stable. Or better yet, I can work with the difference between propagation rate and consumption rate and the difference between birth rate and starvation rate, because these have a more direct relationship with the population dynamics. Since an ecosystem with only two species is boring I add a predator, but after playing around I see that it is too easy for it to either starve faster than it can catch prey, or to hunt the prey to extinction and then starve. Does limiting their maximum population density by making them very territorial help? No, because when there are too few prey they still hunt them to extinction, and when there are more they only reduce the population by a constant factor. What if I made territoriality proportional to hunger?
As I add species it gets more and more complex; some species associations are stable (like grass and an herbivore that suppresses woody shrubs) and some are unstable.
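
Something like this toy loop (all rates and names hypothetical) is what I imagine scrubbing over:

    def step(plants, herbivores, predators, r):
        """One tick of a crude plant/herbivore/predator model; r holds the scrubbable rates."""
        new_plants     = plants     + r["propagation"] * plants \
                                    - r["consumption"] * plants * herbivores
        new_herbivores = herbivores + r["birth"] * plants * herbivores \
                                    - r["starvation"] * herbivores \
                                    - r["predation"] * herbivores * predators
        new_predators  = predators  + r["predator_birth"] * herbivores * predators \
                                    - r["predator_starvation"] * predators
        return tuple(max(x, 0.0) for x in (new_plants, new_herbivores, new_predators))

    rates = {   # the dials a live environment would let me drag
        "propagation": 0.05, "consumption": 0.001,
        "birth": 0.0005, "starvation": 0.02, "predation": 0.002,
        "predator_birth": 0.0002, "predator_starvation": 0.03,
    }

    state = (100.0, 20.0, 5.0)
    for _ in range(200):
        state = step(*state, rates)
    print(state)   # stable, exploding, or extinct? nudge a rate and look again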

What I mean by theory is an understanding of the dynamics of this ecosystem, and how each species changes them, that is good enough that I can move it towards dynamics I like. It is more like a mental model than a hypothesis.

The reason I focus on learnability is that I want a way to quantify effort.
People unavoidably get expectations about what is going to happen next, and learning is the process of making those expectations match reality. Every time I am surprised and those expectations are broken, that indicates something I don't understand. Broken expectations are unpleasant and stressful, so when something behaves in a way that is easier to learn I feel much less lost and more relaxed.

Modularity

One of the key techniques, or even THE key technique, for large bodies of code is modularity: splitting the code up into small pieces such that each piece is specified against an interface and does not need to care about the rest of the program other than via the interface. How does that fit in with this? Perhaps rather than visualizing the whole program execution there needs to be some way to visualize the program execution from the perspective of a particular module. If the scope of the visualization is smaller there might be less of a need for continuity?

There is more to PL than

There is more to PL than just modularity. Think of editability as being orthogonal to, and an unappreciated stepsister of, modularity and even composability.

Also, if we do figure out how to do this really well, where the code is even differentiable, then machine learning can more easily take over... We are really far away from that, so code modularity is still quite important. But it's interesting to note that what could make coding easier for humans would also make it easier for machines to code.

As for visualizing execution from a module perspective, that is a good idea, and is where Gilad Bracha is going with his liveness work. But I think this makes state really difficult to work with; I mean, module execution is difficult to factor out from program execution, and we spend much of our time at the top anyway debugging inter-module interactions. To me, unit debugging is just testing a module with a smaller program. I wish there was a better story than that.

Take a look at...

...Chaudhuri, Gulwani and Lublinerman's POPL 2010 paper Continuity Analysis of Programs.

They give a static analysis for determining the continuity of programs which is sound with respect to the epsilon-delta definition of continuity, and implement it with an SMT solver.
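
For reference, the property that "sound" refers to here is the standard epsilon-delta one:

    % f is continuous at x: arbitrarily small output changes can be forced
    % by sufficiently small input changes.
    \forall \varepsilon > 0.\ \exists \delta > 0.\ \forall x'.\
      |x - x'| < \delta \implies |f(x) - f(x')| < \varepsilon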

Their approach can handle things like loops, conditionals, assignments and discrete datatypes (such as booleans), and they have some very impressive examples, like automatically establishing the continuity of Dijkstra's shortest-path algorithm, sorting routines, and knapsack algorithms.

I don't think they've considered applications to live programming, but (at least) Gulwani has worked a lot on end-user programming and programming-by-example (e.g., his work is the basis for the "flash fill" feature of Excel).

Quasi-related

Type providers purport to enable "information-rich" programming, and tangible functional programming provides a spatial denotation for every term. I've thought on-and-off for a while about what it means to provide a visually-interpretable component. E.g., what if type providers also provided concrete instantiations, and we used those for tangible programming? In the spirit of your idea, I can imagine that each such term, for free, should also expose its control points...

I've been revisiting Eros

I've been revisiting Eros lately, and quite agree about tangible functional programming. As for type providers, code completion provides a different but related type-directed search experience that I haven't been able to combine with scrubbing yet. But Repenning's conversational programming work definitely goes in that direction.