Bret Victor's Inventing on Principle

Bret Victor's talk has been making a huge splash on the net, so we might as well talk about it here. The first part of the talk describes a lot of live programming ideas, some of which we have explored before but have never demoed quite as well as he has. Note that his demo runs entirely on cherry-picked scenarios; there isn't a general system behind his work. Here are the interesting parts I recall from memory:

  • The first part simply discusses live editing of a discrete program that creates a static image: you re-execute the program after each edit and observe its new output in real time. This is easy enough to do in any language; as long as the program is small enough to execute quickly, the effect of each edit on the output appears seamless.
  • The second part deals with a stateful game, where he demonstrates time travel and projection to explore how changes in the code affect the result. This is much more interesting, though I think for the demo he is just recording the input stream and replaying the game (which is just one scene) with the recorded input each time he travels backward (with projection, you just fiddle with screen clearing); see the sketch after this list.
  • Observing the effects of programming a quicksort function in real time by specifying sample input. Very similar to Flogo II's live text concept, but loop variable values are displayed as nice tables.
  • Finally, Bret creates a custom animation through macro recording, going back in time to record additional movements. The composite of all the recordings, which occur over overlapping time segments, then becomes the final animation.
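
The record-and-replay trick in the second bullet can be sketched in a few lines (hypothetical types and names, a guess at the technique rather than Victor's actual code): the game is a pure step function, the input stream is recorded, and time travel is just re-folding the recorded inputs with whatever code is currently in the editor.

```haskell
-- Minimal record-and-replay sketch (hypothetical; not Victor's code).
-- The game is a pure step function; "time travel" is re-folding the
-- recorded inputs up to a chosen frame with whatever code is current.

data Input = Jump | MoveLeft | MoveRight | Idle deriving (Show)

data World = World { playerX :: Double, playerY :: Double, velY :: Double }
  deriving (Show)

initialWorld :: World
initialWorld = World 0 0 0

-- The tweakable constants live here; editing them and replaying the
-- same recorded inputs shows their effect on the whole run.
step :: Input -> World -> World
step input (World x y vy) =
  let vy' = case input of
              Jump | y <= 0 -> jumpSpeed
              _             -> vy - gravity
      y'  = max 0 (y + vy')
      x'  = case input of
              MoveLeft  -> x - 1
              MoveRight -> x + 1
              _         -> x
  in World x' y' vy'
  where
    gravity   = 0.5
    jumpSpeed = 6.0

-- Replay the recording up to frame n: this is time travel.
worldAt :: [Input] -> Int -> World
worldAt recording n = foldl (flip step) initialWorld (take n recording)

-- Projection ("shadows" of other frames) is replaying a range of frames
-- without clearing the earlier ones.
trail :: [Input] -> Int -> Int -> [World]
trail recording from to = map (worldAt recording) [from .. to]
```

Editing `step` (say, the `gravity` constant) and recomputing `trail` over the same recording is enough to show how a code change affects the entire recorded run.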


Transcript

Transcript.

I think it's an excellent video. But, as Sean says, the examples are cherry-picked. Playing with gravity and jump height, for example, won't scale very well to a larger stage, where it would impact progress through earlier and later obstacles. So, most of the time, developers will need to adjust the hole.

Still, abstracting over time and easy rewind are very useful.

But this is very impressive, and I do recommend watching the video. I've also enjoyed Bret Victor's work on worrydream, especially Magic Ink, which argues that `interaction` should be a resource of last resort.

algorithm animation

Re: “Observing the effects of programming a quicksort function in real time by specifying sample input.”

That sounds vaguely like Baecker's work on “algorithm animation” as described, for example, in Software Visualization for Debugging by Ron Baecker, Chris DiGiano, and Aaron Marcus, Jan 26, 1997 (the CACM Special Issue on Debugging and Software Visualization — see the PDF version for pictures, if you have access).

As Sean notes, almost all of

As Sean notes, almost all of these examples have been explored by the PL community. I think the neat thing here is 1) how often the right visualization is domain-specific and 2) the amount of code/instrumentation it takes.

E.g., impact / taint / slicing / etc. analyses are well-known techniques that take away implementation burden, but it's up to the domain specialist to say how that should visually appear in a video game. Instead of trying to do it all in the language, what hooks should be provided to the developer?
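
To make the question concrete, here is a rough sketch of what such a hook might look like (all names are hypothetical): the tooling supplies a generic analysis result, and the domain code supplies the rendering.

```haskell
-- Hypothetical sketch of a visualization hook: the tooling produces a
-- generic analysis result (here, a trace of values from an impact or
-- slicing analysis), and the domain specialist decides how it renders.

newtype Trace a = Trace [a]        -- values a slice/impact analysis collected

class VisualHook a where
  render :: Trace a -> [String]    -- stand-in for "draw something on screen"

-- A game developer might render an impact trace over positions
-- as a ghost/motion trail:
data PlayerPos = PlayerPos { px :: Int, py :: Int }

instance VisualHook PlayerPos where
  render (Trace ps) = [ "ghost sprite at " ++ show (px p, py p) | p <- ps ]

-- The tool only needs the hook, not any domain knowledge:
visualize :: VisualHook a => Trace a -> IO ()
visualize = mapM_ putStrLn . render
```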

As a general takeaway, I really like his view of space/time semantics as being underserved. E.g., for understanding the points-to information of a program, can we get the equivalent of the motion trail shadow?

Serving Time in a PL Jail

As a general takeaway, I really like his view of space/time semantics as being underserved. E.g., for understanding the points-to information of a program, can we get the equivalent of the motion trail shadow?

It is most interesting, I think, to ask how this is achieved within a program or paradigm rather than as an external process that analyzes a whitebox expression. What sort of discipline is necessary to support this feature in an application? What is the impact on modularity? FFI?

Reactive models seem well situated for the problem.

But to make it work within FRP still requires modeling the recording of signals and representing the program itself as a big switching behavior (so we can replace the program behavior and replay the signals). Further, we'd need to be careful about what we effect. An immediate-mode GUI would likely work well, but I doubt we could readily rewind a retained-mode GUI framework.
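
As a toy illustration of that requirement (not the API of any particular FRP library), we can model signals as time-stamped samples and express rewind as re-applying a possibly edited program to a recorded prefix of the input:

```haskell
-- Toy model of the record-and-replay requirement (not any particular
-- FRP library's API): a signal is a list of time-stamped samples, the
-- "program" maps an input signal to an output signal, and rewinding is
-- re-applying a (possibly edited) program to a recorded input prefix.

type Time     = Int
type Signal a = [(Time, a)]              -- samples, assumed sorted by time

type Program inp out = Signal inp -> Signal out

-- Recording just means keeping the input samples around.
record :: Signal inp -> Signal inp
record = id

-- Replay up to time t, possibly with a different program; this is what
-- "representing the program as a big switching behavior" buys us.
replayUpTo :: Time -> Program inp out -> Signal inp -> Signal out
replayUpTo t prog recorded = prog (takeWhile (\(t', _) -> t' <= t) recorded)
```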

I've been pursuing an interesting variation with RDP - the ability to anticipate the futures of signals. This makes it much easier to separate the prediction or planning model from its clients, and to compose or chain such systems. How far into the future we can reliably anticipate depends on the domain, but even unreliable predictions can be valuable - there are many domains where it's better to anticipate and later correct than to hesitate.

Why Anticipation?

I originally developed anticipation as a hidden implementation detail - to support optimistic concurrency for high-performance parallel and distributed systems. So long as I'm correcting the future, and have an estimate for how much might be corrected (a `stability` value), I don't need to wait to emit side-effects. This is similar to time-warp protocols, except isolated (and secured) by abstract connections. Connections give me something to cleanly `disrupt` if anyone falls behind my threshold for retroactive correction.
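
One way to picture this (a simplified sketch with invented names, not the actual implementation): each update carries the time from which it applies plus a stability bound promising that nothing earlier will be corrected again, and a sender that corrects behind the receiver's committed bound gets disrupted.

```haskell
-- Rough sketch (invented names, simplified): an update says "values at
-- or after updTime are now this" and promises a stability bound, i.e.
-- nothing before that bound will be corrected again. Effects up to the
-- committed bound can be emitted immediately; a sender that corrects
-- behind the receiver's committed bound is disrupted.

type T = Double   -- logical time

data Update a = Update
  { updTime   :: T        -- retroactive corrections apply from here on
  , stability :: T        -- promise: nothing before this changes again
  , newValue  :: a
  }

data Link a = Link
  { committed :: T        -- threshold for retroactive correction
  , pending   :: [Update a]
  }

receive :: Link a -> Update a -> Either String (Link a)
receive link u
  | updTime u < committed link =
      Left "disrupt: correction arrived behind the stability threshold"
  | otherwise =
      Right (link { committed = max (committed link) (stability u)
                  , pending   = u : pending link })
```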

A secondary advantage was graceful degradation. After network disruption, quality of predicted signals will rapidly degrade, but there is a short time in which they are still `mostly` good. That isn't much time, but it does offer some breathing room for switching to fallback resources and negotiating a specific logical time for disruption.

I lifted the implementation detail to the user layer in deference to the principles of secure interaction design. RDP is designed for open systems, and in open systems (no central authority) people will use whatever information the protocol provides them. So if I am to use anticipation in the implementation, support for anticipation should also be present in the RDP interface, lest developers be misled about what is possible. (This is similar to the security motivation for `full abstraction`.)

And by tackling the problem as part of RDP semantics, I can ensure anticipation is consistent with RDP's constraints and principles. It took me a while to figure out that simply shifting a signal (S a) to a signal of signals (S (S a)) was NOT a good idea (for reasons of simplicity and predictable performance), and that instead I should use a simple static time-shift of the signal.
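
In a toy model where a signal is just a function of time, the static time-shift reading is tiny (a sketch of the idea only, not RDP's actual interface):

```haskell
-- Sketch of the static time-shift reading of anticipation (toy model,
-- not RDP's actual interface): a signal is a function of time, and
-- anticipation asks it about a fixed offset into its future.

type Time     = Double
type Signal a = Time -> a

-- The value of (anticipate dt s) now is s's (speculated) value dt
-- seconds from now; a real implementation may later correct it.
anticipate :: Time -> Signal a -> Signal a
anticipate dt s = \t -> s (t + dt)
```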

The feature was more powerful than I initially... anticipated, and has become a cornerstone of many RDP design patterns.

I hadn't even imagined how it would support modular composition of prediction and planning systems. But even a simple Markov model is enhanced when we predict multiple future states. And on-the-fly planning systems can use anticipated observations to say, "hey! that isn't what I wanted!" then tweak state to try again. (Granted, that pattern would be terribly divergent if not also used with delay to limit the number of replanning cycles in any given period of time.) I had been looking for a good way to integrate independently developed planners and predictors, and anticipation is a better solution than I had ever hoped to achieve.

Also, there is a broad class of cases that traditionally require state: comparing past values to present values in order to detect change - detect gestures, filter noise, redraw only dirty rectangles, and so on. But these cases work statelessly with anticipation: just compare present value to an anticipated future value. Doing so is more resilient than using the past because the same future can be recomputed after a reset whereas past information would be lost.
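
A sketch of that pattern in the same toy signal-as-function model (hypothetical names, not RDP's API):

```haskell
-- Stateless change detection via anticipation (same toy Signal model,
-- hypothetical names): compare the present value against the
-- anticipated near-future value instead of a stored past value.

type Time     = Double
type Signal a = Time -> a

anticipate :: Time -> Signal a -> Signal a
anticipate dt s = \t -> s (t + dt)

-- "Will this value change within dt?" - no state required, and the
-- same answer can be recomputed after a reset.
changing :: Eq a => Time -> Signal a -> Signal Bool
changing dt s = \t -> s t /= anticipate dt s t

-- e.g. redraw only when the scene is about to differ:
needsRedraw :: Eq scene => Signal scene -> Signal Bool
needsRedraw = changing 0.016      -- roughly one 60 Hz frame ahead
```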

In context of Bret Victor's examples:

Anticipation can readily be used to display `shadows` of a future, but stateful history is required to display `shadows` of a past. Achieving both in RDP (in a consistent manner) needs only a simple idiom: display an anticipated future of a recorded history. This composes well with self-stabilizing prediction systems, too - displaying an anticipated future of a systematically recorded history of the state of a Markov model, for example, makes it easy to see both future and past states of the model.
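
The idiom can be sketched in the same toy model; note that `historyWindow` below cheats by indexing backwards into the signal, whereas a real system needs actual state for the history half:

```haskell
-- Sketch of "display an anticipated future of a recorded history" in
-- the same toy model. Note the cheat: historyWindow fakes the recording
-- by indexing backwards into the signal; a real system needs actual
-- state for the history half, as noted above.

type Time     = Double
type Signal a = Time -> a

anticipate :: Time -> Signal a -> Signal a
anticipate dt s = \t -> s (t + dt)

-- A window of the last n samples, spaced `res` apart.
historyWindow :: Time -> Int -> Signal a -> Signal [a]
historyWindow res n s = \t -> [ s (t - fromIntegral k * res) | k <- [0 .. n - 1] ]

-- Shadows of the past and of the anticipated future, together.
pastAndFuture :: Time -> Time -> Int -> Signal a -> Signal ([a], [a])
pastAndFuture dt res n s =
  \t -> (historyWindow res n s t, historyWindow res n (anticipate dt s) t)
```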

I think RDP's anticipation will do a better job in this role than most other designs, especially in context of an open system where rewind is not feasible.

E.g., impact / taint /

E.g., impact / taint / slicing / etc. analyses are well-known techniques that take away implementation burden, but it's up to the domain specialist to say how that should visually appear in a video game. Instead of trying to do it all in the language, what hooks should be provided to the developer?

This is an interesting perspective on the problem. Rather than try to build "the live programming solution," we could instead focus on making it easier for abstraction providers to build mini-live programming solutions through extensible languages and tooling.

I would kill for a debugger that could visualize time in space (via a motion trail). It can make sense for certain domains, such as implementing a Kinect program (convert your input and output into film clips!), but is much more difficult to think of in general.
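
For what it's worth, the rendering half is easy to sketch once a recorded history of positions exists (hypothetical names):

```haskell
-- Sketch of the motion-trail rendering half (hypothetical names): given
-- a recorded history of positions, draw faded ghosts of past samples
-- alongside the present one.

data Pos = Pos { posX :: Double, posY :: Double } deriving (Show)

-- One ghost per past sample, opacity decaying into the past.
motionTrail :: [Pos] -> [(Pos, Double)]        -- (position, opacity)
motionTrail history =
  [ (p, opacity i) | (i, p) <- zip [(0 :: Int) ..] (reverse history) ]
  where opacity i = max 0.1 (1.0 - 0.1 * fromIntegral i)
```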