Usable Live Programming

A paper I just finished on live programming (and submitted to the normal place for publication), abstract:

Programming today involves code editing mixed with bouts of debugging to get feedback on code execution. For programming to be more fluid, editing and debugging should occur not only at the same time, but also in the same space to quickly make use of the resulting live execution feedback. This paper describes how live feedback can be woven into the editor by making places in code execution, not just in code, navigable so evaluation results can be probed directly within the code editor. A pane beside the editor also traces execution with entries that are similarly navigable, enabling quick problem diagnosis. Both probes and traces are refreshed continuously during editing, and are easily configured based on debugging needs. We demonstrate the usefulness of this live programming experience with a prototype.

I've also cut a couple of videos for the paper, though they might require reading up to Section 3 in the paper to make sense of them:

I promise to do better on the videos in the future :) Also, consider checking out the "It's Alive" paper; I'm surprised no one has commented on it yet, but it's been really slow on LtU lately! For historical reference, this is the main follow-up to my previous live programming paper, though much of the thinking has changed significantly.

Edit2: a brief history of progress in live programming as I see it now (note that this is controversial depending on how you define "live programming" or "progress"):

Liveness was applied to UI applications (VisiCalc) long before it was applied to programming. VisiProg might be the first language to explore liveness in programming (85), though I would argue that Sutherland's SketchPad (63), a more limited "language" to be sure, precedes it by about 20 years and is the first example of liveness applied to UI or programming. But VisiProg definitely takes the cake for being the first to clearly point out the property; VisiProg, at least in its XED implementation, appears to be textual.

A bunch of visual languages afterwards support liveness without really calling it such; Tanimoto (91) coins the term liveness to describe what they are already basically doing. Maloney and Smith re-coin the term liveness a few years later (95), independently of Tanimoto, to describe basically the same property in Self's Morphic. Burnett elaborates on Tanimoto's liveness in a nice VPL paper (98). In the meantime, REPLs and fix-and-continue environments abound in the Lisp and Smalltalk communities, but these are hardly live in the sense used here (though they also focus on early feedback).

Hancock is the first to really put the pieces together in his dissertation (03), coining the term "live programming" for the first time and applying the concept to an almost non-visual language (Flogo II). He focuses on the nature of the feedback, going beyond just seeing the result evolve in real time through a mechanism he refers to as "live text". Edwards (04) coincidentally does something very similar with his example-centric programming environment, using different terminology, and a focus on continuously executing examples.

Recently, Bret Victor revived interest (well, at least mine) in this area, starting with his Inventing on Principle talk ('12). Although he eschews labels, many of his examples focus on getting better, more timely feedback about program modifications, which is quite similar to the goals of live programming. His Learnable Programming essay further elaborates on these topics.


Using LPX to write LPX

How well does this scale? I think the #1 limitation on live programming has been getting it past Fibonacci functions and on towards large applications, interactive web application servers, video games, and the like.

Still, this sort of technique on a single function is probably quite useful when developing simple, isolated units... or for teaching.

It doesn't really scale yet.

It doesn't really scale yet. Read Section 5 for the discussion. The gist is that even with incrementalizing the computations, we still have to deal somehow with the possibility of largish latencies. It's not very surprising actually: code just takes time to execute! Once we consider interactive programs, the problem becomes much worse (Section 6). Solutions include reducing fidelity, making the feedback more iterative, and ensuring that the latency doesn't distract the programmer (by maybe indicating uncertain results).

But... the main reason I didn't show a compiler example was that my prototype wasn't good enough yet; I still don't have libraries, or niceties like real lists. It would/will work perfectly fine for a compiler operating on a small compiled code example. It also didn't make much sense to show off the UX of the system. But I'm getting there with the next version of the prototype; I just need a real type system... and some major syntax changes.

It works fine on multiple methods; there is no limitation even though I didn't show an example. Drats.

Self hosting is an eventual goal, of course!

I wonder what shortcuts can

I wonder what shortcuts can be taken. E.g., for operating on large lists, only a few elements and a few summaries need be generated, because that's all you can feasibly present in a UI. I imagine that one could inject observer code and recompile just to expose the few necessary elements, and there may be some savings compared to keeping more data.
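To make the shortcut concrete, a probe over a large list might record only a head sample plus a few cheap summaries rather than the whole value. This is just an illustrative sketch; `probe_summary` and its shape are hypothetical, not taken from the paper's prototype:

```python
# Hypothetical probe helper: instead of recording a whole large list,
# keep only what a UI could feasibly display -- a few head elements
# plus cheap aggregate summaries.

def probe_summary(xs, head=3):
    """Summarize a large list for display in a probe."""
    return {
        "head": xs[:head],               # the few elements actually shown
        "len": len(xs),                  # cheap summaries the UI can render
        "min": min(xs) if xs else None,
        "max": max(xs) if xs else None,
    }

big = list(range(1_000_000))
print(probe_summary(big))
```

The savings come from never materializing the full list in the probe store; only O(head) elements and O(1) summaries survive each refresh.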

I've only watched the videos so far. I might tackle this paper after I finish your last one. (I read slow, a few pages per business day at lunches.)

Sure. Usually all your code

Sure. Usually all your code is related, so it's hard to just manifest certain parts of the program and not others. On the other hand, probe values are only recorded if they are shown on the screen, but this isn't a big deal.

Definition of "usable"

In talking about "usable live programming", you take a particular definition of "usability" as implicit from the start. I'd have appreciated a clearer explicit definition of your use case, i.e. offline debugging.

You mention live programming for performance and robotics in passing on page 7, but this kind of live interaction is what many people will associate live programming with, as that's where much of the research and practice has been for the past ten years.

Hey Alex, I had to decide

Hey Alex,

I had to decide between "useful" and "usable"; both are loaded terms, but they go with the thesis of the paper. My point was that live programming as defined so far hasn't been as productive as it should have been, mainly because we've been neglecting debugging. Bret's cooking analogy in his Learnable Programming essay brings this issue home. Hancock also comes into play (as we discussed before, his interpretation is real time, but it's not important for this paper; I decided not to argue for steadiness yet).

I'm not denying that this is what live programming has meant, and I'm explicit about it from the introduction on. It's about, or should be about, the feedback! I'm taking and arguing a position, which is completely reasonable in an Onward! submission. And I think taking and pushing a position will make the community stronger, even if not everyone agrees with it.

My passage in section 6 page 7 had more to do with real time vs. recorded, which is not a part of the paper's thesis. Real time does prevent certain things from being debugged, but I don't dismiss it out of hand, it often makes sense. Recorded is something we should explore more; neither really supersedes the other.

Debugging, yes...

I think live programming hasn't been as productive as it should have been because we've been neglecting online debugging. That is to say, debugging a live application, while it's running, without shutting it down or losing data. Including *converting* live data from an older format/type to a newer one.

The future, IMO, is written in 24/7 uptime. "Live programming" is what we're calling the effort to get us there.

Interesting idea,

Interesting idea, but... it's not what I'm talking about. I remember one of my colleagues talking about using concurrent revisions to ensure that the program could be safely changed while it was running.

Offline debugging

Yes, my point was that it was unclear what you are and are not talking about.

The notion of debugging implies an external specification or ground truth which you are trying to match. This assumption simply does not apply in many cases, and indeed this has been the motivation for the development of many of the live programming languages currently in use: systems for creative design, not implementation (and therefore debugging).

Of course it's fine to define "live programming" for your particular use case, but I thought it could be a bit more explicit, and in the context of alternative (and from my view, predominant) uses.

This work is actually very

This work is actually very complementary to exploration. Say there is no ground truth and your programming is more of a bricolage experience: you still need to "see" what you are doing; the artifact you are creating is only an end product of heavily replicated and abstracted strands of computation. We need to reason about and visualize the computations, not just their resulting effects!

Debugging is almost always necessary unless you already know what code to write, which in my opinion is not a very genuine programming experience*. Even the designer using Flash Studio or AfterEffects to build a prototype debugs; in fact, they are very picky and detail-oriented about what they want.

* Well, supposedly Haskell code doesn't need to be debugged, but they could just be saying that because their debugging story isn't that great.

Right,

Don't anybody take this the wrong way now, but, isn't this "live programming" just tools for the sort of hacking that university lecturers around the world had to beat out of my generation? *

I know it makes me sound old and not with it, but I'm really not getting it. Can someone give a very short description of who wants/needs this and why? Thanks!

* That'll be the generation that taught themselves with 8-bit BASIC interpreters.

Depending on who you ask,

Depending on who you ask, the definition is different. But to me, normal programming is like hitting a target with a bow & arrow, while live programming is like hitting a target with a water hose. The idea with live programming is that you get continuous feedback as you edit the program, whereas you would have to run the program periodically under normal programming conditions.

REPLs such as BASIC interpreters provide you with immediate, but not continuous, feedback. Of course you can code live in front of an audience, as a lecturer, but this is really something quite different (or something we are arguing about at least).

Who "needs" it? There are people out there who think even IDEs, graphical debuggers, syntax highlighting, and code completion are useless, so this is a hard question to answer. Each feature improves the productivity of the programming experience, but none of them are strictly necessary. That being said, I think live programming could be the next big boost productivity booster since the original graphical smalltalk-like IDEs became popular in the mid-90s.

VisiProg

I would attribute the idea of live programming to Weiser rather than Hancock (VisiProg predates Flogo by about 20 years). Weiser & colleagues, AFAIK, also coined the phrase "continuous execution", which you use in your paper. Finding a version of the paper which isn't behind a paywall is left as an exercise for the reader.


Continuous execution: the VisiProg environment

VisiProg

I've never seen this paper before, thanks! I think Hancock should get more credit in this area, though; his work was quite forward-thinking. Much of what Bret Victor says today about making programming easier/more learnable, Chris Hancock was saying more than 10 years ago.

Edit: moved a history of live programming to the article.

Live coding

A potted history should also mention live coding environments like SuperCollider and Impromptu. My impression is that the designers and users of these systems were responsible for bringing the term "live programming" into popular usage.

The Wikipedia page cites articles going back to 2003 using terms like "live coding" and "on-the-fly programming", so if you're interested in the provenance of the terminology, I guess this community was also ahead of Hancock.

I'm interested in

The live coding community was ahead of what? Quoting Hancock:

And yet, if making a live programming environment were simply a matter of adding “continuous feedback,” there would surely be many more live programming languages than we now have. As a thought experiment, imagine taking Logo (or Java, or BASIC, or LISP) as is, and attempting to build a live programming environment around it. Some parts of the problem are hard but solvable, e.g. keeping track of which pieces of code have been successfully parsed and are ready to run, and which ones haven’t. Others just don’t make sense: what does it mean for a line of Java or C, say a = a + 1, to start working as soon as I put it in? Perhaps I could test the line right away—many programming environments allow that. But the line doesn’t make much sense in isolation, and in the context of the whole program, it probably isn’t time to run it. In a sense, even if the programming environment is live, the code itself is not. It actually runs for only an infinitesimal fraction of the time that the program runs.

Many live coding systems are based on plain REPLs; the program is iteratively executed where new effects supersede old ones. You want to put a new method in your code? Eval it! They provide feedback for sure, but is that live programming? I mean, only if they define it to be, which is what's happened. I'm sure we have common goals and even rely on common features, but these are two very different fields with a name clash, since they've decided to co-opt so much terminology AFTER THE FACT.

One doesn't get to claim N names for their field and then complain when there is a clash.

Living Loops

Assume you have a loop where the body executes in a short, predictable amount of time... such as:

while (true) {
    try {
        // ... live code goes here ...
    } catch (Exception e) {
        // the show must go on!
    }
}

Under this condition, I think effects such as `a = a + 1` can be reasonably argued to "start working as soon as you put it in". And such a design is certainly compatible with incremental sound or immediate-mode UI.

But organizing code thusly involves a fair amount of discipline, e.g. to eliminate use of blocking calls, avoid divergent inner-loops, use incremental computing for expensive functions, memoize or cache expensive computations, choke those that may occur at lower frequencies, and recover swiftly from erroneous code that might damage state.
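A minimal sketch of that discipline, in Python for brevity: a non-blocking loop body, memoized expensive work, a frequency choke on pricey computations, and a catch-all so a bad edit can't kill the show. All names here (`live_body`, `expensive`, `run`) are illustrative, not from any particular live coding system:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Memoize/cache expensive computations so re-runs are cheap.
    return sum(i * i for i in range(n))

def live_body(tick, state):
    # The hot-swappable "live code": short, predictable, non-blocking.
    if tick % 10 == 0:               # choke: run pricey work at lower frequency
        state["energy"] = expensive(10_000)
    state["tick"] = tick
    return state

def run(frames=100):
    # Stands in for the while(true) show loop above.
    state = {}
    for tick in range(frames):
        try:
            live_body(tick, state)
        except Exception:            # the show must go on!
            pass
    return state

print(run()["tick"])  # → 99
```

Each ingredient maps to one item in the list above; the hard part in practice is that real blocking calls and divergent loops can't be caught this way, which is why the discipline has to be designed in rather than bolted on.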

Live coders seem willing to accept a fair amount of discipline. They are performers. Code is their instrument of choice. Any performer needs to be disciplined within the art. Though, they might appreciate new tools if they don't overly constrain.

But from the PL side of things, the goal is more to reduce the need for discipline, even to make this the default programming experience for everyone.

Check out Section 6, which

Check out Section 6, which talks about time. I'm not tackling the time problem in this paper, though I have a lot of ideas on what the problems and solutions are.

On the PL side, your observations are all spot on. I'm hoping to tackle this problem with procedural code, but that might be a bit optimistic. We can easily get rid of blocking calls in an immediate interpretation where polling works (and can be transparently transformed into a blocking call). We can put a ceiling on the amount of computation a block can do before it fails, forcing incrementalization, although this is harsh. Perhaps I'll swing back to declarative eventually, but the expressiveness problems always bite me.

In section 6 you mention

In section 6 you mention record&replay and live feedback, which are effective for different problems and use-cases.

Interestingly, you can combine them and get a balance of both worlds. Model state formally as a view of history, even better if it's a windowed history, e.g. the last minute, or a decaying history. Though a bit less flexible than ad-hoc imperative state, such a design covers quite a few use-cases in real systems, and (like record and replay) provides rich context for the temporal impact of edits.
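One way to sketch "state as a view of a windowed history": retain only the last N events and recompute the state view on demand, so replaying the window after a code edit reconstructs state under the *new* definitions. This is an illustrative sketch, not a proposal from the thread; `WindowedState` and `fold` are made-up names:

```python
from collections import deque

class WindowedState:
    def __init__(self, window=60):
        # Bounded window: old events decay off the left edge.
        self.events = deque(maxlen=window)

    def record(self, event):
        self.events.append(event)

    def view(self, fold):
        # State is a fold over retained history; swapping in a new
        # `fold` after an edit replays the past under the new code.
        acc = 0
        for e in self.events:
            acc = fold(acc, e)
        return acc

s = WindowedState(window=3)
for e in [1, 2, 3, 4]:
    s.record(e)                           # window keeps only [2, 3, 4]
print(s.view(lambda acc, e: acc + e))     # → 9
```

The trade-off is as described: less flexible than ad-hoc imperative state, but edits get rich temporal context for the cost of storing a bounded window.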

I am curious what other ideas you might have for addressing the time issue. (And this is effectively the same problem as robustly integrating stateful plugins in a live system.)

Ya, there is also no reason

Ya, there is also no reason the recorded event stream can't get longer during a live programming session; let's call that level 5 liveness! The problem remains that changing the past often isn't an option, so real-time live programming is necessary. Recording event streams also has a nontrivial cost... there are pragmatic reasons why we'd want to go with real time, but not being able to debug state transitions is a show stopper for many kinds of programming tasks (modern computer vision...).

Live coding

And many live coding systems are *not* based on plain REPLs.

Live coding and live programming have been used interchangeably within computer music since around 2002. TOPLAP has live programming in its acronym, and was founded in 2004.

Ahead in terms of coining

Ahead in terms of coining the phrase "live programming". You attribute this to Hancock's thesis, but the term was already in use in 2003.

Citation? I googled it up

Citation? I googled it even back in 2007 when I first discovered the term.

live coding == live programming

Give Hancock a little badge for saying "programming" instead of "coding" if you want.

And they are completely two

And they are two utterly different topics. One is focused on programming, the other on performance/social experiences.

Programming is important dammit!

No

It's the same topic. It's just that live programming languages have found direct, real-world use in the a/v field, as you might expect for technology oriented around live feedback.

This has tended towards work in social situations, but not exclusively -- live coding for lone composition of music and graphic animation is also an important and active topic.

Live coding

To call them equal is a little unfair, I think, because the 'live coding' community has successfully usurped the phrase 'live coding' to mean something much more than mere programming. They say things like "it isn't live if you don't have an audience" in the same sense that performers around the world might - i.e. a real-time live performance, not a recording (which is lower risk since you can discard a bad take).

Live programming, or at least the sense to which I'd like to focus it, is about real-time semantic feedback from program (or programming environment) to the developer.

Even if 'coding == programming', the word 'live' is used in two entirely different senses. This is perhaps obfuscated by the overlap of technologies and patterns between them.

This is a quite clear and

This is a quite clear and eloquent description of my position, thanks!

I don't really make this point in the paper, and it's not a very important battle to fight. The live coding community has its own worthy goals* that they are achieving well enough. We are just fighting over name real estate, but in reality both of our communities are niche enough that most anyone else won't get confused or really care.

I am replying more to Resig's (Khan Academy) work as well as my own previous work. We haven't done very well with programmer-oriented feedback, leaving us with systems that look cool but do not accomplish our own "make programming better/easier" goals. Bret Victor really makes this point much better than I could:

The programming environment exhibits the same ruthless abbreviation as this hypothetical cooking show. We see code on the left and a result on the right, but it's the steps in between which matter most. The computer traces a path through the code, looping around loops and calling into functions, updating variables and incrementally building up the output. We see none of this.

All the work that I listed concern this problem, and the answers have trickled out over 30 years.

------------------

* Ninja edit; wikipedia's intro on Live coding:

Live coding (sometimes referred to as 'on-the-fly programming', 'just in time programming', 'live programming') is a programming practice centered upon the use of improvised interactive programming. Live coding is often used to create sound and image based digital media, and is particularly prevalent in computer music, combining algorithmic composition with improvisation. Typically, the process of writing is made visible by projecting the computer screen in the audience space, with ways of visualising the code an area of active research. There are also approaches to human live coding in improvised dance. Live coding techniques are also employed outside of performance, such as in producing sound for film or audio/visual work for interactive art installations.

Contrast to the original definition of live programming given by example in Hancock's thesis:

Among the essential ingredients in Flogo II’s comparative transparency is that it runs “live.” The code displays its own state as it runs. New pieces of code join into the computation as soon as they are added. In the story so far, we can see the value of liveness in quickly resolving questions, such as: is the sensor hooked up correctly? Is it registering the ball all along the track? What range of values is it reporting? Which direction corresponds to positive speed, and which to negative? Why is the first program producing such jittery motion?

The focus on the former is improvised interactive programming with an audience. The latter is purely about feedback to the programmer.

Hancock

I think that's a misrepresentation of Hancock's thesis.

For example:
"Live programming allows a much more open and fluid relationship among the program, the robot and its environment, and the programmer."

Certainly not "purely about feedback to the programmer".

More Hancock

"These differences all relate to the liveness of the programming environment. Tanimoto (1990) distinguishes four levels of liveness, or immediacy of semantic feedback, in programming languages (Fig. 14). According to this scheme, Flogo I is at level 4, because it not only provides immediate feedback to incremental changes in the program (level 3), but also an ongoing portrayal of the program’s actions in response to input streams and user interface elements."

Your work on off-line debugging is edit-triggered, at level 3 liveness. Hancock's work shares my interest in wider live interaction at level 4 liveness.

This is not to say that level 4 is necessarily better than level 3 -- that depends on the domain of work. But over the past ten years, it seems that live programming at level 4 has been the focus.

Bret Victor

Programmer feedback is important; it is the refusal of the live coding community to acknowledge this that leads to the rebuke from Bret Victor on live coding:

Some programming systems attempt to address this with a so-called "live coding" environment, where the output updates immediately as the code changes. An example of live coding:*

... example of live coding ...

As you can see, live coding, on its own, is almost worthless. The programmer still must type at least a full line of code before seeing any effect. This means that she must already understand what line of code she needs to write. The programmer is still doing the creative work entirely in her head -- imagining the next addition to the program and then translating it into code.

*Recently, some people have mistakenly attributed the "live coding" concept to me, but it's not a new idea, it's certainly not "my idea", and it's not a particularly interesting idea in itself. Immediate-update is merely a prerequisite for doing anything interesting -- it enables other features which require a tight feedback loop. An action game with a low frame rate is a bad game, but simply upping the frame rate doesn't magically make a game good.

You could argue that Bret doesn't get live coding, but note that most of Bret's designs in his essay are not instances of live coding (by your own definition), since the code doesn't run in real time. Anyway, the point is that the emphasis in this work (and mine, I believe) is on better programmer feedback, even if the technologies (immediate update) are similar.

I'm not saying I agree with everything that Hancock said, and I definitely do not agree much with Tanimoto, but their emphasis on significantly bettering the programming experience is what I'm interested in. I can't debate live coding approaches much at all, since nothing I've read indicates that their goals are even similar to mine.

Hancock

I think you're referring to Victor's slapdown of Khan Academy there. AFAIK he hasn't had any interaction with anyone at TOPLAP, but could be wrong.

Like I said earlier, we're not just talking about live coding in front of an audience.

Hancock references conceptual metaphor, the extended mind hypothesis, reflective practice and epistemology, and talks about music education, interactive robotics etc. He certainly is interested in experiential interaction.

Manipulation feedback is extremely important to live programming, I'm happy to acknowledge that. I agree with Hancock that there's more to live programming than that, though. Social aspects *are* important to programming, including live programming.

Look, I'm not arguing that

Look, I'm not arguing that your work isn't interesting or useful. I definitely read your papers with interest when they come out; I understand that the "social aspects" are interesting, and I'm glad somebody is looking at this.

Hancock never went there; he didn't need to, as he already had more than enough content for his dissertation just focusing on programming experiences. Focus is important here: if you claim to do everything, you dilute your message to the point that people stop listening. If you give a live coding talk to an audience expecting Bret Victor-like innovations, they will be disappointed, not because live coding is boring but because they were expecting something else! It doesn't help when someone mislabels our work as live coding; it creates false connections and sows confusion. They see the immediate feedback on the screen, and just apply whatever label they know to it without pondering the implications.

In the meantime, there are plenty of people who are just working on the programming experience also. As Bret's talks and Granger's Light Table have shown, there is HUGE INTEREST in this in the general programming community. People are hungry for a better programming experience, and we (the PL community) should do more work here, because this is what we are supposed to be about.

Mislabeling

In my opinion, they're not mislabelling; live coding and live programming are two sides of the same coin. TOPLAP has used both, interchangeably:
http://art.runme.org/1107861145-2780-0/livecoding.pdf

However, if a clear definition of "live programming" does emerge (e.g. from LIVE 2013), I'll happily adopt it.

I don't think you can hope to reclaim the term "live programming" for level 3 liveness though, when it's been used for level 4 liveness up to this point. If you want to be specific, why not use a more specific term?

Not about levels

Sean is claiming "live programming" for "the emphasis on better programmer feedback". And Hancock's students still end with a program that sticks around and keeps the ball rolling. The program is the product.

Does level 4 liveness result in better programmer feedback? It seems that way, at least for the particular problem involving a simple, iterative, teeter-totter motion. Maybe the answer would be different for a different problem, like developing a FORTH Warrior that must eventually navigate complex worlds without supervision. A sample of size one is just a little small.

Regardless, the emphasis is on programmer feedback, the program is the product, and these seem to be the useful distinctions.

Compare: what do you believe to be the emphasis and product/purpose of live coding? The ephemeral code, the social and artistic experiences, the recorded work? An answer, from an interesting paper you posted this morning:

We wear code by running it, constructing environments that we listen and dance to. In dancing the encoded meter, we set the ground: we find an implied pulse and feel it with the whole body, where the pattern is experienced in contrast. By stepping into the music, we become part of the program interpretation. But, when we are finished, we are left with nothing. Live coders knit a live fabric, not an end product; we can touch it, but then it is gone.

-- The Textural X: Knitting with Time

It seems to me that programming - with an intention to construct a 'program' - would result in a very different experience than using the same technology for live coding.

Yes I think you've hit the

Yes, I think you've hit the nail on the head; my primary interest is in live coding as ephemeral. This is a very useful distinction, good for differentiating performance and domain-specific work (including e.g. end-user throwaway spreadsheets for working things out) from building more general-purpose systems.

I'm not convinced that calling one live coding and one live programming is helpful, though, although if that crystallises, I'm happy to go with it (while reserving the right to be interested in both). Ephemeral vs. professional programming, or maybe just live end-user programming vs. live professional programming, are perhaps more explicit terms.

Have a look at Sam Aaron's recent work on hyper-agile programming (sorry Sean, another term) for a bridge between ephemeral and professional programming. Also Andrew Sorensen's in-review paper posted to the TOPLAP livecode list, about composing systems.

(Sorry in a rush, lots of deadlines at play..)

Liveness

Yes this is an important distinction to make. Chris Nash makes it clear, see the diagrams at the end of this paper:

http://www.eecs.umich.edu/nime2012/Proceedings/papers/217_Final_Manuscript.pdf

There are different feedback loops at play. There's the feedback between the programmer and the code, and also the loop from the programmer, through the code and the output, and back to the programmer.

Sean is only interested in the manipulation loop, in stereotypical programming situations. So are music composers etc.

My interests lie in manipulation-driven feedback. This includes *both* the manipulation loop *and* the loop through the music/environment.

I don't think live coders generally do say "it isn't live if you don't have an audience", and if they do I would disagree. However, there is something rather special about manipulation-driven feedback with an audience, as an additional third feedback loop is added.

Non-Technical Distinctions

That's an interesting distinction, into different feedback loops. It's probably even a useful distinction. But I'm not convinced it's a right distinction (in this context).

Feedback through the output is certainly part of the live programmer's experience. That output might be static (like the result of a pure function call), interactive (like a GUI), even animated (like mobs and music in a tower defense game). In every case, we want the feedback. Effects are part of semantics.

I propose instead: the relevant distinction is the product (or goal, purpose).

At the end of the day (or cycle), a live programmer wants a complete program - one that can operate without human intervention, that can be packaged up and sold to a million clients. Direct manipulation of code might never be exposed to the end-user.

The final code from a live coder is often useless on its own. Indeed, you sometimes emphasize this by deleting your code at the very end. If you have a timed recording of code edits and other inputs, you might be able to replay a session (or branch it and try something new). But the real 'product' is the (potentially ephemeral) effects during edit.

We still haven't found strong technical distinctions between 'systems programming language' and 'scripting language'. The same is true here. Any technological advance in the live coding community can be applied to live programming, and vice versa, though the difference in emphasis and purpose will result in varying levels and rates of acceptance.

The pedantic technical distinctions we geeks seek oft are weak. But I believe the economic and social distinctions, while a little fuzzy, are obvious and practical.

I agree this is a relevant

I agree this is a relevant distinction. It's not immediately obvious to an outsider why you'd call one "live coding" and one "live programming", but I can see that this is what we are effectively doing a lot of the time, so definitely useful.

A complementary way of looking at this is the different relationships between the programmer's and the program's timeline. If the product is the end result, then the two timelines are separate. If the output is a straight recording of the program being modified, or if there isn't a recording at all, then there is one shared timeline.

There is a third option - that a program's edits are recorded, and that recorded edits are later edited. This approach makes sense in music composition - improvise on a theme in real time, and refine it later into a composition. Using your distinction, this is a bit like starting with live coding and refining into an end product through live programming. However, the end product isn't the end-point in RCS, but an edited revision history. Julian Rohrhuber called this approach "programming programming" at LOSS Livecode in 2007, showing a system that allowed code edits to be sequenced live.
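The recorded-edits idea can be made concrete with a small sketch. Nothing here comes from Rohrhuber's actual system; every name (`Session`, `Edit`, `record`, `replay`, `branch`) is invented for illustration. The point is simply that the revision history itself becomes the editable artifact: it can be replayed to any moment, or forked and refined later.

```python
# Sketch of "programming programming": code edits are recorded with
# timestamps, and the revision history itself can be replayed or branched.
# All names here are hypothetical, not taken from any real system.
from dataclasses import dataclass, field

@dataclass
class Edit:
    time: float   # seconds into the session
    source: str   # full program text after this edit

@dataclass
class Session:
    edits: list = field(default_factory=list)

    def record(self, time, source):
        self.edits.append(Edit(time, source))

    def replay(self, until):
        """Return the program text as it stood at a given moment."""
        current = ""
        for e in self.edits:
            if e.time > until:
                break
            current = e.source
        return current

    def branch(self, at):
        """Fork the session: keep edits up to 'at', then diverge."""
        return Session([e for e in self.edits if e.time <= at])

live = Session()
live.record(0.0, "freq = 440")
live.record(4.2, "freq = 440 * 2")
live.record(9.7, "freq = 330")

print(live.replay(until=5.0))   # the code as it stood mid-performance
alt = live.branch(at=5.0)       # refine the improvisation later
alt.record(12.0, "freq = 440 * 3")
```

Under this model, the "end product" is exactly what the comment describes: not the final code, but an edited revision history.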

Similarly to the third

Similarly to the third option, one could actually model stateful programs in terms of self-modifying code. I.e. the toggle-box on a UI might correspond to (literally) changing a particular code string from 'false' to 'true' or back. Even external signals such as mouse position can be modeled this way (~ live coding 'under-the-hood' even if not directly exposed to the client...). There is probably a good relationship to object prototypes, too.
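A minimal sketch of this idea, with every name hypothetical: the toggle-box's state is not a runtime variable but a literal in the program text, and "interaction" is a code edit followed by re-evaluation of the whole program.

```python
# Sketch: a stateful program modeled as self-modifying source code.
# The toggle-box's state lives in the program text itself; clicking the
# box edits the text and re-runs it. All names here are invented.

source = "checked = False\nlabel = 'on' if checked else 'off'"

def run(src):
    """Re-evaluate the whole program and return its environment."""
    env = {}
    exec(src, env)
    return env

def toggle(src):
    """The UI event handler: flip the literal in the code itself."""
    if "checked = False" in src:
        return src.replace("checked = False", "checked = True")
    return src.replace("checked = True", "checked = False")

env = run(source)
print(env["label"])        # the box starts unchecked: prints 'off'

source = toggle(source)    # a click 'edits the code'
env = run(source)
print(env["label"])        # re-running the edited code prints 'on'
```

External signals like mouse position could, in the same spirit, be modeled as a stream of such edits, which is the "live coding under-the-hood" reading suggested above.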

History of live programming

Sean, your history of live programming is puzzling, not only because it ignores all the innovation in live programming languages over the past 10 years, but also because it ignores all the *practical* success throughout history.

I really think that any definition of "live programming" which discounts VisiCalc, Max/MSP, PureData, Fluxus, Overtone, SuperCollider, Impromptu, Extempore or Gibber as relevant sources of interest needs to find another term to use, because these systems are in active use in the field as successful, practical systems.

Please do continue your fine work researching edit-triggered liveness in off-line debugging, but even the history of live programming you give is certainly not dedicated to that.

Alex, you should just point

Alex, you should just point out specific examples if I'm wrong. How were SuperCollider and Impromptu specifically innovative in programmer feedback compared to the systems that preceded them, or were they more oriented around their applications? You can't tell me I'm ignoring evidence where there isn't any. There are plenty of papers published around these systems; we know what they did and how they work. You continually refuse to provide specifics.

We have been using the term live programming for a while now, and you can't claim every freaking term that exists; how many do you have now, five or six? It's just not right, so stop. We'll just let the public sort it out.

Calm down

You've been pointed to specific examples: PureData, Fluxus, Overtone, SuperCollider, and Impromptu, among others. There are certainly different senses of "live" at play, but they're not mutually exclusive. I suggest chilling out and not seeing it as a competition to own the phrase "live programming". Check out some of the videos on TOPLAP if you want to see someone edit a program while it's running. It might be less glamorous than Bret Victor, but it has one thing going for it: it's not a mock-up.

To write the live coding community out of the history of live programming is disingenuous and, as has been suggested, to ignore the practical successes of live coding.

PureData, Fluxus, Overtone,

PureData, Fluxus, Overtone, SuperCollider, Impromptu, ChucK, Quartz Composer, etc. are very nice systems, and I've read about them. They do not innovate in programmer experience; instead they are tools applied to problems in their domain. That is definitely good, but it's not what I'm looking at here. Again, I'm looking for enlightenment, not application.

To write the live coding community out of the history of live programming is disingenuous and, as has been suggested, to ignore the practical successes of live coding.

Then please correct me. List any of the insights and innovations gained in these works. If you were writing a related work section for a paper, what would you list about each system that is related to your own work...say, if you were writing a paper on retroactive update?

Specifics

Sean, that's quite an accusation - I don't think I have ever refused to provide specifics, I'm happy to do so when asked.

I don't know why you think I am stealing the term "live programming" from you. I share the same reference points you give in the OP, it's just I think there are some gaps.

I'm actively trying to understand how you are using the term "live programming" and "live programming language" and use it in the same way.

This has been difficult because you hold up Hancock's thesis as defining live programming, but then disagree with just about every definition in there, including what steady feedback and live edits are.

I think using live programming languages in live musical performance is itself pretty innovative, providing a new domain for programming language design, and all these languages have provided innovative approaches to try to better serve that domain.

Here's some examples off the top of my head (leaving out any of my own advisedly):

- Impromptu - Temporal recursion
- Reactable - Euclidean syntax
- Daisychain - interactive petri nets
- Fluxus - live editor inside the 3D world that it is programming
- Republic - code as multi-user chat system
- Gibber - multi-user javascript

In some cases this has been a process of taking existing techniques in computer science and applying them to a new domain such as live music or video. But features of these domains, the ephemerality, temporal restrictions, time pressures etc, have reflected back and required innovations in programming language design.

At this point, you disagree with much of what Tanimoto and Hancock say about live programming, whereas I agree with them. Yet somehow I am the outsider, and have no right to use the term?

Just because you don't value any of this, that doesn't mean it isn't live programming language research.

This has been difficult

This has been difficult because you hold up Hancock's thesis as defining live programming, but then disagree with just about every definition in there, including what steady feedback and live edits are.

No, I don't really. Live programming is about improving the programming experience with live feedback. Hancock gets that. We can disagree on the specifics of his system, but we are disagreeing with respect to the same goals; that disagreement is actually meaningful and useful. You try to dissect Hancock's work to match your narrative, but if you read the dissertation, it really is just about the programming experience. Disagreeing with you is difficult; our goals are just radically different, so we often talk past each other. In this case, we are not disagreeing about realization, we are disagreeing about goals in the first place.

In some cases this has been a process of taking existing techniques in computer science and applying them to a new domain such as live music or video. But features of these domains, the ephemerality, temporal restrictions, time pressures etc, have reflected back and required innovations in programming language design.

I argue that none of this applies to "programming"; rather, these are necessary artifacts of "performance." If I wrote any of this up in my related work section, I would just get some WTFs from my readers; they don't get the context, my work hasn't pushed that context; it's irrelevant.

At this point, you disagree with much of what Tanimoto and Hancock say about live programming, whereas I agree with them. Yet somehow I am the outsider, and have no right to use the term?

Like I said, if our goals are similar, disagreement is meaningful and even expected; that is a debate we can have intelligently! But if our goals are different, then debate is not meaningful.

Just because you don't value any of this, that doesn't mean it isn't live programming language research.

Please don't say I don't value this. I do value it (if we are talking about the performance work), but it's just not related. Much work is done on computers; does that mean all that work is related? If someone from the 1950s saw multiple demos on a computer, he might think so...but in reality they are not.

Your community has, intentionally or not, retroactively claimed any work at all that involves live feedback. "I see things moving on the screen while Bret Victor is coding...it must be live coding!" When in truth there is no relationship, just some common techniques. This has led to a massive misapplication of terminology, and is why live coding is now known under 5 or 6 terms. But the truth is, live coding doesn't have N names; there are just N misclassified concepts with some commonalities.

Hancock

AFAICT Hancock simply does not say what you say that he says.

He talks about level 4 liveness, you are focused on level 3.

Your related work section does not need changing, did anyone say it should? I do think the scope of the work could be clarified though.

I don't agree that

I don't agree that Tanimoto's classification system is very useful these days.

What does level 4 liveness even mean when programming a compiler that runs for 100 milliseconds anyway? The scale only makes sense when we are talking about interactive programs, which are not even covered by the paper. I talk about real-time and recorded styles of liveness in Section 6, but this is mostly for the benefit of guiding future work. There is still much more debate to be had there, and I clarify my scope by saying that the work as presented does not handle interactive programs.

Level 4 liveness

The difference between level 3 and 4 is live interaction. That's as relevant as ever.

That only applies if the

That only applies if the program has any interaction. And even then, level 4 liveness could be a very horrible experience if the program is processing events at one per 50 milliseconds or even faster.

ChucK and Impromptu support level 2 liveness by Tanimoto's definition, since they are basically REPLs (any system that depends on explicit refresh and merely interposes state updates on the program's current state is at level 2). Which live coding systems support level 4 liveness? I can think of Max/MSP, PureData, Quartz Composer (any visual dataflow language, actually), but none of the languages commonly cited specifically for use in live coding sessions. I'm going by Burnett's description of Tanimoto's classification system:

Tanimoto described four levels of liveness. At level 1 no semantic feedback about a program is provided to the user, and at level 2 the user can obtain semantic feedback about a portion of a program, but it is not provided automatically. Interpreters support level 2 liveness. At level 3, incremental semantic feedback is automatically provided whenever the user performs an incremental program edit, and all affected onscreen values are automatically redisplayed. This ensures the consistency of display state and system state if the only trigger for system state changes is user editing. The automatic recalculation feature of spreadsheets supports level 3 liveness. At level 4, the system responds to program edits as in level 3, and to other events as well such as system clock ticks and mouse clicks over time, ensuring that all data on display accurately reflects the current state of the system as computations continue to evolve.

So any REPL or fix-and-continue system is necessarily at level 2. According to this interpretation, which live coding systems have actually achieved level 4 liveness?
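For concreteness, the distinction being argued over can be put in code. This toy sketch describes no real system; all class and method names are invented. A level-2 system gives feedback only on explicit submission (a REPL), a level-3 system re-runs on every edit, and a level-4 system also re-runs on external events such as clock ticks:

```python
# Toy sketch of Tanimoto's levels 2-4 for a 'program' mapping an input
# sample to an output value. All names here are hypothetical.

class Level2:
    """REPL-style: feedback only when the user explicitly submits."""
    def __init__(self, program):
        self.program = program
        self.output = None
    def edit(self, program):
        self.program = program               # no feedback yet
    def submit(self, sample):
        self.output = self.program(sample)   # explicit refresh

class Level3(Level2):
    """Edit-triggered: every edit immediately re-runs the program."""
    def __init__(self, program, sample):
        super().__init__(program)
        self.sample = sample
        self.output = program(sample)
    def edit(self, program):
        self.program = program
        self.output = program(self.sample)   # automatic recomputation

class Level4(Level3):
    """Continually active: external events also update the display."""
    def tick(self, sample):
        self.sample = sample
        self.output = self.program(sample)   # display tracks the world

l4 = Level4(lambda x: x * 2, sample=1)
l4.tick(5)                   # a clock tick / mouse event updates output
l4.edit(lambda x: x + 100)   # an edit recomputes against the latest sample
print(l4.output)             # prints 105
```

On this reading, fix-and-continue systems stop at `Level2`: the state update happens, but nothing recomputes until the user asks.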

For completeness, let me

For completeness, let me also include Tanimoto's original definition here:

At the first level or ``informative'' level is the visual representation of a program which is not used by the computer but used only by the user as an aid to documenting or understanding the program. Such a representation is most commonly a flowchart which is meant to accompany a program written in a language such as FORTRAN.

At the second or ``informative and significant'' level, a visual representation is the actual specification to the computer of what computation is to be performed. If this takes the form of a flowchart, then the flowchart (which then exists in the machine and is probably displayed on a CRT) is executable and contains enough details so that the program is completely specified. After the user has prepared a flowchart using an interactive editor, he may submit it to the system for execution.

A visual representation at the third level of liveness is one for which any edit operation by the user triggers computation by the system. It is thus not necessary for the user to explicitly submit his flowchart for execution. The system attempts to execute or re-execute the appropriate parts of the program whenever the user changes anything. A mouse click or button push by the user might be the physical means of triggering the re-computation. As soon as the recomputation is complete, the system becomes idle once again, except perhaps to track the mouse and assist with the next editing operation.

At the fourth and most live level of liveness is a system which is continually active, or potentially so. The visual representations in such a system are informative and significant as described above, and the system is responsive as in the third level of liveness. In addition, however, the system continually updates the display to show the generally time-varying results of processing streams of data according to the program as currently laid out. Such a level of liveness is analogous to what one would have if one physically changed wires and parts on a video-processing circuit board while it was plugged in and running. Of course, one difference is that on the physical circuit board, one is concerned about blowing out transistors---but in our software-development environment, the components are all instances of software objects and cannot be destroyed by feeding them improper inputs.

It seems like Tanimoto isn't considering state in his definition, which would lead to much confusion in later interpretations.

This needs a proper review.

This needs a proper review. Ixilang, Fluxus (and its children) and feedback.pl are examples, but Tanimoto is talking about visual dataflow languages (Pd, Max, Reactable). Others achieve it through the performance feedback loop via Nash's extension, although from a notational-manipulation perspective they are indeed at level 2 or 3.

request for writing assignment

I find the slight amount of grief you're getting over the word live somewhat interesting, which makes me inclined to help you. (Also, you usually seem like a good guy without a contentious attitude -- in short, you seem to have some empathy, which is good for your karma.) As a highly visual person, I come to word games as a non-native speaker of sorts, and I can be analytical about it because writing is just a process to me, with objectives and practical constraints.

For example, if you look at it as a game, you might need to use alternative terminology if folks want to poison the well for a preferred usage of live as a term. In particular, an exploration of synonyms can help, provided you look at actual phrasing obligations when using certain terms in situ. (A "bad" term can obligate you to say things that mislead in the context you want to pursue.) Even better, if you develop a suite of interchangeable ways to say similar things about a topic, but covering different angles, you can write clarifying definitions without re-using the same words, which become less effective each time repeated. Any word said too many times just induces desensitization.

What do you want to happen? I mean this in more than one way. Skimming the paper, the abstract and conclusion emphasize the value of rapid feedback cycles when programming, with a practical focus of showing a plausible runtime version of code executing in the same environment where editing occurs (if I'm getting this right). I get from this a desire sounding like: I wish programmers had a better (more enjoyable? more addictive?) way to do this. That's still slightly vague, and slightly centrist with respect to both more general and more specific things you might want. Going both more abstract and more concrete to bracket the context may help a bit.

Apart from that, what kind of feedback do you want? Or what kind of reaction do you want, and from which tribe? In the long term, where do you see this sort of research going? In the short term, what are folks doing now you want to occur more or less? Or how would you edit a specific remark someone says here?

I was going to make you an offer a couple days ago, to spur discussion. Now you're getting it already, but it doesn't look very useful to you yet. So maybe you still need this. Tell me one idea in the paper to make sound good, and another to make sound bad. Then I'll write a dialog where those two things happen. I might have to cheat if you ask me to make a lame idea sound good and vice versa, but I can blame it on the characters.

In one of your responses above you mention the role of focus in avoiding too many tangents on side issues that cause folks to stop listening. Yes indeed, that matters a lot, especially when there's too much one can say. Also, if you really want one point to stand out clearly, saying less is better. But you have to pick your points.

Crash: Whom the gods would destroy, they first give an LtU audience.
Wil: I don't think that's how that saying goes.

For example, if you look at

For example, if you look at it as a game, you might need to use alternative terminology if folks want to poison the well for a preferred usage of live as a term. In particular, an exploration of synonyms can help, provided you look at actual phrasing obligations when using certain terms in situ.

The problem is that live coding already has 5 terms; any new term will just be co-opted into the cause: "oh look, something is happening on the screen while the programmer is typing, it must be live coding!" Then the term is grabbed as an alias for live coding. Resig proposes calling the style of programming with feedback "responsive programming", but I have no guarantee that this won't be co-opted in a similar fashion. Bret Victor and Jonathan Edwards tend to eschew labels, perhaps because they don't need them or because labels just seem to be ammunition.

Skimming the paper, the abstract and conclusion emphasize the value of rapid feedback cycles when programming, with a practical focus of showing a plausible runtime version of code executing in the same environment where editing occurs (if I'm getting this right).

Yes, it's a simple message. There is not much room for dogma and ideology in an academic paper, even in an Onward submission. Just be self-contained and make one focused point.

Apart from that, what kind of feedback do you want? Or what kind of reaction do you want, and from which tribe? In the long term, where do you see this sort of research going? In the short term, what are folks doing now you want to occur more or less? Or how would you edit a specific remark someone says here?

I'm interested in feedback from the PL tribe, which is why I posted it here. We are supposed to reinvent programming, right? Every time I look at a PL conference program, I get a bit sad; it's the same thing over and over again. We need to be more aggressive and take more risks, even if it might cause someone to scratch their head.

Tell me one idea in the paper to make sound good, and another to make sound bad.

Good: we can get rid of our debuggers and focus on programming (editing and debugging at the same time). There are no mode switches; feedback is instantaneous.

Bad: the idea might not yet be technically viable; the UI is still too messy; we have no definitive strategy to deal with time.

In one of your responses above you mention the role of focus in avoiding too many tangents on side issues that cause folks to stop listening. Yes indeed, that matters a lot, especially when there's too much one can say. Also, if you really want one point to stand out clearly, saying less is better. But you have to pick your points.

This is quite true. Writing a paper is nice because we can reflect on our message with razor focus; in a real time (or almost real time) discussion, I'm much less effective because there is someone on the other side arguing with you.

I don't think it is bad though. Sometimes it's quite useful as material to consider for the next paper.

undead programming

[While the original version here was fun to write, I decided it wasn't that useful otherwise, so I'll write another. If you want the old post for some reason, I can append it as a block quote to this one.]

I'm having trouble writing a serious dialog. The first aimed more for humor—not especially good—and was thus mostly off topic: not about programming languages nor their design, consequences, limits, ramifications, etc. So it was bad precedent of low value. After a few hundred words, a second dialog I drafted yesterday crashed due to extreme negativity, because focus drifted into causes of bad and unstable software, which is only related to why we want good feedback about what code does, so I scrapped it too. Staying on or off message might make a good meta topic.

Most things we say carry multiple implicit messages (along with explicit parts) conveyed by context, premises, tone, social stance, metaphors, allusions, emergent ontology, sentence and word length, clarity, agreement, consistency, and anything else that can alter how a reader grasps text. As an example of context, my intended meaning of the following exchange benefits from this intro:

Wil: What implicit assumptions underly all your remarks?
Dex: My friends and tribe, including me, are always right, clever, useful, and admirable.
Wil: You left out good looking.
Dex: That simply goes without saying. Just keep adding positive qualities.
Wil: And what about others? Tribes, groups, and folks off your friend list?
Dex: Not very good at antonyms, are you? Wrong, dim, useless, and laughable.

If you write a dialog without surrounding context, it grows its own, and it can veer off in surprising directions. That can be amusing if the goal is art, but self-interfering if the goal is tight, focused message. (If you design characters carefully, their predilections can keep a dialog in territory you wanted ... maybe.)

Anyway, most communication sells and advertises numerous premises mixed into overt propositions. Some of them are like Dex's (he's awesome), but others go all over the map from humanitarian to product shilling. Even absence of interlaced implicit content can say things like honest, lucid, and objective.

Information typically leaked through implicitly is whatever a speaker wants understood or merely dramatized as true, for whatever reason they find gratifying (ego in Dex's case). For any given proposition, it's not hard to come up with a lot of pros and cons. By mostly ignoring either pros or cons, slanting a presentation one way or another can still seem (incorrectly) like it's balanced when sufficient detail is present (signalling completeness), or when a simulation of antagonistic advocacy occurs in real or fictional dialog (signalling social proof).

Personally, the kind of analysis I like to see both builds up and tears down a proposition, pursuing either as aggressively as looks productive. Then I'm likely to agree a balanced treatment was done, as long as both occur with equal inspiration. So an analysis that doesn't aim to trash an idea isn't very useful. Kid gloves are a red flag if the soft touch is clearly the party line, less so when a presenter is ready to change hats on demand. A quality I often admire most in a good presentation is an effort to prove the presented thesis wrong.

When we look at live programming for programmer feedback about code's behavior, I worry about the programmer getting less than a balanced treatment in feedback — equal pros and cons presented — because of bias in the implementor of live programming tools (let's flatter the customer), and because of bias in the end user programmer (stop looking after any good news occurs). An inclination to confirmation bias skews interpretation of evidence towards positive outcomes, as defined by the examiner. (A failed serious dialog I wrote yesterday got vicious and mean-spirited about this, like hoodlums breaking windows just because they can. Not really useful.)

Programming languages can have designed into them a bunch of qualities related to what's practically useful in the context of programmer behavior at coding time and debugging time, as well as useful in other contexts like working with others, integration with apps, code evolution and maintenance, property discovery via tools, or affordances in dynamic presentation of runtime execution behavior like live programming. When you think about live programming, it would be interesting to consider changes you might make to a language's syntax and/or semantics that would make it easy, clear, and effective at revealing both good and bad behavior in code. That's the angle that has clear PL relevance to me. But it's also fun to hear about live programming, even without altering language design, to get a different perspective on what can be known, in case this improves insight in programmer mental models.

While it's not my place to say so, the sub-thread on semantics of "live" related to performance or audiences seems completely off-topic on this site, unless there is some way to comment on how a PL design might somehow be better or worse suited to a role in interaction with an audience. The interesting question to me is usually, "And how might you define parts of a PL to better suit this circumstance considered now?" When we veer off into implementation issue discussions, for example, that still seems to bear upon PL design related to realizing goals at runtime, as well as other times noted in the last paragraph. I draw a blank when it comes to audiences grooving though.

semantics of "live" related

semantics of "live" related to performance or audiences seems completely off-topic on this site, unless there is some way to comment on how a PL design might somehow be better or worse suited to a role in interaction with an audience

There certainly are many (MANY) PL considerations that are relevant for live interaction with an audience. And many of them intersect with considerations for PL-as-HCI.

For example, in front of a live audience a program can't break just because some syntax is bad. It needs to do something reasonable. A PL could be designed to fill certain gaps in a specification with something reasonable, and thus support iterative refinement.

And there must be a certain 'smoothness' of transitions (no audio/visual hiccups) that might not be essential in live programming.

The PL aspects of live coding are on-topic for LtU. And I think development environments are also a good topic for LtU, at least with regards to innovations.

okay, I'll back off

(This is a good way to discuss: I say no, you say yes, then we go over points and it's interesting; maybe I learn.)

I'd like to see relevance since I don't yet. Can you help me by describing one especially concrete example of changing a PL so it better suits audience interaction? You must have particulars in mind, as opposed to just requirements that a UI be able to cover for bad syntax (say by using an old version until a new one is accepted). How would syntax (or semantics) change because you wanted that option? It sounds interesting but imagination fails me.

Maybe you're using an abstract definition of language meaning the interface to something, the same way you can look at the total API of a library as a language for using the library, with its own grammar in model use, etc. Pushed to an extreme, you might say the way a language is presented is part of that language — or maybe a different, related language. I can write a character selling a notion that, as long as you're programming, everything is part of interface and is therefore part of "language" and is on-topic for PL discussion. But this does seem to dilute language to meaninglessness, if it means everything, and we drop L from PL.

As context, dev environments are good to evaluate what is possible and useful to do in a PL. (Some language features might not be reasonable or cheap enough without automated enhancement of human capability.) With JS and distributed code in browsers, we see the ecosystem of domain context for an app and the tools themselves intrude into our grasp of what a PL means. The usefulness of a language depends in part on what you're trying to do. The value of C is often described in terms of low level OS or systems coding for example, and value dries up as you move away from that. Maybe each language design targets affinity for goals in a problem context, like Rust targets memory safety in C-like app contexts. I enjoy piecemeal analysis of that sort of stuff myself. (Doing it for live programming would legitimize it as a technical topic, as well as engage folks.)

My not getting it is partly my fault by being biased toward plumbing. I have the same reaction when folks discussing distributed web apps turn out to be talking about browser presentation in JS, and it startles me that it's only about how things look. Looking at an iceberg, the buoyancy of parts underwater as cause is more interesting than parts above water as effect. But that's just me. Anybody can look at the surface; I want to know what's going on underneath. That's the only part of live [coding] I'd ever care about.

(Edit: as you can see, I'm conflating live programming and live coding. I probably mean live coding above, if that's the one that means performance, and not McDirmid's live programming from the OP.)

Can you help me by

Can you help me by describing one especially concrete example of changing a PL so it better suits audience interaction?

If we are talking about in-situ ephemeral programming here, any shell-based language (e.g. CSH or even Mathematica) would suffice as an example. There is definitely a case for a language designed for rapid/real-time problem solving. It's not my thing, but it's definitely a thing :)

I can write a character selling a notion that, as long as you're programming, everything is part of interface and is therefore part of "language" and is on-topic for PL discussion. But this does seem to dilute language to meaninglessness, if it means everything, and we drop L from PL.

If you approach PL from a theoretical point of view, this is definitely appropriate criticism, but I would argue that many of the papers in venues like POPL are pure theory endeavors that have little to do with programming! If you see language as an interface between programmer and computer, then you might want to look at the other parts of the interface, and make better trade offs in language or editor or some other tool. I truly believe that a "programming language designer" is more of a "programming experience designer," that we increasingly have to care about the whole stack.

I enjoy piecemeal analysis of that sort of stuff myself. (Doing it for live programming would legitimize it as a technical topic, as well as engage folks.)

I think the main point about live programming, focusing on "feedback loops," makes it very much a whole stack thing. This is why Hancock or Tanimoto were exploring it way back in the context of VPLs/HCI and no one in traditional PL got it or would even go near it. This is why the live coding community, which includes a lot of humanities crossover, is looking at this instead of PL researchers. We have to become more flexible as PL researchers I think, or we will cede our field completely to people like Bret Victor who "get it" better than we do.

Anybody can look at the surface, I want to know what's going on underneath. That's the only part of live programming I'd ever care about.

I don't understand. Do you mean too much focus on experience and not enough on implementation? If so, then I'm definitely guilty. There is no point in making a bad experience viable through implementation and theory. We have to start at the experience-level if our energies are going to be directed appropriately.

good lead, I'll work on it

I think I meant to say live coding, not programming; when your efforts are closely tied to what a programmer wrote, it's more apparent it's related to PL than a performance context in which (perhaps) no one ever glances at the code much. Your first point about shell languages is really good, so I'll give myself a homework problem on it, since clearly shell language design is tuned to punish casual error less readily than in C, say. (Though of course you can do frightful things in a shell.) As penance I'll add a comment about at least one effect I see in shell languages, after I think about it. I already have a practical use case in mind.

As a hypothesis, I'll try to make a case for specifying "what" in less detail at a high level, versus "how" in more detail at a low level, and see how much is accounted for by that. In languages with operator overloading, I sometimes add extra API that isn't concerned with how it gets done; for example, I often want to append immutable dataflows, and I want to be able to say append without caring how it gets done. That might or might not be related.
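The append-without-caring-how idea can be sketched in a few lines. This is a hypothetical illustration in Python, not the commenter's actual API; the `Flow` class and its method names are invented:

```python
class Flow:
    """A hypothetical immutable dataflow: append returns a new Flow,
    leaving the original untouched. Callers say *what* (append) without
    seeing *how* the storage works."""
    def __init__(self, items=()):
        self._items = tuple(items)   # hidden representation

    def __add__(self, other):
        # 'append' as an overloaded operator: concatenation as intent,
        # with the mechanism free to change underneath
        return Flow(self._items + tuple(other))

    def items(self):
        return list(self._items)

a = Flow([1, 2])
b = a + [3]          # a new flow; 'a' is unchanged
print(a.items())     # [1, 2]
print(b.items())     # [1, 2, 3]
```

The point of the sketch is only the division of labor: the call site states the high-level "what," and the representation behind `__add__` carries the low-level "how."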

homework: food for thought

I reached an entertaining idea thinking about this, but it might not be closely related. (Sometimes thinking about topic X causes a lateral jump to almost-related topic Y, which is more fun to consider. I thought someone might like the new topic Y in this case.)

A first draft of this comment got big without getting anywhere near done. So I should only report questions I asked myself, without evidence-less rationalization which I'm sure you can supply as easily as me. However, without fluffy rationalization, questions themselves might not make any sense. But that's part of the point: the questions might not end up meaningful.

How is a shell language different from a low level language like C? What makes a shell more suitable for consumption by anyone watching over your shoulder? Which has more locality: a single command line that makes sense by itself, or a line of C code that requires you read hundreds of other lines of code for context? Are command line utils better documented than typical C library API? Is a shell language more like a high culture standard dialect in contrast to a low culture regional dialect only locals understand? Are shell command lines typically mostly declarative? Can you write long scary shell scripts if you really want? Can you document a C library in a really clear standard manner if you really want? Which punishes rapid casual experimentation more: shell or C syntax? Can we liken shell usage to the top of a pyramid while C usage corresponds to the bottom? If we look at menus in a GUI app, does this resemble the cleanly documented and advertised api of a shell language in some ways?

On a slightly related note, I reviewed a vague plan to have a C-to-C rewriter take as input a specification of things one wants to have happen during a code rewrite, including declarations of which things should have support added for tracing, logging, probing, profiling, counting, etc., in ways permitting access at runtime. (For example, if it runs in a daemon, maybe it services http requests on some port to inspect state maintained about such matters.) The interesting part here is the difference between short declarations about what is wanted, versus code detail in two places: 1) compiler behavior, and 2) code inserted in the rewrite to get desired effects. The loose-direction vs. detailed-implementation perspectives seem related to the shell vs. C topic.
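The declaration-vs-mechanism split might look like the following sketch. This is not the C-to-C rewriter described above, just a Python analogue with invented names (`TRACE`, `instrument`, `calls` stand in for the declaration, the rewrite machinery, and the runtime-inspectable state):

```python
import functools

# the short declaration: which things we want instrumented
TRACE = {"parse", "emit"}

calls = {}   # runtime-inspectable counters, standing in for the http endpoint

def instrument(fn):
    """The detailed mechanism: wrap only the declared functions
    with call counting, leaving everything else untouched."""
    if fn.__name__ not in TRACE:
        return fn
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        calls[fn.__name__] = calls.get(fn.__name__, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

@instrument
def parse(s):
    return s.split()

@instrument
def helper(s):    # not declared in TRACE, so left alone
    return s

parse("a b"); parse("c")
print(calls)   # {'parse': 2}
```

The one-line declaration (`TRACE`) plays the role of the specification fed to the rewriter; the decorator plays the role of the inserted code.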

(Bear with me, this is related.) Some months ago I thought a lot about spawning green threads and processes, and it occurred to me sometimes you want to spawn and/or fork them programmatically — here, take this stack with these binary arguments and go nuts — and sometimes you want to launch them declaratively with command lines, so they act just like little embedded C apps. If something goes wrong, or if you merely want to maintain stats about the system, it can be hard to describe what green threads and processes are doing, if they don't have a command line even when you intend to ignore it (because you already know what to do). So it seemed clear you wanted a command line, even if you intended to ignore it, just so there was a summary of what threads and processes were doing, to provide a key for related stats and monitoring utils when they present info.
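The command-line-as-key idea can be pictured as follows. This is a loose Python sketch with invented names (`spawn`, `registry`); the original is about green threads in a C runtime, but the shape is the same:

```python
import threading

registry = {}   # command line -> stats, so monitors can label what's running
lock = threading.Lock()

def spawn(cmdline, fn, *args):
    """Launch a worker under a descriptive command line, even if fn
    ignores it. The cmdline serves as a human-readable key for stats
    and monitoring utils when they present info."""
    with lock:
        registry[cmdline] = {"status": "running"}
    def run():
        fn(*args)
        with lock:
            registry[cmdline]["status"] = "done"
    t = threading.Thread(target=run)
    t.start()
    return t

t = spawn("compress --level 9 input.dat", lambda: None)
t.join()
print(registry)   # {'compress --level 9 input.dat': {'status': 'done'}}
```

Even though the worker never parses its command line, a monitoring tool reading `registry` gets a meaningful summary of what each thread is doing.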

So I was thinking about shell languages, online help documentation, command lines, and green threads and green processes. As an example of bare bones online help for command line interfaces, I visualized how things used to work in MPW, the Macintosh Programmers Workshop, which was a weird kind of Unix style shell clone that ran as a single Mac process. (I used it heavily from about 1990 to 1995.) The builtin help tool just parsed a file in a particular kind of very primitive text format, in order to go fetch text describing what you wanted to know about. Around here, as I thought about MPW tools that ran like embedded C style apps, and I realized they were green threads in MPW, and that all along maybe I've been unconsciously intending to clone an app I used to love twenty years ago. (If you squint, there's similarity in there.)

[The funny part is that many Taligent folks thought command lines were pure evil, because the only way to do anything in the modern world was to use a graphical user interface. If that meant a user had to do everything repetitious entirely by hand, or that you couldn't script complex test suites, then that was just too bad. This had something to do with rolling my own script language to drive my tests and simulations, because there was no support since it was politically wrong. In contrast, the OpenDoc folks were in league with AppleScript folks to drive using scripts, and this was in line with models of expected end user behavior.]

The interesting train of thought I reached, which might be hard to get without detail above, was a visualization of a worksheet-oriented client interface resembling MPW, with green-thread based tools running both locally and remotely in a daemon (actually, both the client and the server are peer daemons, and the client just happens to have a GUI too, that's all), whose text editing model seems like a puzzle fun to reason through. In MPW you could do cool things like treat a text selection as the source or destination of a computation, basically the stdin or stdout of a green-thread tool. I started thinking about undo systems in a UI, and the naming of text streams and slices, and the interaction with a model-view-controller setup with the daemon in another process, and this struck me as more interesting than the original question about shells being audience-friendly. So I'm off the audience angle for now.

Edit: Obviously I can't actually clone MPW per se, so my comment has to do with goals in general UI interaction as it pertains to flows of data under user control, and how this interacts with editing. If you google MPW, the first thing you see might not be most interesting. Wikipedia's report is short and sweet. More interesting is something like http://basalgangster.macgui.com/RetroMacComputing/The_Long_View/Entries/2010/4/3_MPW.html (Javascript required), written in 2010 by an author I can't easily identify. (The site has several tempting retrospective views on a lot of older Mac software.)

Dobbs/Allen on MPW

This post is off-topic in the sense it's only about MPW, which has no direct bearing on live programming. Its relevance comes only from the relationship between languages and dev environments, which might seem interesting here. Part of this might read like an attempt to explain past events in terms of the interests of specific persons, but that's not my intent; it's merely extra context. I'll keep my comments very brief.

I found a January 1st 1988 Dr. Dobb's article by Dan Allen, who was working on HyperCard at the time: http://www.drdobbs.com/architecture-and-design/the-macintosh-programmers-workshop/184408063. Unfortunately, it looks like unclosed html anchor tags cause a lot of underlined text, so you would have to overlook that to enjoy the read. (You can turn off page styles in your browser.) Obviously this didn't appear on the web at the time. If it helps with context, recall that Jobs was driven out by Sculley in 1985, and that HyperCard was babied by Sculley.

Under a heading about the MPW interface is this opening paragraph:

MPW is a mixture of Smalltalk and the Unix system.

That surprised me because no one ever mentioned Smalltalk when using MPW; but it might explain why a feature I liked most was present, noted in the next sentence:

From Smalltalk, it inherits an integrated environment as well as the ability to interpret any text when users select it and press the Enter key.

A description of each MPW tool that sounds like green-threads is this one:

An MPW tool is actually a coroutine that resides within the MPW shell's heap.

Here's an excerpt from the section on background, history, and credits, which I include to note the role of Rick Meyers, who is credited with starting Apple's Smalltalk-80 project in October 1980:

Development on MPW began late in 1984, when Apple engineer Rick Meyers was assigned to bring about a development environment, to suit Apple's internal requirements. [...]

As the design of the MPW shell progressed, a need for two different applications became apparent: a Unix-like command shell and a Mac-like mouse-based editor. Further, it became clear that these two applications needed to be tightly coupled. The solution was a combination shell/editor. Dan Smith wrote the shell and Jeff Parrish wrote the editor. Project leader Rick Meyers worked on the command interpreter.

Rick Meyers is featured in the Green Book ("Smalltalk-80: bits of history, words of advice"), where his research is described in one section. So it's interesting now to see influence on the way MPW worked.

Can you help me by

Can you help me by describing one especially concrete example of changing a PL so it better suits audience interaction?

Sure. Here's a few: Design a module system that ensures incremental compilation (at least get rid of #include). Design syntax with an incremental, probabilistic grammar to support caching, error-tolerance, and recovery. Avoid recursion and while loops in semantics: develop a syntax that supports a flat layout or simple drill-down without hiding values or structure (foreach loops are okay). Support smooth, tangible, visual syntax for continuous values to avoid discrete issues with digit entry (cf Reactable). Consider forcing use of state models based on physics, integrals, differentials to achieve greater smoothness in temporal transitions. Eliminate use of 'new' state, which can easily grow detached from the source code (consider using filesystem-like idioms instead). Perhaps lower the glue-code overhead with some auto-wiring model (maybe based on proximity).

Looking at an iceberg, the buoyancy of parts underwater as cause is more interesting than parts above water as effect.

Hmm. The 'effect' of an iceberg is a lot bigger than what appears on the surface. You'll notice this the moment you encounter one. :D

The same is true for language designs. There are a lot of 'effects' that are invisible because they occur in an "opportunity" space - i.e. the glass ceilings, boiler-plate, performance overheads that frustrate and undermine efforts. As one non-obvious example: the prevalence of 'OK' dialogs is ultimately related to the difficulty of preview/undo/abort for procedure calls, and thus procedural languages have a pervasive effect on UI design.

I find these hidden, non-obvious effects very interesting. Studying them has greatly shaped my language design thoughts and efforts.

Raising the invisible ceiling, lowering invisible barriers, doing invisible work that won't be appreciated by people who only see with their eyes - that's the role of a good PL designer.

watchmen

Thanks for a few specific items. Where is the audience in each case? Or just pick one. In the context of live coding (where this started) an audience consists of folks watching a sort of performance, or passive observers watching an active doer. If I write code and keep working with it, I'm not the audience, and neither are coworkers who are also active doers. Without an element of watching but not doing, the concept of audience doesn't seem applicable, because then it just means "people are present somewhere in the process." But since that's always true, it's another meaningless distinction. I'm sure you get the point; is live coding just a synonym for dynamic or interactive?

The significance of the audience

The significance of the audience is felt indirectly.

Because an audience exists, the program cannot fail or glitch during edits. If no audience exists, then intermediate failure states would have a lower cost to the programmer. Because an audience exists, the programmer must be able to move with great ease from thought to action. If no audience exists, then a few panels of extra navigation latency have a much lower cost to the programmer.

Such costs matter because we are not always so clever as to avoid design tradeoffs. Some of the design choices mentioned above already imply tradeoffs, e.g. between expressiveness of loops vs. navigation latencies. Such tradeoffs might be less acceptable if our purpose is the program, rather than the audience.

The terms in use are Live

The terms in use are Live coding and live programming. They have been used as synonyms for ten years.

Others are used by specific languages, for example chuck uses on the fly programming.

There is no co-opting going on.

The co-opting happened ten

The co-opting happened ten years ago, so it's okay?

No, I'm arguing against

No, I'm arguing against co-opting! We're all talking about liveness in programming, using the same references, etc. We're all trying to work out what live programming means, and arrive at meaningful shared terminology.

Hopefully forums such as LIVE 2013, live.code.festival, the collaboration and learning through live coding Dagstuhl seminar, journal special issues etc will facilitate this discussion and help us arrive at agreed terminology. Perhaps the cognitive dimensions of notation is a good starting point.

This is important because live programming is a fundamentally interdisciplinary topic, and agreed terminology is essential in interdisciplinary research.

I genuinely don't understand the divisiveness here.

More like 'fait accompli'

He means it's too late to close the barn door once the cattle are already out.

Strictly visual in his mind?

For me, live programming is achieved when the program is running and you can change it without abandoning data (state) or forcing it to shut down. I don't get the insistence on there being some "visual" representation of program.

Example: I have a Lisp program that's running, and I can see its output in realtime - say in a log file where a line is written every half-second, that I'm following in a shell window with "tail -f". Now I open a REPL, attach it to the running program, and redefine a function that the program is using. The program continues to run while I do this, and when the new function definition goes into effect I see the output change. Purely textual program, purely textual output, but I consider this to be "live programming." The output feedback is "visual" (in some way that pretends text is not actually perceived via vision) iff the program itself is doing something visual, but not if it isn't.
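The Lisp scenario relies only on late binding, so it can be mimicked in any language that resolves function names at call time. Here is a minimal Python sketch (names like `behavior` and `main_loop` are invented; the inline redefinition stands in for the attached REPL):

```python
import threading, time

def behavior(x):
    return x * 2            # original definition

log = []
stop = threading.Event()

def main_loop():
    # looks up the global name 'behavior' on every call,
    # so a redefinition takes effect without a restart
    while not stop.is_set():
        log.append(behavior(10))
        time.sleep(0.01)

t = threading.Thread(target=main_loop)
t.start()
time.sleep(0.2)

# the "REPL" step: rebind the function while the program keeps running
def behavior(x):
    return x + 1

time.sleep(0.2)
stop.set()
t.join()
print(sorted(set(log)))     # [11, 20]
```

The log shows both behaviors: the program never stopped, yet its output changed when the definition did.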

I've done similar-ish things in C, for that matter. A genetic algorithm I wrote, to find a good AI player for a game a friend of mine came up with, had a "live file" of parameters, real-time log output (about a line per second) of contest resolutions, and finally a graphical display of the board for one of the games it was playing at a time. I was changing parameters in the live file and seeing the effect in the various kinds of output as the program read that file and updated itself every two seconds or so. It turned out the graphic display was almost useless at realtime; it went too fast to even see whether the players were following the rules. It was good for seeing how the board evolved during the game, but you couldn't evaluate the individual moves that way. Instead I eventually changed it to capture images of the individual moves, and traversed them forward and backward looking for stupidities to correct.
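The "live file" trick is easy to reproduce in any language; here is a hedged Python sketch (the file name, format, and `mutation_rate` parameter are invented, not the commenter's actual setup):

```python
import json, os, tempfile

def read_params(path, defaults):
    """Re-read parameters from a live file on each poll; fall back to
    defaults on any error, so a half-written edit never kills the run."""
    try:
        with open(path) as f:
            fresh = json.load(f)
        return {**defaults, **fresh}
    except (OSError, ValueError):
        return defaults

defaults = {"mutation_rate": 0.01}
path = os.path.join(tempfile.mkdtemp(), "live.json")

print(read_params(path, defaults))           # file absent: defaults apply
with open(path, "w") as f:
    json.dump({"mutation_rate": 0.05}, f)    # "edit the live file" mid-run
print(read_params(path, defaults))           # next poll picks up the change
```

In the real program this `read_params` call would sit inside the main loop, invoked every couple of seconds, which is all the liveness the scheme needs.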

Anyway, my point is I think the best "feedback" in a live programming environment is the feedback that the program itself gives you because it runs code which, among other things, explicitly tells it to update your display and how, or explicitly tells it to create documents you can quickly see and evaluate. Programs are too diverse IMO to attach some UI graphics to them and expect that to be meaningful for more than a tiny class.

That said, there are certainly a lot of libraries for various kinds of feedback display that could reasonably be written and become part of a language's standard. "Live programming" is based on feedback provided by calls to output things; it can be supported by a rich and diverse library of code for updating displays.

Let's bikeshed

If we assume for a moment that the term "live programming" has been lost to mean the same as "live coding" (i.e. programming for an audience), can we come up with a great alternative name for "live programming" (i.e. programming with immediate feedback)?

Your demonstrations are great but very fast btw. If I didn't already know how the system works then I don't think I'd be able to fully follow what's going on! Is there a download available? I'd love to play with it.

FDC, FDP, FFP

  1. Feedback Driven Coding
  2. Feedback Driven Programming
  3. Feedback First Programming

Personally I like the third one best because it highlights the unusual turnaround between code and its application.

Naming

The real estate for good names is quite small, since the names have to be simple and catchy. If we follow the XXX programming format, XXX has to be a single word that is commonly used and has meaning to most people. It should also be an adjective, to avoid the auxiliary term (oriented, driven, etc.). And needing an acronym is bad early on; you want people to say your term, not your acronym.

Take "functional-reactive programming," for example: the name only means something to us because we've been exposed to it for a long time, but actually it's not a very good one and needs an acronym (FRP). "Aspect-oriented programming" has a similar problem. We could argue that object-oriented programming is a bad name, but it's been with us for so long we wouldn't see that. Functional and logic programming are good names. By this standard, good names could be:

  • Alive programming
  • Living programming
  • Lively programming
  • Responsive programming
  • Active programming
  • Vivid programming
  • Observable programming
  • Animated programming
  • Lucid programming
  • Interactive programming

Note that "live programming" is perfectly valid for both mine and Alex's work. One sense means "programming with live feedback", the other sense means "programming in the live"; this is quite unavoidable because any simple term will have more than one interpretation. All we can do is (a) explain what we mean each time, and (b) rely on eventual adoption of our meaning (or not).

It's quite too late for me to choose a new term. The people in the PL/SE community who have been exposed to the concept have already adopted something close to the meaning I use. Just check out the papers accepted to the live programming workshop: all entries from PL/SE are close to my meaning; all entries from the live coding community lean on the live coding interpretation. It will be interesting if they notice, and how they resolve the contradiction, but alas, I won't be there.

This is just the name that we are going to have to use. There will be confusion and misunderstandings, people will talk past each other, but life will go on. Hopefully, we'll come to some kind of understanding with the live coding community, it really is in our interest to make sure the messages are not confusing.

Meaning hasn't been lost.

Meaning hasn't been lost. Live coding in front of an audience is a specific kind of live programming. Its existence doesn't stop us talking about live programming in general, or in other specific kinds of programming, such as live off-line debugging.

I'm not going to give up the

I'm not going to give up the name. It's too late now anyway, as the concepts have been intertwined by the community; all I can hope to accomplish is to separate live feedback from programming in the live.

The videos are not designed to be self contained. They are not even sprung on the reader until Section 4, where the reader should have enough context to get them. I need to do a better video eventually that is self contained.

I'm not ready to release anything yet. I know I should, but these systems are so fragile :p

Unfortunately conflated with

Unfortunately conflated with online books about programming, but "Online Programming", derived from the concept of "Online" algorithms, might be a good fit. The whole programming environment is an online algorithm of sorts, where the computed result is itself a program.

In case anyone doubts that

In case anyone doubts that live coding and live programming have been used interchangeably for some years, here's a software art paper from 2004:
http://art.runme.org/1107861145-2780-0/livecoding.pdf

Early days, but in it I describe the various feedback loops at play (I counted four), and how important they all are. I also (rather vaguely) describe feedback.pl, which I came to use in a similar way to the OP - inserting live trace values into the text of the code, to "visualise" its operation.

It also includes a lovely report from Julian Rohrhuber which describes live coding/programming which is not in front of an audience:

"I would say that doing live programming alone, I am the audience and programmer in one, which is not a trivial unity, but a quite heterogeneous one, in which the language, my expectations, my perceptions, errors and my poetic and/or programming style play their
own roles."

One might similarly argue

One might similarly argue that 'lava' and 'magma' have been used interchangeably for years. I don't doubt that it's happened, but I also don't feel that it's right.

I don't think this analogy

I don't think this analogy works. We're not discussing the different meanings of "coding" and "programming", but the many different interrelated connotations of "live".

Early days, but in it I

Early days, but in it I describe the various feedback loops at play (I counted four), and how important they all are. I also (rather vaguely) describe feedback.pl, which I came to use in a similar way to the OP - inserting live trace values into the text of the code, to "visualise" its operation.

The nice, and perhaps the evil thing, about live programming is that it has more than one meaning: programming in the live and programming with live feedback. It is possible to have both at the same time, or each in separation. E.g.

  • I program live in front of an audience, but the feedback the audience gets is not that live with respect to myself; perhaps I'm just using a REPL (that is, liveness level 2!). Heck, I don't even need an interpreter if it is just code demonstrations (but for music, you need at least a REPL, I would guess).
  • I get live feedback about my program alone in my office. It helps me program better, but I'm not programming in the live, there is no audience, my program is quite normal.
  • I'm programming live in the ephemeral sense and I'm also getting live feedback. Say I'm building a composition in Quartz Composer in front of a bunch of people (as a data-flow VPL, Quartz Composer is level 4, I think).

I would argue that programming with live feedback IS NOT a special case of programming in the live (the programmer is the audience of his own code). The artifacts and purposes are different, even the technology is different: programming in the live could and does often happen with basic (liveness level 2) REPLs that we (in the live feedback community) do not consider very live (it's a spectrum).

But it's also not surprising why you would go into "live feedback;" it is a nice complement to the "in live" work, but there are many of us into "live feedback" and not into "in live," and they are naturally separable concepts. It is also quite easy for us to get confused if we don't realize the concepts are separable.

Indeed, live programming is

Indeed, live programming is a rich topic. We can indeed study different aspects in isolation for particular domains (including off-line debuggers), but that doesn't mean that these aspects aren't fundamentally interrelated.

That is why you need to be clearer about what you're talking about.

If anyone is trying to co-opt "live programming," it's you -- trying to limit its scope after the fact. In my opinion, there is no historical basis for doing so. You say that Hancock's thesis supports you, but it simply does not. You just misunderstood what he meant by "steady frame": it doesn't mean steady as in frozen time, but steady as in continuously meaningful.

Liveness implicates live interaction.

I guess for the next 10

I guess for the next 10 years one will have to explain why REPLs or live coding sessions are not "live programming" or live programming on level 2 according to a classification of some academic guy on which no one seems to agree.

Anyway, the first "live programming" product which goes mainstream will make a clean sweep and then "live programming" will be associated with whatever term is used.

Fair point.

Fair point.

I'm not going around

I'm not going around co-opting other people's work into your niche area and then complaining when they don't get it.

Liveness implicates live interaction.

No, no, no, liveness means progress is made in a concurrent system, or it means your feedback is live, or it means live interaction. I'm sure it means more than this! Take your pick but don't conflate meanings because terms have multiple of them.

I'm done arguing about this, I have too much code to write.

That's not co-opting, that's

That's not co-opting, that's pointing out prior art in code timeline scrubbing, tangible values, auto-completion, and the manipulation of history. I admit that in that post I was asserting a richer usage of the phrase "live coding" than was underlying Victor's objection to Khan academy.

I used the word "implicates" (and not "means") advisedly. Liveness implicates live interaction in the field of live programming.

Lets sharpen our tools, and sure, define these terms in more useful ways. Ordering me to stop using the term "live programming" is deeply patronising, though.

That's not co-opting, that's

That's not co-opting, that's pointing out prior art in code timeline scrubbing, tangible values, auto-completion, and the manipulation of history.

You still don't get it! Bret's essay wasn't on techniques, it was on solving problems (related to learnable programming). You guys used the same techniques to solve different problems, great! And you guys didn't invent them either, so there is no need to gripe about it.

Liveness implicates live interaction in the field of live programming.

Self referential definition! Difficult to argue with that one. But no, you've conflated two different concepts, live feedback and live interaction, when in reality neither implies the other! They can co-exist, but they can just as easily not.

Lets sharpen our tools, and sure, define these terms in more useful ways. Ordering me to stop using the term "live programming" is deeply patronising, though.

I would just say the same thing: you are ordering me not to use the term. Maybe this underlies our misunderstanding: each of us thinks the other is being territorial, and maybe we both are, but whatever.

You are welcome to apply 6 or 7 different labels to your field and grab more as you want; it really is your brand dilution problem, not mine. At the end of the day, all these terms are just defined arbitrarily... they are brands with no intrinsic meaning (nominal vs. structural typing).

Stop telling me what my

Stop telling me what my research is, it's irritating.

Regarding learnable programming, I'm co-organising an international seminar on "collaboration and learning through live coding". I'm also actively collaborating on this topic with colleagues in education research. Last year I gave a conference presentation called "live coding in computing education" at iFIMPAC. So, I certainly am interested in solving problems in the area of learnable programming, although I think we need to look outside of the context of Piaget and the lone programmer to do that.

Regarding live programming interfaces in the manipulation loop, in 2004 I wrote an article "hacking perl in nightclubs" introducing a live programming interface and talked about the usefulness of live feedback in sourcecode interfaces.

I am absolutely not ordering you not to use the term "live programming", that would be stupid. I don't own the term, and I think you are doing fine live programming research.

I absolutely do not conflate the different concepts of live manipulation feedback and live interaction. I already linked to a 2004 article discussing these different live feedback loops. I illustrated and discussed the related programmer->code feedback and programmer->code->environment->programmer feedback loops in my PhD thesis, although in honesty I think Nash has done it better.

Reducing the discussion to branding and marketing is trivialising a rich, interdisciplinary area of research.

Stop telling me what my

Stop telling me what my research is, it's irritating.

OK. But you are quite clear about what your positions are. Why can't I bring them up?

So, I certainly am interested in solving problems in the area of learnable programming, although I think we need to look outside of the context of Piaget and the lone programmer to do that.

If you guys had tackled the same problems in the past with some public output, then gripes against his essay could have been justified, but you don't get the high ground for being interested.

Regarding live programming interfaces in the manipulation loop, in 2004 I wrote an article, "hacking perl in nightclubs", introducing a live programming interface and talking about the usefulness of live feedback in source code interfaces.

Yes, I believe this is an example of live feedback without live interaction (the program is not interactive). I'm not sure if you are disagreeing with me or not.

I am absolutely not ordering you not to use the term "live programming", that would be stupid. I don't own the term, and I think you are doing fine live programming research.

Cool, I can live with this. I'd rather argue about specifics, like when live interaction is useful and when it is not, than framing issues.

Reducing the discussion to branding and marketing is trivialising a rich, interdisciplinary area of research.

If we have different ideas about what a potato means, then we cannot have meaningful conversations. Likewise, if you have many different names besides potato, then you've diluted the ability of someone to understand what you mean. Choose too many, and eventually, they just assume any word you say means your potato. Again, the ability to have meaningful conversations is degraded.

Oh I'm happy to enjoy

Oh I'm happy to enjoy discussions where my positions are challenged, but I am fed up with the assertion that my research is not concerned with live programming. I hope I have now demonstrated that it is, though.

FWIW, I think any gripes in the TOPLAP post and comments were nicely rounded off by Dave at the end.

Feedback.pl has live i/o so it is technically level 4. For me, its particularly interesting aspects are its level 3-style programmer->code interaction though, so we are on the same page there.

I agree meaningful cross-/inter-disciplinary conversations do require shared terminology, and am keen to work towards that.

Oh I'm happy to enjoy

Oh I'm happy to enjoy discussions where my positions are challenged, but I am fed up with the assertion that my research is not concerned with live programming. I hope I have now demonstrated that it is, though.

Maybe you can clarify in future publications, but don't gripe about someone not getting your position retroactively. In a formal setting, you can either point to related work or not. Obviously, interactive informal debates are quite different.

Feedback.pl has live i/o so it is technically level 4. For me, its particularly interesting aspects are its level 3-style programmer->code interaction though, so we are on the same page there.

We quite disagree on how REPLs fit into the liveness spectrum. To me, the important questions are: what if you add new state to a program in feedback.pl? What if you modify how state is manipulated, can you retroactively update the state to be consistent? Many of these questions don't make much sense in an interaction context but do make sense in a feedback context. On the other hand, this is a much better argument to have than the one we are having now.
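The distinction above can be made concrete with a small, hypothetical sketch (all names here are invented for illustration): in a REPL, state built with an old definition survives your edits, whereas under whole-program re-execution every edit rebuilds all state, so it stays consistent with the current code.

```python
# Hypothetical sketch of the REPL-versus-re-execution difference.
# In a REPL session, values built with an old definition linger;
# re-executing the whole program rebuilds state from scratch.

def repl_session(make_item):
    cache = []
    cache.append(make_item(1))       # built with the old definition
    # ...programmer edits make_item here; old entries are now stale...
    return cache

def reexecute(make_item):
    # Whole re-execution: every piece of state is rebuilt each run.
    return [make_item(n) for n in (1, 2)]

old = lambda n: ("v1", n)            # definition before the edit
new = lambda n: ("v2", n)            # definition after the edit

# REPL: mixing definitions leaves inconsistent state behind.
session = repl_session(old)
session.append(new(2))
print(session)          # stale ("v1", 1) entry remains

# Re-execution: edit the code, re-run, and state is uniformly "v2".
print(reexecute(new))
```

This is only a toy model of the "retroactively update the state" question, but it shows why the question is natural in a feedback (re-execution) context and awkward in a pure interaction context.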

Interaction between the programmer and what?

I'm finding most of the "clarifications" in this thread (on both sides) to be more obfuscating than revealing.

It seems to me that the main two camps here are focusing on different ideas of what the programmer is interacting with, and both are usually using the word "interactive" in an unmodified way that doesn't elucidate the distinction.

If you use "live programming" to mean programming as a spectator sport, you mean that the programmer's "live interaction" is with a live audience.

Conversely, if you mean "tight I/O feedback loops between the programmer and a running program, supported by a programming system that facilitates it" then you mean that the programmer's "live interaction" is realtime interaction with the programming system.

I personally don't care much about programming as a spectator sport. It's an interesting idea, but I consider it to be an extraordinarily small niche unlikely to produce any very interesting code other than the programming systems suited to support it -- which themselves, will be too interesting (and too intricate) to be produced by programming for the entertainment of a live audience. Instead, I consider live audiences likely to be entertained only to the extent that the "programmer" produces graphics, audio, and multimedia, to the exclusion of code that does anything I consider interesting. If that's the working definition of "live programming" I'll be one of the people who just tunes out future discussions as soon as I hear the buzzword.

Contrarily, the idea of "live programming" meaning "tight realtime I/O feedback loops between the programmer and the program assisting the programmer in getting something interesting done" is VERY interesting to me, in terms of UI design and productivity enhancement of programming systems. If that's the working definition of "live programming" then I'll be one of the people who perks up their ears at the buzzword and follows the conversations about it (and examples of it working or of attempts at it not working) avidly.

My problem is that, considering one of these topics highly relevant and the other highly irrelevant, I'm stuck when both use the exact same buzzword: the mere presence of the buzzword gives me no clue as to whether I'd rather tune out or perk up and pay attention.

It so happens that programming for a live audience, which I consider irrelevant, is facilitated by tight realtime interaction with running programs, which I consider very interesting. But that's a secondary consideration at best, and not necessarily related to interesting system UI design because, like most programming techniques, it can be done in many different ways and on systems which one wouldn't expect to support it very well.

Live interaction

Thanks Ray, this is insightful.

Firstly, I'd like to point out that "tight realtime I/O feedback loops between the programmer and the program assisting the programmer in getting something interesting done" is very interesting to anyone performing in front of an audience.

But actually, I think the "live audience" connotation is unfortunate, as it seems to be what many people associate with the word "live", but it obscures a far deeper meaning of what live programming is or could be. Live coding is much more than standing up and programming in front of an audience (FWIW, I usually perform *within* an audience, using a wireless keyboard). I try to describe things in terms of live electricity, which might help a little, but not much. This is more of a problem for those working in live performance, but seems to be causing annoyance elsewhere too.

The audience perception of code in live coding performance is little understood, but despite what the TOPLAP manifesto says, I think it is unimportant. I'd like people to enjoy (and dance to) the music, and ignore the code, to the extent that they'd ignore the exact finger positions of a guitarist (although would complain if they couldn't see the guitar).

Anyway, I think we're settling on discussing live programming languages and their UIs, as "live programming", and the use of live programming in performance as "live coding". So hopefully when you see the phrase "live programming" it is likely to be of interest.

When you see the phrase "live coding" it is more likely to be wrapped up in issues particular to live performance: programming under pressure, media aesthetics, politics of expression, code and the body, creativity, multi-user programming, perception of code, etc. If these phrases turn you off, you are probably safest to look away.

There is a spectrum of possibility beyond professional software engineering and end-user music performance UIs though - and I think that is where the really rich live programming domains could be. So watch out for folks transforming the boundaries.

Anyway, I think we're

Anyway, I think we're settling on discussing live programming languages and their UIs, as "live programming", and the use of live programming in performance as "live coding". So hopefully when you see the phrase "live programming" it is likely to be of interest.

I would be happy with this, but what has been written so far doesn't leave any room for that interpretation; the wiki article:

Live coding[1] (sometimes referred to as 'on-the-fly programming',[2] 'just in time programming', 'live programming') is a programming practice centred upon the use of improvised interactive programming. Live coding is often used to create sound and image based digital media, and is particularly prevalent in computer music, combining algorithmic composition with improvisation.[3] Typically, the process of writing is made visible by projecting the computer screen in the audience space, with ways of visualising the code an area of active research.[4] There are also approaches to human live coding in improvised dance.[5] Live coding techniques are also employed outside of performance, such as in producing sound for film[6] or audio/visual work for interactive art installations.[7]

Considering that wikipedia is the first place people go for information, this might shed some light on why I'm a bit defensive (I believe you are the main article editor; please correct me if I'm wrong). I'm not going to edit wikipedia myself since that would constitute original research, but I would prefer if live programming disappeared from wikipedia since I don't think its use is clear or notable enough to be in there. In contrast, live coding has a very clear definition and a rich history.

The problem is that live

The problem is that live coding has been referred to as live programming, but to show good faith, I've removed it for now until it can be clarified (I won't have time to think about this for a week or so).

(I actually think that there

(I actually think that there might well already be enough references to establish the notability of a page dedicated to live programming languages, and the LIVE2013 proceedings would make this easier still).

*delete*

misplaced

Someone wrote this

Someone wrote this up:

http://www.techworld.com.au/article/459054/taking_pain_debugging_live_programming/

Slashdot picked up this article:

http://developers.slashdot.org/story/13/04/15/1156241/taking-the-pain-out-of-debugging-with-live-programming

tl;dr: it's just a REPL or Smalltalk reinvented! As if anyone reads anymore.

No surprises from slashdot,

No surprises from slashdot, but I don't really blame them too much for not "getting it" from this article, as it doesn't sell it in the first few paragraphs. Probing with @ is an ultra-convenient way of instrumenting code, tracing is a standard technique, and hotswapping is standard in debuggers.

How could you sell what was really new about this in a paragraph?

Tracing is just

Tracing is just programmer-oriented UI; here the traces are print statements. I will work on getting some graphical tracing into the next prototype so people will get it more. You're correct that this isn't very interesting in itself, which is why I frontload my discussion on probing.

But the key innovation of this work is UI navigability to code with execution context:

This paper describes how live feedback can be woven into the editor by making places in code execution, not just in code, navigable so evaluation results can be probed directly within the code editor. A pane aside the editor also traces execution with entries that are similarly navigable, enabling quick problem diagnosis.

The problem with this abstract is that it doesn't really use any metaphors to get the point across. Imagine my "program execution" was expanded out into a 2D map; since we are dealing with batch programs for now, we can ignore time (computation also takes time, but that's not relevant here).

So every program execution could be viewed as a giant list of instructions that executed or, better yet, a giant tree of method calls, loop iterations, and so on. Now, you can "find" your problems simply by looking at this map!

But there is a big problem with this map. First, it's just too detailed! My problems are there somewhere, but where? The truth is, if I see everything, I'm just overwhelmed by the massive amount of electrons thrown at me. So I need a way to zoom in on places in the map and reveal or hide information as needed. First, I can associate some execution context, for a loop iteration or method call, with the code for the method or loop body. This is meaningless by itself, but probing exploits that association by then revealing evaluation results! So now I can navigate through parts of the map and see bits and pieces of it.

But that's still not good enough. I can navigate through the map very slowly by jumping from method calls to method bodies associated with execution context, but for a big execution, this is going to be a pain in the ass! So tracing comes in: it allows me to "summarize" the execution in whatever way I want through (for now) print statements! But that's not all: each line of output in the trace view is navigable to code + execution context! So I can click on the line, go to the code that generated the line along with the execution context around that, and probe/navigate from there. No need to start at the top!
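The map metaphor above can be sketched in a few lines of code. This is a hypothetical toy, not the paper's implementation: execution is recorded as a tree of calls (the "map"), each node keeps its parent link (the navigation path back up) and a dictionary of probed evaluation results.

```python
# Hypothetical sketch: an execution "map" as a call tree whose nodes
# carry evaluation results, so a UI could navigate from any point in
# the trace back to the exact execution context that produced it.
import functools

class CallNode:
    def __init__(self, label, parent=None):
        self.label = label          # e.g. "fib(3,)"
        self.parent = parent        # navigable path back up the map
        self.children = []
        self.probes = {}            # expression -> value at this context
        if parent:
            parent.children.append(self)

root = CallNode("<program>")
_current = root                     # the execution context being built

def traced(fn):
    """Record each call of fn as a node in the execution map."""
    @functools.wraps(fn)
    def wrapper(*args):
        global _current
        node = CallNode(f"{fn.__name__}{args}", _current)
        _current = node
        try:
            result = fn(*args)
            node.probes["return"] = result   # a probe on the result
            return result
        finally:
            _current = node.parent
    return wrapper

@traced
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(4)

def dump(node, depth=0):
    """Walk the map: every node is both summary and entry point."""
    print("  " * depth + node.label, node.probes)
    for c in node.children:
        dump(c, depth + 1)

dump(root)
```

Clicking a trace line in the real system would correspond to jumping straight to one `CallNode` and probing from there, rather than walking the whole tree from the root.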

Thanks, the navigation /

Thanks, the navigation / mapping metaphor does make this easier.

(warning, armchair psychology coming up)

I guess the problem with explaining this stuff is that programmers have fixed ideas how things must work, so you have to break that down before describing the actual thing you're doing. But then they don't see the thing, but just the breakdown of fixed ideas underneath them, get vertigo and feel anger that they have to direct somewhere. So having seen the breakdown of 40 years worth of fixed ideas, they channel their anger at being exposed to a 40 year-old idea. But that's only because they're not seeing the leap forward beyond that.

It's worse describing it to non-programmers, where you first have to explain the current non-live way of doing things, before describing how you're moving towards something that is more live.

So yes maybe it is best to ignore all that and just focus on navigation, mapping etc which takes liveness as a given.

Its not the message I think,

It's not the message, I think, but the delivery mechanism.

The problem is that most people don't want to read conference-style papers; they've been burned by too much crap before and just don't think it would be worth their time. Rather, they'll watch the video a bit, maybe if they are 10 years younger than us they will even watch a talk on YouTube (WTF, that takes time!).

So what we need to do is get creative and maybe start learning from Bret Victor. We have to give focused talks that can be posted online and have a chance of going viral. Or we can do web-based essays with embedded videos. Actually, my 2007 paper embeds videos! Just no one could see them because it required Acroread + Quicktime installed... and maybe conference-style papers are just not the best format for video embedding.

So this paper is really for you guys, not for the slashdot/ycombinator/reddit crowd. I will try to do a web-based embedded-video essay later, which would be designed to have broader appeal.

Yes true, although I think

Yes true, although I think you'll agree that underlying Victor's seemingly effortless presentation is a lot of work, using communication skills and design artistry which don't come easily... at least not to me.

Also difficult

Also difficult for me! But we have to start trying. It's too bad there are no conferences that target mass communication of PL ideas :)

Interactivity in papers are

Interactivity in papers is great. This goes for papers about UI related things such as this one, but even for papers on type systems it would be great to have an embedded interactive type checker.

One of the problems with papers for general consumption is that they contain so much boilerplate that is required by academia but just not interesting to other people. A lengthy introduction, related work, proofs, and elaborate sections describing the exact semantics are much less interesting than the meat of the work.

Accepted into onward.

Review excerpts with my own comments:

First set the stage! Define in the introduction that Live Programming is based on program re-execution! This is well done in the PLDI paper. From that perspective Self's reality kit and others are not following this definition so you are just confusing the reader for nothing. I think that "live programming" is really blurring the pictures and this is not good ethically.

I don't really get this comment. I've somehow re-defined live programming, but this was not my intention: re-execution should just be a detail in the experience I want to provide. I think the reviewer meant "Alternate Reality Kit", not Self, and perhaps is confused with Morphic (it's all Randall Smith anyways).

Traces can be really boring/totally useless to navigate. Just saying that we can explore the graph...well is a bit not acknowledging that they are a real pain to deal with. Your idea of link them back to context and adding navigation facilities is nice, still.

This is true; I should probably play up some more interesting graph-like traces when I get that far in my implementation. The idea being that the trace should be a programmer-defined map of the program's execution used for debugging.

More important, you should really discuss more the limits of the approach. Because you need to (a) be able to re-execute the program, this is often not possible and (b) have multiple versions of "objects (even in procedural code with struct and pointers you get the same problems" at different context levels (and not just dealing with stupid literals).

I should describe the implementation better in the revised version. Multiple object versions aren't necessary, at least in the batch case. It's simply straight deterministic replay.

"However, feedback provided by fix-and-continue is neither responsive nor continuous:" I found this a weak argument and also non scientific one. Responsive? What? continuous this is not the goal so why would it be a problem. Fix and continue do not match the definition of life programming as program re-execution. Period. No need to.

I wish it was this easy, but there are a lot of people out there who don't get this point.

"but code edits could never change the past." with the same argument we could say that with fix and continue we can debug any program not just the ones that can be restarted and whose objects can be stored at any computational point.

Quite true. With the real-time version of timeful live programming, the same is possible. But in this case we just factor out some "view" code that can undergo live programming and some "model-event" code that cannot.

A reviewer who didn't get it:

Live execution is an intriguing idea that I find interesting and useful. The paper discusses the idea in many generalities. I would have found more concrete presentations of the concepts more compelling. The video links helped (but were difficult to type in and follow). Perhaps a narrative associated with the videos would help.

So overall this works seems like it is worthwhile and interesting. I'm having difficulty identifying a specific contribution that I can really latch on to.

For some reason, YouTube doesn't show the annotations at all times. Annoying. I need to get myself access to FinalCut, I guess. Also, I guess I'm implicitly targeting this paper at a subset of the entire PL audience, so I have made the paper a bit inaccessible.

Finally:

The two basic techniques the paper proposes are probing and tracing. Here is my understanding of how this works. Probes are sort-of "inline print statements", in the sense that they show the current value assigned to variables (etc.). Probes only work well with a step-through execution model. Tracing is little more than traditional printf logging statements (I do not mean this in a negative fashion: I prefer printf debugging to any other extant technique). The problem with probes is that they are impractical: one doesn't want to click through 100 correct steps to find the point where the probes show failure. An interesting connection is then made: probes are automagically places where tracing statements are written. When a problem happens and a user clicks on part of a trace, execution is (somehow?) put into that position, and then the user can then step through execution looking at the probes for a detailed view of what's happening. In a sense, tracing is the "crude" mechanism for locating roughly where a problem is; probes are used to narrow down the problem. How exactly the "time-travel" element is implemented remains vague to me: Section 6 seems to say it's a hard problem without an obvious solution.

A couple of problems: (a) there is no time travel here; instead, time is projected into the space of the code. I'll have to emphasize this more. And (b) the implementation is quite easy if the program is small enough to re-execute whole quickly OR you don't care about latency. Otherwise, we have an interesting challenge on our hands; memoization is only one piece of this puzzle.
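A minimal sketch of point (b), with all names invented for illustration: "clicking a trace line" need not restore any snapshot; the whole (deterministic, batch) program is simply re-executed from the start, and the probe value is captured when execution passes the selected point.

```python
# Hypothetical sketch of projecting time into the space of the code:
# selecting a trace entry re-runs the whole program deterministically
# and captures the evaluation result at the selected execution point.

def run(program, stop_at=None):
    """Re-execute `program` (a list of (label, thunk) steps) whole.
    If stop_at indexes a step, capture that step's result as a probe."""
    probe = None
    for i, (label, thunk) in enumerate(program):
        value = thunk()              # every step runs on every replay
        if stop_at == i:
            probe = (label, value)   # the "probed" evaluation result
    return probe

# A toy deterministic batch program: same inputs, same trace, every run.
state = {"total": 0}

def step(n):
    def thunk():
        state["total"] += n
        return state["total"]
    return thunk

def fresh_program():
    state["total"] = 0               # replay always starts from scratch
    return [(f"add {n}", step(n)) for n in (1, 2, 3)]

# "Clicking" trace line 1 ("add 2") just replays and probes there:
print(run(fresh_program(), stop_at=1))
```

For a small program this is cheap; for a large one this naive full replay is exactly where the latency challenge (and the role of memoization) comes in.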

Overall the paper is fairly easy to read. It's a little woolier and less precise overall than I might have liked, which makes the reader work a little harder than they should have. It's not that the paper is difficult to read, simply that sometimes after reading a paragraph I had to reread it to work out what the hard facts were. A light edit could easily distinguish the hard facts from the linking prose. There are a few typos littered around (particularly in the figures), which again are easily fixed.

I have probably erred too far into brevity :)

web essay

I put together a companion web essay with embedded videos for the paper.

The essay on

Ideas

Has anybody tried a variation on breakpoints: execution-begin-point and execution-endpoint + program-state imaging/restoration.

We write code a little bit at a time anyways, so this solves a host of problems. This would be a nice feature alongside traditional, more well-known methods for debugging programs.

Intellitrace works by

Intellitrace works by recording everything, but it's not checkpointing. The problem with checkpointing systems is efficiency, as well as re-execution that must be deterministic (or at least match the non-determinism of the previous execution).
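The begin-point/end-point idea, and the determinism requirement, can be sketched as follows. This is a hypothetical toy, not any real system's mechanism: the state at the begin-point is imaged with a deep copy, and non-deterministic inputs are recorded once so that replay after a code edit matches the original execution.

```python
# Hypothetical sketch of begin-point/end-point debugging with
# state imaging and deterministic replay of recorded inputs.
import copy
import random

def run_segment(state, inputs, body):
    """Replay a code segment over a restored state image."""
    for x in inputs:
        body(state, x)
    return state

# Record non-determinism once, so replay matches the original run.
recorded_inputs = [random.randint(0, 9) for _ in range(5)]

checkpoint = {"sum": 0}              # state image at the begin-point

def body_v1(state, x):               # original code in the segment
    state["sum"] += x

def body_v2(state, x):               # the segment after a code edit
    state["sum"] += 2 * x

# Each replay restores the image, so edits never see stale state.
out1 = run_segment(copy.deepcopy(checkpoint), recorded_inputs, body_v1)
out2 = run_segment(copy.deepcopy(checkpoint), recorded_inputs, body_v2)
assert out2["sum"] == 2 * out1["sum"]   # replay is consistent
```

Even in this toy, the two costs named above are visible: the deep copy is the efficiency problem, and `recorded_inputs` is the stand-in for matching the previous execution's non-determinism.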