## Programming with Managed Time

To appear in Onward! 2014; abstract:

Most languages expose to programmers the computer's ability to update memory at any time. But just as many languages now manage memory to unburden us from properly freeing memory, they should also manage time to unburden us from properly ordering updates to memory. Change then becomes human-comprehensible: writes are seen by all affected reads, easing many programming tasks related to initialization, reactivity, and concurrency. Perceptible steady feedback is also provided on how code edits affect program execution, as in live programming. We propose time management as a general language feature to relate prior work and guide research into this unexplored design space.

We introduce Glitch as a form of managed time that replays code for an appearance of simultaneous state updates, avoiding the need for manual order. The key to such replay reaching a consistent program state is the ability to reorder and rollback state updates as needed, restricting the imperative model but still being quite expressive. Glitch is fully live: program executions can be replayed in an IDE and are incrementally revised under arbitrary code changes.
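
The replay-until-consistent idea can be sketched in a few lines of Python (a toy illustration of the principle only, not Glitch's actual scheduler):

```python
# Toy model of replay-to-consistency (an illustration of the principle,
# not Glitch's implementation): tasks are re-executed until no write
# changes any cell, so every read ends up seeing every relevant write,
# regardless of the order the tasks were listed in.

def replay(tasks, cells, max_rounds=100):
    for _ in range(max_rounds):
        changed = False
        for task in tasks:
            for key, value in task(cells).items():
                if cells.get(key) != value:
                    cells[key] = value
                    changed = True
        if not changed:          # fixpoint: the state is consistent
            return cells
    raise RuntimeError("no consistent state reached (divergent tasks?)")

# 'reader' is listed first and initially sees no 'a' at all; replay
# re-runs it after 'writer' fills 'a' in, so no manual ordering is needed.
writer = lambda c: {"a": 5}
reader = lambda c: {"b": c.get("a", 0) + 1}
state = replay([reader, writer], {})
```

Real Glitch tracks dependencies so only affected tasks are re-run; the brute-force loop above just shows why order stops mattering.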

Written with Jonathan Edwards and heavily inspired by his blog post; there might have been a few LtU posts on this subject in the past also.

Also, if reading a paper isn't your thing, check out the essay + videos companion.


### Dynamic vs. static managed time

This work takes FRP in a very dynamic direction, going so far as rolling back imperative-ish updates in order to find a consistent update order. It looks pretty cool, but I worry about that huge level of dynamism. Do you have thoughts on restricting to some subset of expressions for which a static update traversal could be worked out? Do you have examples that you think would be hard for such an approach?

(And I'm using 'static' here somewhat loosely, since I want to include solutions that branch on run-time data, so long as the fact that it solves all constraints can be established statically.)

### This is not FRP, even if it

This is not FRP, even if it shares some similarities (simultaneous determinacy?). We don't find a consistent update order so much as we climb a hill with an allowance for going down (rolling back) before going up again. The fact that we don't do dataflow probably puts us far away from what would be considered even a descendant. Glitch really is about making control flow beautiful rather than abandoning it.

Dynamism is expensive and performance is a problem. When used as a library, this isn't a big deal since you can pay for it when you need it, but in YinYang you have to pay for it whether you need it or not.

I'm going to work on performance next, and I have some ideas on how to use staging to accomplish that without sacrificing dynamism. I think it's obvious that live programming requires complete dynamism (since if your code is mutable, anything is!), and I see that as Glitch's primary territory (since no other system even tries). Staging would be cheating of course (more static, but still dynamic), and the trick is to spend more time on what you know doesn't change often (e.g., code).

### Some remarks

I just read the paper (rather quickly, unfortunately: I didn't take the time to properly understand the fine details of phases for example), below are some comments in reading order. Overall I found it quite nice to read and interesting -- I liked the Coherence work before.

It is not clear to me why event handlers only see updates of the strictly-before states, and not the current state, as suggested by "including any performed by itself" on page 2. What would go wrong otherwise?

(After reading more) Maybe this distinction should only be brought up just before the "after" example.

I find the break control-flow really tricky. It is not clear to me why the widget.position = widget.position update would not also be reverted by the break -- while, if I understand correctly, the presence of the event handler on mouseUp is reverted.

Besides, the auto-update syntax for widget.position = widget.position seems too clever for its own good. I don't understand how (x = x) could not be a no-op. What about a freeze keyword, used as in freeze widget.position, to better express intent?

To try to understand break, I ended up rather naturally thinking about a construct for a named block that could be broken/reverted as a whole -- a more atomic notion than "break what's in the innermost 'after' block"

```
do follow:
  widget.position = pw + (MousePosition - pm)
  on widget.mouseUp:
    widget.position = widget.position  # freeze widget position
    break follow
```


I wonder whether this do construct may be interesting on its own.

"A location in YinYang" > why introduce the language name so late?

"suppose d becomes phase" -> typo, becomes false

"time-reactive version where their past histories..." verb?

A representative piece of code extracted from the Glitch-library-using C# codebase would be nice.

### Good feedback, let me try

Good feedback, let me try and respond.

> It is not clear to me why event handlers only see updates of the strictly-before states, and not the current state, as suggested by "including any performed by itself" on page 2. What would go wrong otherwise?

If this wasn't the case, the event couldn't reason about the context in which the event occurred without the possibility of destroying this context itself (or having another event destroy it). If you think about it, it's really the only arrangement that could possibly work with simultaneous execution.
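
A toy model of that arrangement (my own sketch, not Glitch code): give every handler the same strictly-before snapshot to read, and direct all writes into the next state.

```python
# Sketch of "reads see only the strictly-before state" (my own toy
# model, not Glitch's implementation): every handler in an epoch reads
# the same frozen snapshot, and all writes land in the next state, so
# no handler can destroy the context another handler is reading.

def run_epoch(prev, handlers):
    nxt = dict(prev)
    for handler in handlers:
        nxt.update(handler(prev))   # reads from prev, writes to nxt
    return nxt

# Both handlers observe count == 10; neither sees the other's write,
# nor its own, which is what makes simultaneous execution coherent.
bump = lambda s: {"count": s["count"] + 1}
mirror = lambda s: {"doubled": s["count"] * 2}
state = run_epoch({"count": 10}, [bump, mirror])
```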

> I find the break control-flow really tricky. It is not clear to me why the widget.position = widget.position update would not also be reverted by the break -- while, if I understand correctly, the presence of the event handler on mouseUp is reverted.

If w.mouseDown occurs at t0 and w.mouseUp occurs at t1, "break" limits the extent of the after block from t0 (when it starts) until t1 (when it stops). All of the after block's effects will be reverted in the epoch that begins after t1, but they still exist from t0 to t1. This includes uninstalling the mouseUp event handler so it doesn't trigger after t1, but the event that was already handled AT t1 exists just before "after t1." The event handler isn't committing suicide by calling break.

> Besides, the auto-update syntax for widget.position = widget.position seems too clever for its own good. I don't understand how (x = x) could not be a no-op. What about a freeze keyword, used as in freeze widget.position, to better express intent?

The event handler is discrete, reading before and writing after its occurrence, so x = x makes sense in the presence of a break that will otherwise stop a write after the occurrence. The problem with "freeze" is that no other semantics for an update are possible in an event handler! I toyed with the idea of combining on and after blocks, where freeze would make sense, BUT this was not only confusing but unworkable from an implementation point of view (time leaks are a pain in the arse).

> I wonder whether this do construct may be interesting on its own.

After blocks are only interesting because they have a starting point (after an event occurrence) as well as a possible end point (after a later event occurrence). When does "do" begin? It just can't hang out there without a beginning, while in normal non-handler land you can always use plain if statements to guard whether some code runs or not.

Break is used to stop something that was started discretely. So you popped open a window after an event? Well... how would you close it? Those are the kinds of questions break answers.

Labelled after blocks would allow breaking out through multiple levels of after blocks, and there is nothing in Glitch that would make that troublesome.

> "A location in YinYang" -- why introduce the language name so late?

To keep the focus on Glitch (the runtime that enables managed time) and not the language that happens to be built on top of that.

> A representative piece of code extracted from the Glitch-library-using C# codebase would be nice.

I tried that before but it was a very boring piece of a recursive descent parser that happened to do type checking at the same time (though I've almost entirely thrown off static typing, there are still some lookups being done).

### Two minor suggestions

I enjoyed your paper. It'll be interesting to see how far you can push this approach.

Couple of minor suggestions: you might want to change the font/layout for Figure 3, so that it doesn't look so much like Glitch (or YinYang) code. At least, if I've understood things correctly, this figure should be read as pseudocode in a traditional imperative language, not in a language with Glitch-like update semantics.

You might also consider citing self-adjusting computation (SAC), in particular the work on imperative SAC. It also uses dependency-tracking and replay in an effectful setting.

Good point about the task code -- yes, it is imperative (C#, but my Python macros are so convenient, and C# works badly as pseudocode).

I was debating whether to include SAC or not. I decided not to because it seems entirely performance oriented and doesn't really change the programming model much (take the normal model and add changeable inputs + memo). There was also a lot of incremental programming work and systems replay work that was tangentially related but similarly didn't fit into the paper's design theme. When I try to write more about the implementation (and get more serious about performance), it will be the right time to really go into that work. See the limitations of FRP thread for more discussion.

### The preliminary reviews look

The preliminary reviews look good!

### Managing memory vs managing time

I've been thinking for a while about the comparison made here between memory management and time management. It doesn't quite ring true to me. Garbage collection gets its feel from the fact that the programmer doesn't (well, doesn't explicitly) deal with memory management at all, eliminating a major cross-cutting concern that would otherwise greatly complicate programming. This becomes possible because the program, in the normal conduct of its duties, already does stuff (namely, referencing one structure from another) from which the system can work out how to manage memory.

The equivalent for time would seem to be not specifying time at all, yet something in the normal conduct of the program allows the system to work out how to manage time. This doesn't really work, however, because time is often part of what the programmer wants to specify and, separately from that, often part of how the programmer wants to describe what to do. This is why programming languages that try to explicitly omit time <cough>Haskell</cough> fail miserably.

Now, it appears to me Glitch doesn't go to that extreme, leaving some explicit time. But that broad class of strategies is always going to wrestle with the issue of which aspects of time should be explicit and how best to express them; so it'll never have the same feel as something like garbage collection, where one (mostly) just doesn't think about it.

### I was with you up to the

I was with you up to the point where you characterized Haskell as trying to explicitly omit time. Monads can be seen as an attempt to make update (time) explicit. Monads aren't the ideal solution to all problems in this area, but if you're saying that attempting to model time explicitly is the problem, I'll take the other side of that position.

### Well, Haskell did try to

Well, Haskell did try to omit time, and it did fail miserably as demonstrated by its subsequent introduction of monads. I've heard Matthias Felleisen listened to a talk on the introduction of monads into Haskell and, during questions/comments after the talk, stood up and said "Thank you for declaring surrender."

Now, my assessment of Haskell's introduction of monads is a separate issue; I think that failed miserably too, but for different reasons. I think if you're going to make time explicit, monads are a terrible way to do it; whether there's any better way is yet another separate question. But in any case, any system that makes time explicit is not going to have the same sort of feel as garbage collection, which causes memory management to disappear into the background. Time seems uninterested in disappearing into the background.

### Well, Haskell did try to

> Well, Haskell did try to omit time, and it did fail miserably as demonstrated by its subsequent introduction of monads

I think it was more of a "let's see how far we can get without imperative style" than a failed attempt at anything in particular. But if I had known that by "Haskell" you just meant "early Haskell with no encoding of imperative programming" then I wouldn't have posted :).

> I think if you're going to make time explicit, monads are a terrible way to do it

I don't think monads are a particularly great way to do it, but I personally think you can start with a language with implicit imperative time, move that notion into a single monad, and be better off for it. So terrible maybe, but better than the corresponding implicit version.

> But in any case, any system that makes time explicit is not going to have the same sort of feel as garbage collection

Yeah, I agree.

### I'm not so sure the opposite

I'm not so sure the opposite of "explicit time" should really be "implicit time". Perhaps "pervasive time", or "ubiquitous time"?

We think of Haskell-with-monads differently, I guess. I still think of monads as tacked onto Haskell as a sort of pretense — though it'd be hard to say whether the pretense is a language without time pretending to have time, or a language with time pretending to be without. Either way, I don't like monads intruding whatsoever on the programmer's experience; if some theoretician somewhere wants to look at things in terms of monads, that's their problem, but as soon as the programmer gets involved I see something fundamentally wrong with the situation — and afaics the Haskell folks want the programmer to be the mathematician who looks at it that way.

### Glitch supports "on event"

Glitch supports "on event" like GC supports "new": some way of explicitly allocating time/space is still needed. We are also given ways of relating computations via code, just as we are given ways of relating state. What Glitch does is automatically keep computations consistent with a changing timeline (and de-allocate them if they aren't done anymore), and in that sense it is very analogous to GC.

Also, Glitch omits "computer time" but not real time. The two are quite different: computer time arises because computation takes time to execute; if we imagine an infinitely fast computer, it would disappear. Real time does not, however: the user still pushes a button at some point in real time. Even a batch program is said to execute between two points of real time in Glitch (so consistency makes sense).

### i don't get the analogy

> What glitch does is automatically keep computations consistent (and de-allocates them if they aren't done anymore) with a changing timeline, and in that sense, it is very analogous to GC.

to me that really reads like glitch is on top of a gc, using a gc-esque module underneath; not directly analogous to a gc. i am under the impression that gc's don't generally consume the amount of cpu cycles that i assume are burned by glitch (in a good way) keeping the computations consistent with the timeline?

### Well, if you want to go

Well, if you want to go there, Glitch is more like reference counting, as it traces writes and reads eagerly.

### Reviews

I thought it would be good to post the reviews while I'm working on the final version. Below:

onward14 Review #17A
Overall merit: A. Good paper, I will champion it

===== Paper summary =====

The paper puts forth the idea of "managed time": just like modern programming languages employ automatic memory management to free the programmer of the details of allocating and deallocating memory, so a language with managed time should free the programmer of having to reason about the order of updates to state. The authors present Glitch, a programming model (and associated language) as one possible model of managed time. Time in a Glitch program proceeds as a series of epochs, and any state updates made during a single epoch are considered to occur simultaneously. The Glitch runtime upholds this illusion by re-executing (replaying) program fragments (called "tasks") until all dependencies between updates are satisfied. The paper ends by surveying other approaches that similarly aim to manage time.

Points in favor:
- It describes a general research area and contains a clear and forceful message to go onward and explore this new area.

- The technical contribution, Glitch, is worked out in sufficient detail to see its potential, with lots of open issues still to be worked out.

- The state of the art is well-covered, and the experiences described in the paper show that the authors have a good grasp of existing tools and models, including their limitations.

- Well-written paper.

Points against:

- Section 3 (describing the Glitch implementation) is hard to follow and could definitely benefit from some Figures that depict program timelines, in particular the breakdown of time epochs into stages, and the breakdown of tasks into segments. I'm thinking of a visual notation such as the diagrams used in the work on Concurrent Revisions.

Another way of saying this: Figure 3 shows me the statics while I would really like to gain a better insight into the dynamics of your update process on some concrete examples.

- While Glitch clearly has its own merits compared to FRP systems, the comparison with FRP could benefit from some concrete examples that show how a particular program is expressed in Glitch versus FRP. This would add more weight to statements such as the fact that Glitch programs, being closer to traditional imperative programs, would be easier to debug and have better support for procedural abstraction. Being familiar with FRP languages, these benefits of Glitch over FRP were not immediately apparent to me from your description.

- Delta w.r.t. some related work can be improved. For instance, it wasn't crystal-clear to me what the salient differences are between Glitch and Coherence. Also, you seem to have overlooked Trellis? (see below).

- Abstract:

After reading just the abstract, it's unclear what Glitch is precisely: is it a model, is it a running system ("Glitch is fully live")? Later it becomes clear that Glitch is a model, like e.g. Software Transactional Memory, with YinYang being a concrete language making use of the model.

- Section 2:

The Fahrenheit/Celcius example: what happens if a cell, e.g. _value, is not assigned a value yet? In the example, the call to SetFahrenheit causes the cell to be properly initialized. But if this line were omitted, would the expression tem.Celcius raise an exception? Or is a cell initialized with a special \bottom value such that any operation applied to \bottom yields \bottom, causing the example to run fine?

"Procedural abstraction of state and their updates is a significant difference between Glitch and dataflow-centric approaches like FRP" -- please elaborate? Perhaps we mean something else by "procedural abstraction".

I understand this as the ability to put code in a function and to replace the original code by a function call. FRP languages support procedural abstraction in this way.

"Bags [...] are [...] unordered set-like collections" I thought bags were allowed to have duplicates, unlike sets. In what way are they set-like then?

"The statements in the body of a trait [...] execute simultaneously with the constructor that extends the trait"

This is clever. It seems to avoid the issues with prematurely exposing an "uninitialized" object that can occur when passing "this" as a parameter to other methods inside Java constructors.

page 3, Particle example. Three comments:

- It took me a while to realize that position is a 2D tuple while the code seems to be manipulating a plain number. I guess in your language n * (x,y) = (nx, ny), transparently.

- What's the initial position of the particles? Is the cell position initialized to an appropriate default such as (0,0)? In the YouTube demo, I saw calls to a.initP = (2,2).

- on Tick: fire a.step; fire b.step => are a.step and b.step fired as part of the *same* epoch or does each call to "fire" generate its *own* epoch?

Figure 3: is "updates" a global variable or an instance variable to be defined by whatever class imports the Task trait? If global, it would be nice to declare and initialize it at the top.

p.4, the discussion about how the breakdown of a program into tasks affects performance reminded me of the performance implications of transaction boundaries in Software Transactional Memory: smaller transactions lead to less chances of conflict, but introduce more management overhead.

p.4, "Some state updates like cell assignment are made commutative through do-once semantics that are enforced dynamically"
I did not fully understand what you mean by "do-once semantics". I *think* that what you are saying is that the first update to a cell during an epoch "wins" and is remembered, while any subsequent updates to the same cell in the same epoch are treated as runtime errors.

p.4, "[...] otherwise attribution would be non-deterministic"
Unless you attribute the error to *all* conflicting locations. If two locations both update the same cell, from a UX perspective, it may be useful to flag both locations. There doesn't seem to be a good reason for why the first update in the arbitrary total ordering would be ok and subsequent ones would be erroneous.
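
The do-once reading in the two comments above can be made concrete with a small sketch (my interpretation, including the reviewer's both-locations attribution, not the paper's actual semantics):

```python
# Sketch of do-once cell assignment as I read it (an interpretation,
# not the paper's code): the first write in an epoch wins, a replayed
# write from the same location commutes, and a conflicting write from
# a different location flags *both* locations, per the reviewer's
# suggestion, instead of blaming only the later one.

class Cell:
    def __init__(self):
        self.value = None
        self.writer = None       # location that wrote this epoch
        self.conflicts = set()   # every location involved in a conflict

    def assign(self, value, location):
        if self.writer is None or self.writer == location:
            self.value, self.writer = value, location
        else:
            self.conflicts |= {self.writer, location}

c = Cell()
c.assign(1, "line 10")
c.assign(1, "line 10")   # replayed identical write: no error
c.assign(2, "line 20")   # conflict: attributed to both locations
```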

p.5, "Change as Ripples in a Pond"
"meaning change is [...] uniform"
I did not understand what it means for change to be uniform. Does it mean that given two cells x and y, the chance that any of them will be updated is the same? Does it mean that given a cell x, the rate of change of that cell is roughly constant?

p.6, "Many classic algorithms depend upon multiple assignment"
The term "multiple assignment" threw me off, as I thought you meant syntax of the form:

x, y = 1, 2

Presumably, you instead meant multiple assignments to the same variable *during the same epoch*, e.g. x = 1; x = 2. Perhaps this can be stated more explicitly.

p.6: the use of a value that means "not yet" to implement asynchronous operations is intriguing. However, it is not clear from the text how client code should deal with such a value, so it becomes hard to compare against code that would use futures or callbacks. For instance, must code "guard" against such a value, by writing:

if data != NotYet:
# put code that needs data here

Or is a "not yet" value something like the \bottom value I mentioned earlier, causing any operation that uses it to return "not yet" itself, so that the entire program computes "not yet" until the data changes?
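
That second, bottom-like reading can be sketched directly (an assumption about the semantics, not taken from the paper): a sentinel that absorbs operations, so client code needs no guard at all.

```python
# Sketch of the bottom-like reading (an assumption, not the paper's
# stated semantics): NotYet absorbs arithmetic, so anything computed
# from pending data is itself "not yet" until the real value arrives,
# and no explicit guard is needed in client code.

class NotYet:
    def _absorb(self, other):
        return self
    __add__ = __radd__ = __mul__ = __rmul__ = _absorb

NOT_YET = NotYet()

def fetch(ready):
    return 21 if ready else NOT_YET   # stand-in for an async read

pending = fetch(ready=False) * 2      # no guard: result is still NotYet
done = fetch(ready=True) * 2          # data arrived: plain arithmetic
```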

Section 5 on related work:
"CALM consistency": perhaps you should explain what CALM stands for (consistency as logical monotonicity).

p.8 Virtual Time:
Re. the analogy between managed memory and time: Dan Grossman's 2007 essay on "The Transactional Memory / Garbage Collection Analogy" makes many of the same points. He compares STM to memory management, which, like Glitch, is all about coordinating state updates.

Another related piece of work which makes a very similar analogy is Trellis, a library for Python:

https://pypi.python.org/pypi/Trellis

From that website:
"Unfortunately, explicit callback management is to event-driven programming what explicit memory management is to most other kinds of programming: a tedious hassle and a significant source of unnecessary bugs. [...] The Trellis solves all of these problems by introducing automatic callback management, in much the same way that Python does automatic memory management."

Not only does Trellis have shared goals, I see many similarities between Glitch and Trellis. For instance, Trellis offers the same illusion of "simultaneous" updates, also allows cycles between cells, and also tracks dependencies between cells at runtime. Even though I don't have the impression that Trellis is a widely used or known library, given these similarities, I believe adding a comparison between Glitch and Trellis is a must.

Editorial issues, spelling, typos:

p.4 : "Commutativity is a strong restriction: [...] while operations like adding to a bag are {NOT?} commutative if remove operations are not supported"

p.5 : "Now suppose d becomes {false}"

Writing quality: 4. Well-written

===========================================================================
onward14 Review #17B
Updated 5 May 2014 12:09:52pm EDT
---------------------------------------------------------------------------
Paper #17: Programming with Managed Time
---------------------------------------------------------------------------

Overall merit: B. OK paper, but I will not champion it
Reviewer expertise: 2. I have passing familiarity with this area

===== Paper summary =====

This paper presents an approach aimed at "managing time": state updates are triggered apparently simultaneously, and recorded so that states can be reversed.

This paper is decidedly Onward-y. It is provocative in places and unclear in others. The writing is alternately over the top ("bleak experience", "inhumanly complex") and impenetrable. The authors should clearly delineate their contributions over previous work early in the paper; the differences between this work and FRP, FlapJax, and FrTime are not explained as clearly as they should be. It is unclear in the extreme that this can ever work - for example, it seems like space consumption must necessarily grow more or less without bound, and that replay will unavoidably impose cripplingly high overhead. That said, I'd like to see this paper in the program.

Writing quality: 2. Needs improvement

===========================================================================
onward14 Review #17C
Updated 5 May 2014 6:54:54pm EDT
---------------------------------------------------------------------------
Paper #17: Programming with Managed Time
---------------------------------------------------------------------------

Overall merit: A. Good paper, I will champion it
Reviewer expertise: 3. I know the material, but am not an expert

===== Paper summary =====

This paper introduces Glitch, a new design point in language support for managed time. The goal is to retain much of the imperative programming model; the underlying mechanism is replay, which provides users the illusion of simultaneous updates, etc. Glitch has been realized for C# and illustrated with several demos to show its support for live programming.

I feel that the work is very well suited for a venue like Onward!. The paper articulates a new, interesting direction of work supported by good preliminary results.

The paper is well written and clearly presented. Its pro and cons, as well as how it relates to other alternative approaches for supporting managed time, all seem to be nicely delineated.

Conceptually, Glitch appears to be related to reversible debuggers, but is more general. It would seem helpful to add some discussions on this connection.

A few minor typos:
p4: "bag b will not contains x ..." => contain
p5: "Now suppose d becomes phase and ..." => "... becomes false ..."
p5: "... and like them must be debugged by the programmer.": wrong grammar
p7: "... has it's own time ..." => its

Writing quality: 4. Well-written

### Thanks for sharing. Keep up

Thanks for sharing. Keep up the good work, Sean. :)

### Updated paper/abstract to camera copy

Well, almost camera copy at least.

### web essay!

Complete with videos as before.

### Come on, no feedback at all?

Come on, no feedback at all?

### The presentation format is

The presentation format is nice. I still don't like the explanation of your semantics as finding a stable imperative execution, since imperative programs are inherently non-commutative. But I think the style of programming you end up with has some nice features to it. I wonder if you could recast some of it into constraints (x.insert(y) gets translated into a constraint that the value x contains the value y, for example).

### not idempotent

Constraints tend to be idempotent, which makes them easier to work with than something like the sum x; x += 1 example.

I suppose if you represented these constraints within some sort of uniqueness-monad (such that you can obtain a unique value as an ambient authority) it would fit pretty well. A simple sum can be modeled on a column for a relation keyed by unique values.

Most state updates aren't idempotent and are instead "memoized": on replay we find that we are already doing this, so we just sustain it. I like to use the term "semi-imperative" to mean imperative + commutative.

Constraints are merely state updates, right? I mean, if you look at the physics examples, that is essentially what "force += ..." is doing. The problem with going the other way is that constraints don't seek to be expressive enough to deal elegantly with priority or identity.
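
The "memoized, just sustain it" behavior of something like force += ... can be sketched like this (my interpretation of the mechanism, not the actual Glitch library):

```python
# Sketch of "semi-imperative" accumulation under replay (my reading of
# the mechanism, not the Glitch library): each += is keyed by its
# execution site, so a replayed site sustains its old contribution
# instead of adding it again, and a site that stops executing after a
# code edit has its contribution rolled back.

class Sum:
    def __init__(self):
        self.contribs = {}                  # site -> contribution

    def add(self, site, amount):
        self.contribs[site] = amount        # replay overwrites, not adds

    def retract(self, site):
        self.contribs.pop(site, None)       # site no longer executes

    @property
    def value(self):
        return sum(self.contribs.values())

force = Sum()
force.add("gravity", 9)
force.add("gravity", 9)   # replayed: sustained rather than doubled
force.add("wind", 3)
force.retract("wind")     # the wind branch vanished after a code edit
```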

### Nice format and examples

I really like the exposition format and the examples. The content doesn't surprise me at this point, so I don't have much else to say.

I think you'll need to ask for a fresh pair of eyes to find issues of obscurity.

### longer timelines

Have you considered a logarithmic history model that keeps older frames around by losing intermediate frames? This is related to the 'tower of Hanoi backup' model, and can be generalized deterministically using multiple ring buffers, or probabilistically using a single buffer with pseudorandom Poisson elimination of old frames [1][2][3]. I think a logarithmic history would serve PX/UX much better in this role than a flat 100-frame ring buffer.
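
The deterministic multi-ring-buffer variant might look like this (a sketch with invented parameters, not any of the cited schemes verbatim):

```python
# Sketch of a deterministic logarithmic history (invented parameters,
# not any cited scheme verbatim): the newest `base` frames are all
# kept, and each doubling of age halves the density of kept frames,
# so n frames of history retain only O(log n) entries.

def keep(frame, now, base=4):
    age = now - frame
    if age < base:
        return True                                  # youngest ring: dense
    k = age.bit_length() - base.bit_length() + 1     # ring index by age
    return frame % (2 ** k) == 0                     # stride doubles per ring

retained = [f for f in range(64) if keep(f, now=64)]
# dense near the present, increasingly sparse back toward frame 0
```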

### The current prototype can't

The current prototype can't do that yet. The problem is that the frames are always in sync, and this would require some frames in between others with missing re-computable information. At least with a 100-frame cutoff, we can just say everything before it can't be replayed (and even that was super hard to implement!).

### How does this compare with extempore?

Nice! How does this compare with live programming languages like Extempore?

### Live coding vs. live programming

I'm using live programming as coined in Chris Hancock's 2003 thesis; somewhere down the line the term was mixed up to mean the same thing as live coding, and after much arguing with Alex McLean I've come to some sort of realization on what the different meanings are.

Extempore is a live coding environment, built with the philosophy of improvised (and sometimes/often performed) programming. You build your program "live" while it is running, perhaps in front of an audience. YinYang is a live programming environment, designed to provide execution feedback to the developer while they are writing code. The differences in philosophy are huge even if both involve "live" feedback: live programming is oriented around better program debugging and can include features like time travel; live coding is oriented around changing a running system, where time travel doesn't make sense (since the program is running for real, not just being debugged).

Imagine if you were trying to prevent a nuclear plant from melting down by changing its code in real time, you would really want a live coding environment at that point. But if instead you were trying to develop better nuclear plant software tested via simulation, a live programming environment might make more sense.

And of course live programming is just a word ascribed with arbitrary meaning, but it's useful to look at what the different meanings are.

### Simulation time vs real time

Thanks. Yes, I had meant to write live coding. Sorry.

I wasn't thinking of comparing their purpose but more how time is managed/expressed in each language. Sounds like YinYang operates in simulation time, and so you have much more flexibility than a real-time system. But the question is how much support is in the language itself as opposed to tools. Time travel is implemented by the environment, right? Even in a "real time" system you can fake out the clock, and if you allow that, maybe you can potentially use an environment like YinYang....

The "on event" construct seems very much like Verilog's @event construct. I don't see anything special about it -- as an example, in my hypothetical language I can simply provide a way to create real-time streams (return 1 byte every 1/44100 second) and condition some computation on reading from it -- similar to how you'd recompute what gets displayed on the screen every 1/60th of a second. But if the language supports a notion of time, it would have to provide a real-time guarantee (I guess Extempore must provide such a guarantee). A simulator doesn't have to work as hard.

But I can see value in a mixed-mode environment. You may for instance design your fancy reactor software using a live feedback system, but you may also want to run "what if" scenarios once it is operational. Basically a debugger is nothing but a visualizer, so why shouldn't it be used in production?

We once built a simulator where the simulated network switch used simulation time but we also provided a way to run real network traffic through it (for example, ssh-ing through the simulated switch). It was very handy. In networking it is also not unusual to capture a trace of packets and replay them later. If a YinYang-like environment can somehow be used to improve debuggability at runtime, that'd be great.

### This isn't just simulation,

This isn't just simulation; a program with YinYang can interact with the outside world (you just can't undo any operation that leaves the computer, obviously). But changing the past (by changing code) becomes problematic in that case; you can't really allow it. Glitch can also allow events to be processed out of order, but only if undoable operations are held in uncommitted states until they are known to really be done.

Glitch (the programming model for YinYang) has an explicit notion of time via events, and so can provide a notion of when a frame is consistent and do things like time travel. I believe most live coding systems are more in the "now", achieving reactivity via a direct while loop; Glitch uses replay.

The point of "on event" is not so much that event handling is better in YinYang; it is that it can be done at all given the continuous execution semantics of code outside the event handler. Real programs simply need both continuous and discrete control flow abstractions! (And even continuous-after-discrete control flow abstractions.)

### re: real time and live coding

> if the language supports a notion of time, would have to provide a real time guarantee

Most live coding environments do not provide any hard real-time guarantees. Indeed, most of them use Turing complete languages that can diverge if the programmer is not careful, and lack any analysis for hard real-time properties.

I haven't read much about Extempore's xtlang, but it seems to have the full power of a Scheme. Extempore's DSL for digital signal processing might be real time, though.

### Talk given, the reception

Talk given, the reception was very positive.