Future of Programming Videos, final program

The final program from the SPLASH Future of Programming workshop is up: lots of great videos, full of great ideas for the future of our field!

Contents with brief one liners (see the program page for more detailed abstracts):

Re: Shadershop

Shadershop, frankly, makes me mad. What is the freaking point if it isn't rendering the actual results in real time so I can see what it actually looks like?! Nice try but no cigar.

It does render the actual

It does render the actual results... See the third video here, which you can get working by going to the demo and hitting ctrl+3.

augh!

then the ui is killing me. or the instructions. or something. the rendered view should always be there from the get-go, not via some arcane insane keyboard shortcut. sheesh.

i'm just mad because this system is *genius* and i want to use it so much... and then it *utterly misses the ui/ux goal* in my mind with the one simple failure.

it is like the local patisserie that doesn't make the chocolate croissants before 9:30am. WTF!?!?!?!? you're a FRENCH BAKERY. in the USfreakingA. how can you NOT have CHOCOLATE CROISSANTs available EVERY FREAKING SECOND YOU ARE OPEN TO THE PUBLIC. this is BASIC STUFF here.

I'm feeling an Alanis

I'm feeling an Alanis Morissette song in this comment.

Re: Omniscient Debugging

I love that stuff, desperately want it, and sure wish it were de rigueur in languages.

This has come up several times (not many, though, I guess) in various systems: Java, C++, JavaScript, etc. But it never seems to get enough leverage or mindshare.

So what I want to know is, will somebody be able to make a generic toolkit that will help other people put this kind of functionality into their own language ecosystem?

Maybe I really should just go crawling back to MSFT, to use F# and Visual Studio.

Re: Céu

The synchronous execution model of Céu is based on the hypothesis that internal reactions run infinitely faster than the rate of external events. An internal reaction is the set of computations that execute when an external event occurs. Conceptually, a program takes no time to run a reaction (step 2) and is otherwise idle, awaiting the next event (step 3). In practice, if a new external input event occurs while a reaction chain is running (step 2), it is enqueued to run in the next reaction.
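
Concretely, the queuing discipline amounts to something like this rough Python sketch (illustrative only, not Céu; the event names are invented):

from collections import deque

pending = deque()  # external input events, in arrival order

def post(event):
    # An input that arrives while a reaction is running is enqueued and
    # handled in a later reaction, never nested inside the current one.
    pending.append(event)

def react(event):
    # Step 2: an internal reaction runs to completion, conceptually in
    # zero time.
    print("reacting to", event)
    if event == "A":
        post("B")  # occurs "during" the reaction: queued, not run now

post("A")
while pending:  # between reactions (step 3) the program is idle
    react(pending.popleft())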

That seems really rather specious to me. Especially for games. I mean, my horribly crappy under-powered laggy slow (cheap!) Android device daily reminds me of how bad a user experience that can be.

also

i worry that waking up everything and not having a way for one 'thread' to prevent another will lead to rat's-nest code. at least, i infer that from having heard about behavioral threads.

Synchronous concurrency is a

Synchronous concurrency is a big compromise, but it gives determinism and a simple, effective semantics for trail abortion (the "par/or").
Is your concern about real parallelism? (Céu also offers async/thread blocks to execute code in real threads with a share-nothing policy, but it is a very atypical use of the language.)
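
For readers who don't know Céu: par/or runs trails together and aborts the others when the first one terminates. A rough asyncio analogue of just that abort-on-first-completion shape (asyncio gives none of Céu's lockstep determinism):

import asyncio

async def trail_timeout():
    await asyncio.sleep(1.0)   # stand-in for awaiting a timer
    return "timeout"

async def trail_key():
    await asyncio.sleep(0.5)   # stand-in for awaiting a key press
    return "key pressed"

async def par_or(*trails):
    # When the first trail terminates, abort (cancel) the others.
    tasks = [asyncio.ensure_future(t) for t in trails]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return next(iter(done)).result()

print(asyncio.run(par_or(trail_timeout(), trail_key())))  # "key pressed"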

You can have some level

You can have some level of determinism without being synchronous. It requires some trade-offs, though: either no cycles or monotonic progression.

In theory we can statically

In theory we can statically infer "trails" that do not share memory and execute them asynchronously in parallel while preserving determinism. But our initial efforts show that this actually becomes slower in practice because of data locality (the memory is a contiguous sequential block, related to trail abortion). Does Glitch use multiple cores in its implementation?

Yes! The current prototype

Yes! The current prototype I'm working on supports full concurrent repair of tasks that share memory. It is speculative and works like transactions: if a task reads memory that is then written by another, the reading task is just rescheduled for repair. I had to get rid of all reader locks to accomplish this, leading to some very interesting relativistic data structures.
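
Mechanically, the invalidation rule is something like this toy sketch (all names hypothetical; the real implementation, with versioned memory and relativistic data structures, is far more involved):

# Each cell remembers its readers, and a write reschedules every task
# that read the now-stale value, instead of blocking the writer.
cells = {}      # cell name -> current value
readers = {}    # cell name -> set of tasks that read it
dirty = set()   # tasks rescheduled for repair

def read(task, name):
    readers.setdefault(name, set()).add(task)
    return cells.get(name)

def write(task, name, value):
    if cells.get(name) != value:
        cells[name] = value
        dirty.update(readers.get(name, set()) - {task})

write("t0", "x", 1)
read("t1", "x")      # t1 now depends on x
write("t0", "x", 2)  # t1's read is stale...
print(dirty)         # {'t1'}: rescheduled, not blocked or aborted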

Given Glitch's overheads (nowhere close to native, bookkeeping overhead, versioned memory), you can actually see a decent performance increase in many cases: 2.5X on some microbenchmarks given 5 or 6 hyperthreads on 4 actual cores that sport a total of 8 hyperthreads (using all hyperthreads is usually not a good idea); the microbenchmarks involve a lot of contention and are used primarily to fuzz-test correctness. Where you don't see a performance increase, it is because the speculation mostly fails, leading to some thrashing (though a correct result is still produced).

But note that one answer is produced only if the program has one answer. This is only guaranteed if the state relationships are acyclic or they increase monotonically (this makes sense to anyone who has worked on iterative algorithms). So consider:

a = 5          # default, low priority
b = 5          # default, low priority
if a < 10:     # higher priority, but unordered w.r.t. the next rule
  b = 10
if b < 10:     # higher priority, but unordered w.r.t. the previous rule
  a = 10

Assume the last two assignments have a higher priority than the first two but are otherwise unordered. A correct solution here is either a = 5, b = 10 or a = 10, b = 5, given the cyclic relationship between a and b as well as the fact that this isn't monotonic. The answer is then non-deterministic because the problem is.

STM

This sounds like software transactional memory. Does your implementation differ in any significant way from STM? I am interested in using STM for a parallel functional language to prevent garbage collector pauses.

Transactions and STM usually

Transactions and STM usually only support "abort"; there is no notion of consistency on which to base a "retry", so they just leave it up to the user. Glitch will replay everything until a well-defined notion of consistency is achieved. In that sense it more closely resembles FRP or SAC, and the same mechanisms that allow for optimistic parallelism also support incremental computation (so transactions, SAC, FRP, constraint solving...).
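
The "replay until consistent" flavor, as opposed to abort, can be sketched in a few lines (illustrative only; Glitch tracks fine-grained dependencies rather than blindly re-running every rule):

# Rules are re-run, never aborted, until a fixpoint is reached. This
# terminates here because the relationships are acyclic.
state = {"a": 1, "b": 0, "c": 0}
rules = [
    lambda s: s.update(b=s["a"] + 1),   # b := a + 1
    lambda s: s.update(c=s["b"] * 2),   # c := b * 2
]

def stabilize(state, rules):
    changed = True
    while changed:                      # replay until nothing changes
        changed = False
        for rule in rules:
            before = dict(state)
            rule(state)
            changed |= (state != before)

stabilize(state, rules)
print(state)                            # {'a': 1, 'b': 2, 'c': 4}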

Not sure if this would help out for GC (objects in Glitch are not collected until they stop being created, so GC as we know it isn't really necessary).

Composable Memory Transactions

I am referring to Composable Memory Transactions as implemented by Tim Harris et al.:
http://research.microsoft.com/pubs/67418/2005-ppopp-composable.pdf
which I now realise is different from the original STM work, in that it includes scanning the log and retrying.

Nice, but still quite

Nice, but still quite different. Glitch isn't structured at all; it just lets all threads go as fast as they want with no blocking and fixes the broken pieces via replay. It is designed to be low effort for the programmer.

No Blocking?

I am confused; I didn't see any blocking in the composable transactions, but it's possible I misunderstood. To me it looks like all threads proceed at full speed without blocking, and only if two transactions clash (try to write to the same memory) is one rolled back and retried later. This relies on the probability that the same transaction does not keep getting retried forever, but, in the same way that all the air molecules in a room don't jump into the same corner at the same time leaving a vacuum, that doesn't happen in practice.

This is just a quick grok,

This is just a quick grok, but in the abstract:

All the usual benefits of transactional memory are present (e.g. freedom from deadlock), but in addition we describe new modular forms of blocking and choice that have been inaccessible in earlier work.

And later:

Unlike most existing approaches, the programmer does not have to identify the condition under which the transaction can run to completion: retry can occur anywhere within the transaction, blocking it until an alternative execution path becomes possible.

Also, unlike a transaction system, Glitch doesn't roll back on failure. This is key: we only roll back after a retry has completed, and even then only for the effects that are no longer being performed (effects performed both before and after the retry are preserved). By not rolling back aggressively, we avoid rollback storms and have a story for incremental programming that doesn't involve memoization (tasks, unlike transactions, can be re-executed when what they depend on changes over time). On the other hand, cycles have to be dealt with.
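
Here is a toy rendering of that rollback rule (the replay/undo API is invented for illustration):

# After a task is re-executed, undo only the effects it no longer
# performs; effects present in both runs are left alone, so there is
# no rollback storm.
def undo(effect):
    print("undo", effect)

def replay(task, old_effects):
    new_effects = set(task())            # re-execution logs its effects
    for e in old_effects - new_effects:  # dropped effects get undone
        undo(e)
    return new_effects

log = replay(lambda: ["draw(1)", "draw(2)"], set())
log = replay(lambda: ["draw(2)", "draw(3)"], log)  # undoes only draw(1)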

Transactions

I can see you don't group operations into transactions. How do you determine which operations should be affected by writes from other threads? For example, sometimes I read a value, do some processing, and I would want to restart the processing from the beginning if anyone else changed the value. Other times I don't care if someone else changes the value; I want to carry on processing and write the answer back.

That is easy: all reads are

That is easy: all reads are affected by all writes in the normal case; this is fixed by the programming model. If you want to do a read that won't be affected by a future write, you have to use an event handler, where reads still depend on past writes, but not future ones.

This is the only model that makes sense in the presence of full-on live programming: if we make a code change that causes the whole history of the program to change (i.e., one that affects how past writes were done), then those reads in event handlers need to be replayed.

Interesting

So, to take the airline seat-booking example: when I read the seating plan, I drop an event listener on the seats and update the free-seat view in real time. When someone selects a seat, I mark it reserved, take payment, and either free or book it depending on success.

There is still a time window (after someone else has reserved the seat but before I have received the notification) in which the user can select an already-reserved seat. How do you deal with that?

If you don't allow the past

If you don't allow the past to change indefinitely (which is really only useful for live programming), there is a point in the program's execution where you know that nothing will change before time "t", at which point you can commit irrevocable operations (confirming a reservation, or launching a nuclear missile). This point can be identified when nothing in the dirty queue, and nothing being executed, is at or before it.
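
As a sketch (names made up), the check is just:

# An irrevocable effect at time t is safe once no dirty or currently
# executing task sits at or before t, i.e. nothing can still change
# the past that the effect depends on.
def safe_to_commit(t, dirty_times, executing_times):
    return all(time > t for time in dirty_times + executing_times)

print(safe_to_commit(5, dirty_times=[7, 9], executing_times=[6]))  # True
print(safe_to_commit(5, dirty_times=[3], executing_times=[]))      # False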

Queue Synchronisation.

Does this assume only a single CPU, or do you synchronise the access to the dirty queue?

There are lots of locks for

There are lots of locks for writing, just not for reading. But the queue is shared, and threads take work off of it when they have nothing to do. I tried pinning tasks to specific threads (so one dirty queue per thread), but that was too slow.

Games, maybe not, critical systems, sure

That seems really rather specious to me. Especially for games

Apart from the input-queueing thing, this is the synchronous execution model that underlies nearly all successful research in high-level programming languages for critical systems from the last thirty years. Languages following this model, such as Lustre, are now routinely used in critical industrial systems, including all the recent Airbus planes. It would be interesting to know whether this could be successfully applied to real-world interactive applications such as games.

good to hear

It is a good point. And, yes, probably most games ever written were single threaded. (That is less so in the last decade, but there were lots of games before then of course.)

So maybe? I think what bugs me is that the description makes it sound like the default approach is to never throw events away?

Like, it depends on the context of where the system finally runs. If it is competing on my memory-starved Android device with god-knows-how-many other things, then it is probably outside the envelope it was designed for.

Well, at least, if it wasn't designed in the first place to handle lag.

Excellent stuff, especially

Excellent stuff, especially Lamdu. I know I'm repeating my hobby horse, but it would be so cool if the loop were closed and the IDE made extensible from within the language. That would give you macros/DSLs done right: defining a new language construct amounts to defining a new UI element for the IDE. For example, to add regex literals to the language, you'd implement a UI for editing regexes and then add it to the IDE via a construct analogous to defmacro.
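
To make the shape concrete, here is an entirely hypothetical sketch in Python (def_ide_macro and the editor hook are invented):

# A "defmacro"-style form that registers an editor widget alongside
# the expansion, so adding a construct to the language also adds its
# UI to the IDE.
ide_editors = {}   # construct name -> editor factory (imagined IDE hook)

def def_ide_macro(name, expand, editor):
    ide_editors[name] = editor   # the IDE renders `name` nodes with this
    return expand                # the compiler uses this expansion

regex_literal = def_ide_macro(
    "regex-literal",
    expand=lambda pattern: ("call", "re.compile", pattern),
    editor=lambda node: "<interactive regex editor for %r>" % node,
)
print(regex_literal("a+b*"))                 # compiler side
print(ide_editors["regex-literal"]("a+b*"))  # IDE side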