2009 Lang.NET

2009 Lang.NET videos are available here

Some of the speakers include:
Lars Bak (Google Chrome Javascript VM)
Gilad Bracha
Anders Hejlsberg
Erik Meijer
John Rose (JVM)
Philip Wadler

Karl Prosser, Building DSLs

Karl Prosser, Building DSLs and language enhancements with PowerShell, sounds interesting. Haven't watched it yet.

Videos are Windows-only (Silverlight 2.0 apparently)

Why aren't they posted in an open format?

Silverlight client is cross-platform

You can install Silverlight plugin on linux or Mac.

Moonlight doesn't have

Moonlight doesn't have support for Silverlight 2.0.

Are you sure about this?

Are you sure about this? Miguel mentioned that Moonlight's security sandbox was close to completion, which was the major issue preventing its general Novell-sponsored release.

It's annoying, but you can

It's annoying, but you can download the video files here:

Lang.NET 2009 on Channel 9

This is not spam... There will be some very interesting discussions on Channel 9 with Gilad Bracha, Anders Hejlsberg, Erik Meijer, Lars Bak, Emmanuel Stapf, Philip Wadler, John Rose, Martin Fowler (from DSL DevCon 2009), and more. Over time, these filmed discussions will appear here. Currently, Gilad and Anders discussing language design is airing.

They will also play in Safari on the Mac

In case you have a Mac, installing the Silverlight player and watching should work. (It worked for me.)


Interesting, I had never come across Irony.NET before, so I was writing my own left-recursive parser combinator library for C# (still incomplete), and am currently working on type-safe expression trees, so there seems to be a lot of overlap with Irony. Irony's example grammar is a little more verbose than I'd like, but it seems to support some nice features, like operator precedence and associativity.

Erik Meijer's talk was good: events, enumerators, and behaviours

He raised a good point about events and behaviours: basically, events are the dual of ordinary enumerators/iterators, and a special abstraction for continuous functions, i.e. behaviours, is thus simply unnecessary. Behaviours == enumerators. I haven't seen this described elsewhere, but it makes perfect sense.
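Meijer's duality can be sketched with two tiny interfaces (my own minimal encoding; Rx's IObservable/IObserver are the real .NET versions). Reverse the direction of the calls in the pull interface and you get the push one:

```typescript
// Pull: the consumer asks for each element.
interface Enumerator<T> {
  moveNext(): boolean;   // "is there another element?"
  current(): T;          // consumer pulls the element out
}
// Push: the producer notifies the consumer of each element.
interface Observer<T> {
  onNext(value: T): void;  // producer pushes the element in
  onDone(): void;          // producer signals the end
}

// The same finite sequence, exposed both ways:
function pullFrom<T>(xs: T[]): Enumerator<T> {
  let i = -1;
  return { moveNext: () => ++i < xs.length, current: () => xs[i] };
}
function pushTo<T>(xs: T[], obs: Observer<T>): void {
  for (const x of xs) obs.onNext(x);
  obs.onDone();
}
```

In this sense events (push) really are enumerators (pull) with the arrows flipped: the data is the same, only who initiates each transfer changes.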

Another way I have thought

Another way I have thought about it is in terms of push/pull.

Hmmm, I didn't see his talk

Hmmm, I didn't see his talk but I've seen the slides, maybe I haven't really absorbed this work yet.

To me, events are like enumerators/iterators: you are exposed to retrieving each element, though getting the element is convenient. Behaviors are more like comprehensions, where all that plumbing is hidden from you. The advantage of a behavior over an event stream is that the behavior is logically one value that just happens to change over time, so you can operate on it as if it were one value; e.g., if (a) b = 42. With an event stream, you have to manage everything yourself.

I think the distinction between events and behaviors goes beyond push/pull; behaviors are simply higher-level abstractions that hide more plumbing. In a pure declarative system, you wouldn't have events, as they expose you to discrete points in time (there is a notion of a "now" context when the event occurs), which you could not reason about in a timeless system (no notion of now). With a behavior, you don't see the changes, so you aren't exposed to "now".
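The "one value that happens to change over time" view can be sketched with a toy behavior type (my own minimal encoding, not WPF's or any real FRP library's API). The rule is stated once, over the value, and the plumbing of individual changes stays hidden:

```typescript
// A toy Behavior: holds one current value, notifies derived behaviors on change.
class Behavior<T> {
  private subs: ((v: T) => void)[] = [];
  constructor(private value: T) {}
  now(): T { return this.value; }
  set(v: T): void { this.value = v; this.subs.forEach(f => f(v)); }
  // Derive a new behavior that always reflects f applied to this one.
  map<U>(f: (v: T) => U): Behavior<U> {
    const out = new Behavior(f(this.value));
    this.subs.push(v => out.set(f(v)));
    return out;
  }
}

// "b is 42 whenever a is true", stated once as a relation over values:
const a = new Behavior(false);
const b = a.map(v => (v ? 42 : 0));
a.set(true);   // b.now() is now 42, with no per-event handler code
```

With a raw event stream you would instead subscribe to each change of `a` and assign `b` yourself, which is exactly the plumbing the behavior hides.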

Behaviors are more like

Behaviors are more like comprehensions, where all that plumbing is hidden from you.

The LINQ interface over IEnumerable sounds like exactly this. If I understand correctly, behaviours and events are respectively discrete and continuous functions of time. An efficient realization of a continuous function is simply the enumerator interface since no waiting is required. An efficient realization of a discrete function requires an asynchronous semantics, aka push. What else is missing?
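One way to make the discrete/continuous split concrete (a sketch; the types and names are mine): a behaviour is a total function of time that you can sample on demand, while an event source only yields values at discrete moments, so it has to call you:

```typescript
// A behaviour is defined at every instant: sample it whenever you like (pull).
type Behaviour<T> = (t: number) => T;
// An event source fires at discrete instants: you register and wait (push).
type EventSource<T> = (cb: (t: number, v: T) => void) => void;

const position: Behaviour<number> = t => 3 * t + 1;   // continuous: no waiting needed
const clicks: EventSource<string> = cb => {           // discrete: producer decides when
  cb(0.5, "click");
  cb(1.2, "click");
};
```

Sampling `position` needs nothing beyond an enumerator-style interface, since the answer is always available; `clicks` cannot be pulled without blocking, which is why a discrete function of time pushes instead.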

I'm not sure what you mean

I'm not sure what you mean by the latter, 'requires an async semantics aka push'. What about something like Esterel's compilation strategy? Similarly, for the functional version, our favorite FRP. Both take advantage of the synchronous nature of typical event systems. Maybe I misunderstood?

We don't want to block

We don't want to block waiting for events, we want events to notify us when they've changed. FrTime uses such a push model for efficient event update.

You can use a push

You can use a push implementation for a synchronous semantics. FrTime is actually a great example! Subexpressions push their values, but expressions wait, based on the topological order of their subexpressions (really achieved through the scheduler), so it still gives an orderly notion of timesteps and orderly updates (synchronous reactions).
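A minimal sketch of that scheduling idea (my own toy encoding, not FrTime's actual implementation): changes are pushed, but within a timestep the scheduler applies recomputations in dependency (rank) order, so a node never reads a stale input:

```typescript
// Each node knows its topological rank and how to recompute itself.
type Node = { rank: number; recompute: () => void };

class Scheduler {
  private dirty: Node[] = [];
  mark(n: Node): void { this.dirty.push(n); }
  step(): void {
    // Update in rank order: a node runs only after all of its inputs have.
    this.dirty.sort((x, y) => x.rank - y.rank);
    for (const n of this.dirty) n.recompute();
    this.dirty = [];
  }
}

// Example: b depends on a; c depends on both a and b.
let a = 1, b = a + 1, c = a + b;
const sched = new Scheduler();
const nb: Node = { rank: 1, recompute: () => { b = a + 1; } };
const nc: Node = { rank: 2, recompute: () => { c = a + b; } };

a = 10;
sched.mark(nc); sched.mark(nb);  // marked out of order on purpose
sched.step();                     // nb runs first, so nc sees the fresh b
```

Without the ordering, `c` could momentarily compute against the old `b` (a "glitch"); with it, each timestep behaves like one synchronous reaction.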

I'd actually argue this isn't very asynchronous; being able to decouple bigger chunks is important for, say, parallelization. I've played with isolating bigger chunks, but it's hard to isolate a sufficiently interesting loose coupling.

In WPF databinding model,

In the WPF databinding model, the semantics are asynchronous: you set up the relationship, then the engine decides when to push or pull data; push or pull is mostly irrelevant (WPF seems to be broadcast-based, though). SuperGlue behaviors are pull-based (sinks pull data from sources), but invalidation is push-based (sources push invalidations to sinks, then sinks pull data back from sources to refresh themselves).

When you think of behaviors like events, you lose so much. A behavior as its own abstraction is like a dynamic type: you can prioritize behaviors and use them to guard other behaviors, declaratively creating intricate networks of behavior (SuperGlue). If a behavior becomes merely a stream of events, it loses its ability to act as a type, and basically you just have data over the wire.


Richard Clayton is a security expert at Cambridge University. He teaches the security course there, and in the first class he tells students that the goal of the course is to teach them why not to roll their own protocols, hash functions, modes of operation, or block ciphers. Instead, he wants to teach students to use peer-reviewed work.

I feel the same way about data binding. There is no peer-review process that checks the quality of the design, regardless of whether you solve this problem at the language level or the object level.

WPF seems to be broadcast based though

This depends on your notion of "broadcast based". Strictly speaking, there is no "currency manager" that broadcasts messages (as in WinForms data binding). In WinForms (and Java-based UI toolkits), the critical design flaw is access-path dependence, which was an intentional design flaw (in the sense that its designers misjudged that it would somehow make life easier); in short, pointer aliasing. Effectively, I'm saying it is silly to call something broadcast-based if it can't recognize that all aliased subscriptions can be coalesced into a canonical model.

Strictly speaking, the notion of a "currency manager" is a false guarantee, and what you really want is the data independence principle. Codd outlined the case for this principle really well almost 40 years ago, and his second biggest reason for having a relational model was communication between untrusted systems with accordingly incompatible type systems.

Also, thinking about trust determines a lot about how WPF databinding is set-up within a given application. If you don't have control over the model, then you are severely limited by how you can use the API. Like all data flow models, trust boundaries determine a lot about actual control flow.

Let's take a look at how Silverlight works: it is based on the flawed browser model proposed by Tim Berners-Lee. You cannot write a generic function that returns a generic, closed type over a WCF operation. Why? Because Silverlight is built upon an asynchronous communication model that is tightly coupled to callback handlers; you need to know the target of your operation and your operation's end result. This doesn't make any sense from a modularity standpoint, and it also disallows easier-to-understand client/server communication methods. The workaround is to send back XML, but now you are in the untyped data-set world, when you have no reason to expose the client to decoding this protocol. The type negotiation should occur at the protocol layer, not the client layer. Indirectly, this is a poorly designed trust system. More basically, it designs an API ahead of the actual protocol, and as such repeats the mistakes of earlier Microsoft data access technologies like COM Object Linking and Embedding (OLE).

I'd also say the version of the data binding model matters. Prior to 3.5 SP1 it was rather broken, there is still the $10 bug, and some of the fixes have resulted in a rather monolithic (and frankly confusing, brittle, hard-to-understand, hard-to-master) solution. There is also the fact that exceptions generated within custom value converters are not treated the same as framework built-ins!

What is simply infuriating is that the design of Silverlight 3's local messaging feature learns nothing from any of these mistakes.

This lack of clear design -- not understanding trust, not considering how to work with models, etc. -- is emblematic of poor engineering everywhere. Quoting Ch. 12 of Professional Java Development with the Spring Framework:

Data binding with Struts is done using ActionForms. For every form, you have to extend the concrete ActionForm class, which prevents you from reusing your domain model and working with it directly in your HTML forms. This requirement is widely considered to be one of the major design flaws in Struts. After the form data is transported to an ActionForm object, you have to extract it yourself and modify your domain model accordingly — an extra step that isn't required by WebWork and Spring. Dynamic forms (introduced in Struts 1.1) make the whole data binding process less painful, but you still have to extend a concrete class.

For sure, FRP discipline can

For sure, FRP discipline can help with some of these problems, and many people are moving in this direction. Actually, I think the first step is just making sure people know that data binding is a primitive form of FRP or constraint systems, so that they can borrow from these systems in the future.

Error propagation is a good example. In SuperGlue, we handled error conditions as possible values, propagated through the connection graph until they hit a sink that would know what to do with it. This works out pretty well, all connections are typed by data value but also have other APIs to query error conditions.

Another strategy, as described in the live programming paper, is to be as error tolerant as possible, never throw, just report and move on; e.g., integer divide by zero could evaluate a default int value (e.g., zero) as well as propagating the divide by zero error. This keeps the program going (live) while ensuring the error is propagated (so you know your zero is bogus if you want to know). Nothing annoys me more than Math.Abs(double.NaN) throwing in my physics engine!
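A toy encoding of that strategy (my own sketch; SuperGlue's actual connection API differs): each value carries any accumulated errors, so divide-by-zero yields a usable default while the error still propagates downstream to whichever sink cares:

```typescript
// A value tagged with the errors accumulated while computing it.
type Tagged<T> = { value: T; errors: string[] };

// Error-tolerant division: never throws; on divide-by-zero it produces a
// default value (0) and records the error so downstream nodes can see it.
const div = (x: Tagged<number>, y: Tagged<number>): Tagged<number> =>
  y.value === 0
    ? { value: 0, errors: [...x.errors, ...y.errors, "divide by zero"] }
    : { value: x.value / y.value, errors: [...x.errors, ...y.errors] };
```

The program stays live either way; a sink that cares can inspect `errors` and learn that its zero is bogus, while everything else just keeps computing.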

In general, exception handling is a bad idea when interacting with declarative abstractions like data binding.

exception handling and declarative programming

Mmm... exception handling is a bad idea when it is really the result of bad programming requiring exceptions to be thrown.

In WPF, the most common case for throwing an exception inside a value conversion is due to the monolithic design of value conversion itself.

#region IValueConverter implementation
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
    return value; // code that is good
}
public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
    throw new NotImplementedException();
}
#endregion

In my humble opinion, this doesn't make any sense at all. The whole reason the exception is being thrown is an Impostor Type: the type claims to support the IValueConverter contract, but it actually does not. It would make more sense to break this into two interfaces, and simply not implement the contract for ConvertBack.
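The proposed split can be sketched in TypeScript for brevity (the interface names here are hypothetical, not part of WPF): a type then only claims the contract it actually honors, and there is nothing left to throw NotImplementedException from:

```typescript
// Forward-only conversion contract (hypothetical name, not a WPF interface).
interface IForwardConverter<S, T> { convert(value: S): T; }
// Backward conversion is a separate, opt-in contract.
interface IBackwardConverter<S, T> { convertBack(value: T): S; }

// A one-way converter implements only the forward contract, honestly:
class UpperCaseConverter implements IForwardConverter<string, string> {
  convert(value: string): string { return value.toUpperCase(); }
}
```

The binding engine would then check (statically or at setup time) whether a converter supports the backward contract, instead of discovering at runtime, via an exception, that it does not.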

On top of that, another scenario where I see exception handling used unnecessarily is three-valued logic. Without support for three-valued logic, you can't say "When I have a known value, execute this declarative model." Thus, without three-valued logic, I don't think truly declarative data binding is possible; it requires too much indirection to achieve otherwise. There are alternative ways to model this, such as proxies that intercept variable assignments with a cut-point after assignment, sending an "I'm Done Assigning" message to the declarative model (which the declarative model recognizes as a notice that it can execute). Yet it all comes down to three-valued logic.
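A minimal sketch of the idea (this is Kleene's strong three-valued conjunction; the `when` helper is hypothetical, not a .NET API): unknown is contagious, and a declarative rule only fires once its guard is actually known to be true:

```typescript
// Three-valued truth: true, false, or not-yet-known.
type TVL = true | false | "unknown";

// Kleene conjunction: false dominates, otherwise unknown is contagious.
const and3 = (a: TVL, b: TVL): TVL =>
  a === false || b === false ? false :
  a === "unknown" || b === "unknown" ? "unknown" : true;

// A rule fires only when its guard is known-true; unknown means "wait",
// not "throw" -- no exception is needed for the not-yet-bound case.
function when(guard: TVL, action: () => void): void {
  if (guard === true) action();
}
```

Instead of throwing when a binding's source is not yet populated, the guard simply stays `"unknown"` and the model waits to be kickstarted.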

An "I'm Done" message seems

An "I'm Done" message seems out of character for a data-driven system: FRP, lenses, data flow et al are supposed to be getting you out of this mess by moving coordination into the runtime (as opposed to, say, a raw CSP, which glorifies it). I don't really understand this post at all, especially the appeal to a TVL -- any chance you can rephrase this?

As for error handling in data binding systems... I suspect most implementations arose out of convenience and lack of better ideas. They're horrible.

I don't see a huge

Regarding "I'm Done": I don't see a huge conceptual problem, but perhaps I lack the expertise to discern why it matters.

The "I'm Done" message is internal to the system and is not exposed to the client programmer. At the DSL level, we want to make a declarative language for describing how to react to system events. A common system event is when a previously unknown value now has a value. Conceptually, you can think of this in any number of ways: the first value that pops into an empty stream (FRP), the first assignment to a mutable variable treated as an event (OO), or a closed-world assumption where you simply treat unknown truth values as false. The client programmer should think in terms of "when this is true, I must also simultaneously make this true" -- equational reasoning.

Three-valued logic is the best way to model unknown or empty values in .NET, particularly WPF. The key here is that there is no normal function you can define for something that conceptually isn't true or false. You need a way to "kickstart" the system. In functional reactive programming, this kickstart is monads! You don't have to use TVL, either. You could model your entire program using circumscription-based logic and a closed-world assumption, where things that aren't known to be true are treated as false. However, I just find TVL easier because it is only used sparingly, as a way to kickstart components in an intricate composite relationship that requires I/O to fill out unknowns. In either system, closed world or TVL, you are still writing the same conceptual check: whether an action makes a condition true (and reacting, to kickstart a chain of events). This is different from the action itself knowing whether a condition becomes true, which is tightly coupled, and who wants to program in a GOTO-style (CSP) way anyway?

FRP, lenses, data flow et al are supposed to be getting you out of this mess by moving coordination into the runtime (as opposed to, say, a raw CSP, which glorifies it)

I don't see how FRP moves coordination into a runtime, and I'm not even sure what runtime you could be referring to. In FRP, results are made to vary over time (either continuously or as discrete events) by defining pure functions that operate on streams of values. Coordination is based upon when new values enter those streams! A new value in the stream causes the function to re-evaluate, so to speak. Forget the runtime! Coordination is entirely predicated on the pure function being forced to re-evaluate itself, because the function's definition guarantees that it always reflects the current value in the stream, and that value is based on other pure functions. The re-evaluation is mostly declarative in the sense that the programmer doesn't call the function; the function "just exists." Other pure functions may have functional dependencies on it, allowing interrelated, time-varying state to be specified in terms of dependencies.

I don't know enough about the lenses approach to comment. I have at best read the original paper some long time ago and mostly forgot it.

As for error handling in data binding systems... I suspect most implementations arose out of convenience and lack of better ideas. They're horrible.

With programming, the devil is always in the details. But Microsoft data binding arose out of the desire to have a drag-and-drop experience in Visual Studio. Back in the VB days, data binding was strictly for weenies who couldn't write code to populate a data grid, and that is really how the original notion of a "connected record set" came about. From this customer-problem-oriented thinking, you got solutions like the ADO Data Shaping Language, which could declaratively hook up an OLAP data grid for you.