Sample applications for programming languages

Recently there has been some interest in the role of formal proof and experimental/statistical evidence in supporting claims of usefulness in programming language papers. On the one hand, formal proofs often establish properties that are widely believed to be correlated with usefulness (such as type system soundness). Experimental evidence is hard to come by, sometimes unreasonably so for papers introducing something new, especially if the thing under study has a long learning curve. On the other hand, the end users of PLT efforts are humans (programmers). Humans are complex, and we cannot hope for a formal proof that something is actually useful.

This got me thinking about why the usefulness of things developed in papers often seems doubtful, despite formal proofs. Type system papers have lots of formal proofs that establish useful properties like soundness. FRP papers have proofs or derivations that an implementation implements a formal model. Papers introducing new programming constructs have proofs. Why is it that after reading such papers it's unclear whether the results are actually useful? Because for every type system that is sound and useful, there are hundreds that are sound and not useful. A proof that a (e.g. FRP) system implementation correctly implements a formal model gives confidence in that fact, but not in the usefulness of the formal model itself. That there are proofs about a programming language construct does not at all imply that the construct is useful.

What would, along with the proofs, give more confidence in the usefulness without requiring as much effort as statistical experiments with users? One possibility is a collection of sample applications. Papers often have a small five-line toy motivating example but rarely go beyond that. For example, the Coherence work has a small example of tasks with a start, length, and end. While a system to execute callbacks in a deterministic way could be useful, such an example shows neither the importance of the problem nor the usefulness of the solution beyond reasonable doubt. This was also mentioned on the author's blog. If there were a library of sample applications to go along with the paper, that would. Note that I'm not saying that this is a bad paper at all, just that the level of examples is the same as in most other papers.

My question is threefold:

  1. Do you think it is useful to have a collection of sample applications of the ideas developed in the paper, perhaps in an appendix or on the paper's website?
  2. Which existing papers already have this?
  3. What are good example applications for each domain? For example, an email client or a todo list application for UI papers such as Coherence or FRP, and perhaps a JavaScript parser for papers investigating new parsing technology, such as Yacc is dead and OMeta.

I think it is, for the most

I think it is, for the most part, an essential part of the scientific research process. That many researchers don't always even implement their languages (I won't name names) is disappointing and makes their results unreliable for me. This is important even for theoretical work -- e.g., if a type system rejects programs, are the rejected programs important? Does a new proof technique work on proofs we care about, or just the isolated example for a paper? I strive for this in most of my papers.

The lack of experiments often seems due to publish-or-perish -- a decent implementation and experiments are time-consuming -- but in the defense of some, experiments may be irrelevant when one views languages as a purely mathematical discipline. Math, not science nor engineering.

Finally, picking examples is important but hard. What is described in a paper may be purely pedagogical. Benchmark examples may (and likely should) be completely different from explanatory ones. The choice seems too domain-dependent. E.g., consider FRP, which has gone from 3D environments to robots to GUIs and Ajax to networking/servers. We found drag and drop an interesting specification problem: it's a good test for other UI systems (and maybe general memory behavior), but is it as important for another robotics DSL?

Btw, I'd argue the FRP community is/was one of the better ones in terms of experimental analysis. Your comment on appendices hints at a potential reason why many don't bother: it's hard to include this stuff in a writeup (as opposed to a website).

Yes, I agree that

Yes, I agree that implementing the language should be an even more basic requirement.

You're right that the FRP community has done well in this regard. The early FRP papers had a lot of animation/games examples. For FrTime there is even a paper about interfacing with a GUI toolkit. The Flapjax paper is also excellent; there are a lot of examples on the website. What is lacking is GUI examples for purely functional FRP, and a method for writing composable GUI elements in FRP in general. For traditional OO GUI frameworks, methods for doing this have been developed, like MVC, MVP and MVVM. Perhaps it is obvious how to best define a Textbox GUI element, a List GUI element and a Button GUI element, build a Todolist element from those, and put multiple Todolists together to make a todolist application, but I'm not seeing it. Or maybe this has been described in the literature? In any case, building a non-toy example would show how to do this, thus substantiating the claim that FRP lets one do GUIs in a more composable way.

Finally, picking examples is important but hard. What is described in a paper may be purely pedagogical. Benchmark examples may (and likely should) be completely different from explanatory ones. The choice seems too domain-dependent. E.g., consider FRP, which has gone from 3D environments to robots to GUIs and Ajax to networking/servers. We found drag and drop an interesting specification problem: it's a good test for other UI systems (and maybe general memory behavior), but is it as important for another robotics DSL?

Perhaps benchmark applications are indeed not a good idea, and it's better to leave this choice to the authors of each paper; examples are still very valuable, whether standardized or not.

For FRP GUI widget creation,

For FRP GUI widget creation, the idea is to define static layout functionally and FRP handles the changes.

I actually view this as wrong. E.g., in Flapjax, we typically use CSS or JavaScript widgets (that happen to be exposed functionally). FRP is great for gluing functionally abstracted widgets.

These widgets are constraint systems, so functional specification is awkward. The rest of the team may disagree :) For the last year or two, I've been creating an attribute grammar system for specifying static GUI widget semantics. FRP might still be used for interacting with the widgets, but specifying what goes where is easier with constraints. The compiler generates the functional abstractions, which aren't obvious manually. As a plus, I have been able to verify and parallelize the widgets :)

For FRP GUI widget creation,

For FRP GUI widget creation, the idea is to define static layout functionally and FRP handles the changes.

That is the theory that I heard many times, but I can't figure out how to actually do this elegantly. The problem is that widgets are stateful. Suppose you're creating a form to edit a Person in a CRM application. You might think that you can do something like this:

person = ...
firstnameWidget = textbox person.firstname
lastnameWidget = textbox person.lastname
personWidget = vertical [horizontal ["first name: ", firstnameWidget],
                         horizontal ["last name: ", lastnameWidget]]

That is, you define the UI as a function of the data, and you define the person object in terms of the widget, creating mutually recursive dataflow:

person = Person(initialFirstname `then` firstnameWidget.text,
                initialLastname `then` lastnameWidget.text)

This is all very elegant. The problem is that this doesn't work, because a textbox is a stateful thing. The cursor position, the text selection and the scrolling position are stateful functions of user input, not just functions of person.firstname. Of course you can fix this inside FRP by creating the right data flows, but in my experience you end up with exactly what you'd do in an OOP GUI toolkit, only now you're simulating the state by explicitly passing it around. So you've traded relatively easy-to-express callbacks for complicated data flow networks that simulate the same thing.
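
Concretely, here is a minimal sketch of what those "right data flows" amount to for a textbox (hypothetical event and state types, not any particular FRP library): the widget-local state becomes a fold over the input event history.

data TextboxEvent = Insert Char | MoveCursor Int | Select Int Int

data TextboxState = TextboxState
  { text      :: String
  , cursor    :: Int
  , selection :: Maybe (Int, Int) }

step :: TextboxState -> TextboxEvent -> TextboxState
step s (Insert c)     = s { text   = take (cursor s) (text s) ++ [c] ++ drop (cursor s) (text s)
                          , cursor = cursor s + 1 }
step s (MoveCursor i) = s { cursor = i }
step s (Select a b)   = s { selection = Just (a, b) }

-- the textbox "behavior" is a left fold of widget-local state over user input
textboxState :: [TextboxEvent] -> TextboxState
textboxState = foldl step (TextboxState "" 0 Nothing)

That fold is the OO widget's mutable state in disguise, threaded explicitly.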

You can work around this by using existing stateful widgets as you suggest, but this just delays the problem one level, because once you start doing real-world applications you want to create your own composite widgets that have state. For example, you want to create a Person widget for editing persons. You can of course express those in the traditional OO way with callbacks and state, but this hardly shows that FRP makes for compositional GUI construction.

These widgets are constraint systems, so functional specification is awkward. The rest of the team may disagree :) For the last year or two, I've been creating an attribute grammar system for specifying static GUI widget semantics. FRP might still be used for interacting with the widgets, but specifying what goes where is easier with constraints. The compiler generates the functional abstractions, which aren't obvious manually. As a plus, I have been able to verify and parallelize the widgets :)

I'm not sure if I understand what you're saying. Are you talking about layout management, i.e. specifying how the widgets are positioned on the screen? Could you give an example of a problem that you solve, and how you solve it?

purely continuous FRP doesn't work

There is a reason Elliott included event streams along with hold in his initial functional animation paper. I haven't read a meaningful FRP proposal that didn't include some form of imperative state (e.g., through events and hold).

Edit: I should mention Kaleidoscope here, which was a whole research project in the early-mid 90s about nicely joining imperative programming with declarative (and imperative!) constraints.

Right, I even used it in my

Right, I even used it in my example, although for some reason unknown to me I remembered it as `then`. If you would show how to use that to do GUIs in a more composable and convenient way than with OO GUI frameworks I would be very grateful. Sure, you can transliterate callbacks to FRP by making every variable that each callback modifies a merge of event streams, but then what have you gained? I wouldn't call `hold` imperative state, just like `scan` or `fold` on lists isn't imperative state.

I believe the correct

I believe the correct terminology you want is "step" but I would have to go back and read Elliott's paper to get my terminology right.

I'm not the right guy to ask. I embrace state and allow it to be implicit (though with restricted manipulation of it). My FRP is less FRP and more RP :)

I view the tasks of creating

I view the tasks of creating widgets, creating a user interface, and creating a widget layout framework as, at this point, best addressed through separate DSLs. In a browser, for example, we have JavaScript frameworks, AJAX applications, and CSS. I can tell the same story for Java and Flash. Maybe one day we can generalize the DSLs into one uber one, but that's not practical with today's techniques.

FRP is a nice fit for the AJAX app part. Instead of inverted callback code (I almost completely disagree with your repeated characterizations of side-effecting callback code as 'obvious' -- e.g., this has been analyzed as one of the primary causes of bugs and complexity in Adobe's software), we have a controlled way of packaging state over time. Someone reading your code can directly reason about value flow, rather than manually performing a whole-program points-to analysis and reconstructing an approximation of what else is affecting any field and when. Almost by definition, the desired state gets passed as values. Instead of uncontrolled assignments, you now have to explicitly merge.

Implementing the individual widget with FRP has more limited value: widgets will be used in an abstracted way, have additional performance considerations, and are generally otherwise already encapsulated.

Are you talking about layout management, i.e. specifying how the widgets are positioned on the screen? Could you give an example of a problem that you solve, and how you solve it?

Consider a command like paint(HBOX(BOX, VBOX(BOX, BOX), PARAGRAPH)). UI elements compose -- we can build a scene graph. We also have functional abstractions for working over them: change any input field, and out comes a different bitmap. We can even be pedantic and specify those changes with FRP if you don't agree with that :) Efficient layout systems will evaluate the positions as a series of tree traversals, e.g., a bottom-up/top-down traversal sequence, or one recursive visit for this language of horizontal and vertical boxes.

However, now, imagine trying to throw in a "shrink-to-fit" box and allowing it to compose. For some interpretations of "shrink-to-fit" (e.g., CSS), you'll want an extra traversal to compute preferred minimum and maximum widths. Some features might even be expressed as fixedpoint formulas. An attribute grammar is a nice formalism for specifying this stuff: you define a widget as local constraints (the width of a box wrt the widths of its children), and the compiler figures out the tree traversals. The compiler can determine if the traversals can be parallelized, and even verify things like there is exactly one interpretation for any input tree.
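
As a toy illustration of those local rules (my own sketch here, not the actual system), take widths only: one bottom-up pass computes minimum widths, and one top-down pass distributes the available space.

data Box = Leaf Int            -- intrinsic minimum width
         | HBox [Box]
         | VBox [Box]

-- bottom-up pass
minWidth :: Box -> Int
minWidth (Leaf w)  = w
minWidth (HBox bs) = sum (map minWidth bs)
minWidth (VBox bs) = maximum (0 : map minWidth bs)

data Laid = LLeaf Int | LHBox [Laid] | LVBox [Laid]

-- top-down pass: distribute the available width using the bottom-up results
assign :: Int -> Box -> Laid
assign avail (Leaf w)  = LLeaf (max w avail)
assign avail (HBox bs) =
  let extra = max 0 (avail - sum (map minWidth bs)) `div` max 1 (length bs)
  in LHBox [assign (minWidth b + extra) b | b <- bs]
assign avail (VBox bs) = LVBox [assign avail b | b <- bs]

The point of the attribute grammar is that you write only the local equations; which passes exist, and in what order, is the scheduler's job.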

Expressing this stuff functionally is tricky -- you would need to know the traversal sequence a priori, while here, the scheduler decides. If you don't care about the traversals, functional (demand-driven / lazy) programming isn't as bad, but still has some wrinkles (e.g., some of the verification conditions).

So you view FRP as a kind of

So you view FRP as a kind of glorified event filtering, where the end work that changes the UI is still done with imperative callbacks? Like in the Flapjax examples, where instead of defining the DOM as a time varying value, the DOM is manipulated imperatively with callbacks (e.g. http://www.flapjax-lang.org/try/index.html?edit=listdrag.html).

Someone reading your code can directly reason about value flow, rather than manually performing a whole-program points-to analysis and reconstructing an approximation of what else is affecting any field and when.

Right, you have to do this to see the causes of a change if you use callbacks. Conversely, with FRP you have to do the same points-to analysis to see the effects of an event. Instead of explicit merge (fan-in) you have explicit fan-out. The latter actually seems more valuable to me. Which question would you more often ask: by what buttons is this value affected, or: what happens when I click this button? FRP allows you to easily answer the first question, callbacks allow you to easily answer the second. I think that both have their use cases, and it would be valuable to have a system that allows both to be used together.

Implementing the individual widget with FRP has more limited value: widgets will be used in an abstracted way, have additional performance considerations, and are generally otherwise already encapsulated.

Wouldn't you want to expose the interface to the widgets in a FRP way, instead of in a traditional imperative OO way (i.e. as if they were implemented with FRP)? If not, then as I said you delay the problem of how to implement widgets within FRP one level, because for a real-world application you want to implement composite widgets, like a Person object editor (just like a textbox is an editor for a string object). Or are you saying that the best solution is to implement those with imperative OO as well? That is a valid view of FRP, turning it into glorified event filtering and selling out to imperative OO for the difficult problems. It is not a success of FRP as originally envisioned, however, so I'm still hoping for a better solution.

However, now, imagine trying to throw in a "shrink-to-fit" box and allowing it to compose. For some interpretations of "shrink-to-fit" (e.g., CSS), you'll want an extra traversal to compute preferred minimum and maximum widths. Some features might even be expressed as fixedpoint formulas. An attribute grammar is a nice formalism for specifying this stuff: you define a widget as local constraints (the width of a box wrt the widths of its children), and the compiler figures out the tree traversals. The compiler can determine if the traversals can be parallelized, and even verify things like there is exactly one interpretation for any input tree.

I see what you mean and I agree, but isn't how to implement a layout algorithm largely orthogonal to how you expose widgets to the FRP part of the application?

Conversely with FRP you have

Conversely, with FRP you have to do the same points-to analysis to see the effects of an event

Yes, you'd have to do a taint/impact analysis to see where the value gets used. However, I argue you don't have to worry about it (as much) in this formulation; any code consuming the value must explicitly compose it over time. E.g., consider concurrent animations: with effects, you might have a weird flicker when two occur simultaneously, while in the value-based approach, you can't implicitly compose. You'll have to decide whether the composition is to sequence, add coordinates, prioritize, etc.
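
A tiny sketch of the explicit-composition point (hypothetical animations, with behaviors modeled as plain functions of time):

type Time = Double
type Pos  = (Double, Double)

slideRight, bobUpDown :: Time -> Pos
slideRight t = (t, 0)
bobUpDown  t = (0, sin t)

-- the composition policy is explicit: here we add coordinates, but we
-- could just as well prioritize one animation or sequence them
combined :: Time -> Pos
combined t = (x1 + x2, y1 + y2)
  where (x1, y1) = slideRight t
        (x2, y2) = bobUpDown  t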

Wouldn't you want to expose the interface to the widgets in a FRP way, instead of in a traditional imperative OO way

I agree, but don't view it as that big of a deal. Shriram's paper about lifting Racket's GUI system showed these aren't far apart using macros. We did fine with just lambdas for CSS. There'll be little differences, but modern languages, at least dynamic ones, don't make this too hard. I also agree that there are still some rough edges between the vision of FRP and what we can do today in practice -- and pursuant to this thread, a lot of that becomes clear when you actually write applications :)

I see what you mean and I agree, but isn't how to implement a layout algorithm largely orthogonal to how you expose widgets to the FRP part of the application?

A GUI widget must somehow interact with the layout of other widgets, and that isn't just monadically passing a canvas. Abstracting out the layout system from the widget system is actually already a really big step that I believe is underappreciated. How to do that is tricky, including how the widget system should interface with it. E.g., in CSS, people often go outside of the layout system with new layout widgets in JavaScript, which doesn't really work well with the rest of it -- something like jQuery (with its extensions) is like reimplementing a browser inside your browser. Instead of doing that, how do we build the browser to begin with? What happens when part of the layout is in scripts/FRP and the rest in a restricted constraint language? Once my current deadline is done, I have some fun ideas I want to play with there :)

A GUI widget must somehow

A GUI widget must somehow interact with the layout of other widgets, and that isn't just monadically passing a canvas. Abstracting out the layout system from the widget system is actually already a really big step that I believe is underappreciated.

I always thought it would be best to go with something like a physics engine here. Widgets are physical objects that exist in a shared physical space, whose positions are constrained by forces such as springs, magnets, collisions, pressure, and so on. Not only is this kind of layout usable, but it has a nice declarative constraint-based solution! Edit: the UIs would be more on the naturalistic side, however.

ZUI vs. Physics

My own interest is zoomable user interfaces, and ad-hoc 'views' that support application mashups and data fusion. An important feature of a ZUI is that the costs (CPU, memory, network, etc.) are proportional to the view (screen real-estate).

It seems to me that FRP could be an effective basis for ZUI, though there is the niggling detail about releasing signal sources when we aren't viewing them (e.g. turn off a video camera when we aren't looking at it). RDP eventually grew out of trying to solve this requirement.

While I find the physics approaches fascinating, it seems to me that they are not suitable for ZUI. The simulation burden is typically O(N^3) or worse with the total number of elements and absolute size of the environment, even when that environment is not 'on-screen'. One could implement a hackish model where one only animates the objects in view, but this would often seem unnatural and so would need a lot of tuning by developers.

The two approaches can be combined by providing a ZUI view of a physics simulation of objects with which we can associate widgets. There are several advantages to this separation: (a) we can present multiple views or perspectives on the same physics model (e.g. different locations in or angles on the model), (b) multiple views of the simulation can also support CSCW and migration of applications between desktops, (c) developers have more precise control over whether and how physics react to an observer, (d) keeping the physics simulation 'external' integrates seamlessly with placing widgets on a map or in a video, e.g. for command and control or augmented reality.

It seems to me the tight coupling of widgets to physics is a mistake, but it wouldn't hurt to make this coupling very easy when we want it. For tablets and iPhones, making a UI that 'floats' like a rubber ducky on water can, at least, be very pretty.

The nice thing about the

The nice thing about the physics approach is that constraints need not be solved in "one step"; rather, they are solved iteratively via time-based integration. For one, you get animation "for free" as the user gets to view intermediate results. But mainly, you can solve harder problems (over time) that would otherwise be too expensive to solve in one step using FRP.

The danger of course is over-constraining the problem in a way that causes the simulation to become unstable, as well as the power cost of continuous simulation (although we can rest objects that are not moving and have no interesting forces applied to them).

I see widgets as physical objects, so subjecting them to physical simulation isn't a big deal. Best to separate the model from the view, but not the view from multiple view-like details.

Iterative solutions

FRP is quite able to model physics - bouncing balls and the like. I agree that we do gain some benefits from physical models, but any iterative model would do - and some might have nicer properties than physics. One might consider a multi-agent model, for example, where we treat widgets as biological entities growing around one another, or social entities vying or cooperating for a carefully partitioned resource: human attention.

Anyhow, it really is quite difficult to arrange physical things to accomplish a targeted result, and the whole issue of initial setup becomes its own constraint problem! Same for social things.

I do understand your treatment of widgets as physical objects, like physical dongles, toggles, and buttons. But I think that's pretty far removed from the 'purpose', and does not necessarily make for a good (predictable, efficient, zoomable, accessible, extensible, etc.) GUI.

The goal is not to manipulate widgets, but rather the consequence of manipulating widgets - i.e. shaping and programming the environment: sensors, actuators, data fusion, command and control, social interaction. A physical model for widgets is not a view-like detail. (Really, it's a model-like detail.)

FRP is only capable of

FRP is only capable of tweening, so you could only get a ball to bounce via a sine-based tween function. I would consider any iterative model to be physics-like; you are just defining different kinds of forces. We did a prototype for Surface that was based on just that: attention is nicely modeled with Turing heat diffusion.

We can agree to disagree on the rest. To me, physics is a presentation detail, I don't think further separation of the presentation layer is very useful (and in fact, can be harmful). Of course, physics could be used in the data layer, but that is a very separate thing.

FRP is stateful

FRP is stateful, at least as Conal Elliott and Paul Hudak originally defined it. That includes continuous state. Integral is a first-class FRP signal transform. Example uses of integral are available in the seminal FRP paper - Fran - and in Hudak's work on functional reactive robotics (Frob, which eventually became Yampa). Causal accumulation of 'events' is not the only form of state available.

So, no, you are wrong about FRP's capabilities - at least if you accept the definition from the people who coined the term.

Easy access to implicit state is actually among FRP's weaknesses, IMO - the reason it so easily allows subtle space-time leaks. It's something I've strongly rejected from RDP. (I don't reject state, I just keep it explicit and external.)

Physics is certainly an interesting approach for a presentation layer, but the notions that it's a productive, predictable, scalable, or effective approach are a lot more dubious. Either way, I'm not really rejecting it, just saying that they shouldn't be tightly coupled - i.e. use an intermediate, explicit data layer to model the widgets.

Whether stateful or not, can

Whether stateful or not, can I express a physical constraint in FRP, say a spring between two point masses using Hooke's law? I'm going to say "no" but please let me know if I'm missing something.

FRP bindings are generally uni-directional, there is no feedback from a "previous" value, and they are also unique: a receiving expression (function argument or otherwise) can only be bound to one providing expression at a time. And let's not even begin to think about collision resolution... So I would claim that FRP (as is) cannot do physics, and it's not just a matter of having or lacking state.

Modeling physics

You can model (simulate) two point masses with a spring between them in FRP. Perhaps you should read an old presentation from Hudak, which discusses FRP modeling of differential equations for robotics.
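
For instance, here is a self-contained sketch, with plain Euler stepping over sampled signals standing in for a real FRP library's integral: velocity integrates force and position integrates velocity, as mutually recursive definitions.

dt, k, restLen :: Double
dt = 0.01
k = 10
restLen = 1

-- one step over (position1, velocity1, position2, velocity2), unit masses
step :: (Double, Double, Double, Double) -> (Double, Double, Double, Double)
step (x1, v1, x2, v2) =
  let f = k * ((x2 - x1) - restLen)   -- Hooke's law
  in ( x1 + v1 * dt, v1 + f * dt
     , x2 + v2 * dt, v2 - f * dt )

-- the signals, as infinite streams of samples
trajectory :: [(Double, Double, Double, Double)]
trajectory = iterate step (0, 0, 2, 0)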

As to whether you can 'express a constraint', I imagine that would depend on how you're modeling the point masses. FRP is not the same as constraint programming, and is not very good at 'pushing' a constraint.

FRP generally does allow feedback from a 'previous' value. This is achieved by delaying a signal, e.g. by a given delta-time. You can generally duplicate a signal, delay one half, then combine the delayed element with the non-delayed element, and this essentially serves as feedback from the past. I often see discussions about ensuring FRP expressions are 'causal', meaning that they strictly accept feedback from 'previous' values, rather than future values.

Beyond that, fixpoint loops are also possible to express in the more general FRP models. A fixpoint loop with delay is a basis for state. Integrals are, then, the same thing with an infinitesimal delay.

(I'm not saying this is a good thing. Access to the past makes the whole startup issue somewhat funky since we must model signals back to negative infinity. And internal state causes problems for resilience, persistence, and upgrade. I chose the opposite direction here with RDP: RDP is effectively anti-causal - you can anticipate the future by a fixed finite amount, but have no access to the past without an external state model.)

I think you've been severely misinformed about FRP, or perhaps have allowed your preferences to bias your definitions, and you should review the original papers on Fran and similar work, because I think you're unlikely to accept my word for it.

(I was once misinformed, too. My introduction to FRP was through PvR's CTM, but this gave me the wrong impression. When I finally sat down and learned real FRP, I was disappointed with it - it was nothing like the 'distributed function' I had been envisioning. So I invented RDP.)

So I am quite wrong,

So I am quite wrong, but...

You can feed the previous value into a FRP expression, but can you preserve modularity? What if I have a point mass connected to two other point masses via two springs? How do I represent those two springs in a modular way? Most physics engines will just accumulate all the impulses per time step; FRP requires that there is one equation in one place somewhere. Is that an accurate characterization?

FRP is not 'openly' modular.

FRP is not 'openly' modular. You'd need to model one big environment containing all the masses and springs, and you will ultimately have one big fixpoint equation that captures the evolving state of the whole system. However, I don't believe this is less modular than a shared physics engine. Either way, you still have a common element responsible for detecting collisions and accumulating impulses.

If you want the external ability to add new masses and springs and subsystems thereof, you will need the foresight to include a control signal with events of the form 'add a mass here at time T' and 'add a spring there at time T+1s', possibly as a function of user input. This way, the fixpoint has already captured all these additions over time. (This is also how you'd model, for example, client connections to a web server modeled in FRP.)

I would say that FRP requires foresight or discipline to maintain modularity and extensibility. It is all too easy to close a fixpoint loop too early. Modularity is not a strength of FRP (nor of most pure functional programming).
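
To sketch the control-signal idea (stand-in types, not a real FRP API): the world is one fold over a merged stream of simulation ticks and 'add' commands, so later additions are already captured inside the single fixpoint.

data Cmd = Tick Double       -- advance the simulation by dt
         | AddMass Double    -- 'add a mass here'
         | AddSpring Int Int -- 'add a spring there'

data World = World { masses  :: [(Double, Double)]  -- (position, velocity)
                   , springs :: [(Int, Int)] }

stepWorld :: World -> Cmd -> World
stepWorld w (Tick dt)       = physicsStep dt w
stepWorld w (AddMass x)     = w { masses  = masses w ++ [(x, 0)] }
stepWorld w (AddSpring i j) = w { springs = springs w ++ [(i, j)] }

physicsStep :: Double -> World -> World
physicsStep _ w = w   -- integration elided

worldHistory :: [Cmd] -> [World]
worldHistory = scanl stepWorld (World [] [])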

If you want the external

If you want the external ability to add new masses and springs and subsystems thereof, you will need the foresight to include a control signal with events of the form 'add a mass here at time T' and 'add a spring there at time T+1s', possibly as a function of user input. This way, the fixpoint has already captured all these additions over time. (This is also how you'd model, for example, client connections to a web server modeled in FRP.)

This requirement sounds like it is potentially exponential with respect to combination possibilities and code size in any open-ended physical simulation (say, one that supports user-generated content). For example, think about expressing a game like LittleBigPlanet.

A physics engine is very modular in that forces can be applied (or force-generating constructs created) at separate locations in source code; the simulation will automatically adapt to the new status quo.

My focus on modularity is perhaps why I've never really grokked FRP: I always thought of signals as being very concise reactive state abstractions, this being their main benefit to me.

Modularity and FRP

Yeah. Open modularity, extensibility, and external service composition are among my big issues with FRP. If you want both physics (state) and extensibility, FRP can work but it becomes awkward. You'd probably need to do FRP in a physics monad or something to make this work well, and that might not seem much like FRP anymore.

My RDP avoids these modularity issues, but does so by constraining some of the constructors FRP allows. I believe RDP is a model much closer to what you've sought and built than FRP.

That's not really true,

That's not really true, because you can define the velocity of each object as the integral of the sum of all forces from other objects in the simulation. If the list of objects is dynamically changing, FRP will keep track of the changes for you.
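
As a sketch, modeling behaviors as plain functions of time (not how a real implementation works, but it shows the shape):

type Time = Double
newtype Behavior a = Behavior (Time -> a)

-- the dynamic collection is itself a behavior; summing the forces it
-- yields at each instant is just lifting over it
netForce :: Behavior [Double] -> Behavior Double
netForce (Behavior forces) = Behavior (\t -> sum (forces t))

-- velocity as the integral of net force (crude numerical quadrature)
velocity :: Behavior Double -> Behavior Double
velocity (Behavior f) = Behavior (\t -> sum [f s * dt | s <- [0, dt .. t]])
  where dt = 0.01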

Is this the best way to write a physics engine? Well, probably not.

The Rendering Equation

In computer graphics, there is a theoretical holy grail known as "the rendering equation". It doesn't really exist except in mathematicians' heads when modeling scene rendering problems.

The Rendering Equation is the system of equations for perfect scene rendering.

For example, many Ph.D.s in computer graphics are given out for figuring out "hacks" that improve image quality using naive rendering techniques that don't use global illumination sources. These hacks disintegrate for any model that needs robust, global lighting... such as a curved surface getting hit with multiple light sources, especially when the object is close to the viewer. To compute the correct lighting simply isn't possible, because the hacks require fixed encodings/assumptions about the scene and what the objects are.

My major mental blocking point when hearing about FRP was precisely this sort of problem. Even basic questions like, "How do you draw a smooth line?" are actually simplifications of the Rendering Equation question: "How do you draw a 'perfect' scene?" For smooth lines, Newman and Sproull wrote textbooks in the 1970s on Computer Graphics and detailed various techniques where the right technique depended on various hardware factors -- they explained in detail down to the hardware level how to resolve issues like glitch freedom. I wonder if the problems we try to model today are simply too hard to be as rigorous with? Otherwise, why don't I see it being done?

The multi-light curved surface problem

such as a curved surface getting hit with multiple light sources, especially when the object is close to the viewer. To compute the correct lighting simply isn't possible, because the hacks require fixed encodings/assumptions about the scene and what the objects are.

How does multiple light sources make the problem any harder? Are you worried about photon / photon collisions not being modeled correctly?

And as far as I know, global illumination is a solved problem. Ray tracing, for example, can get you the right answer to within machine precision (as far as I know). Getting the right answer in a reasonable amount of time makes the problem harder. And of course it's rendering an idealized mathematical surface, not a real world surface. Is that what you're getting at?

Or maybe you're saying that a closed form solution isn't possible?

Reasonable time is the challenge

Philipp Slusallek is famous for being one of the computer graphics researchers who, when given such hard problems, can figure out ways to solve them for specific companies on specific projects. I am not aware of anybody who has made ray tracing a commodity that we can ship to millions of consumers. That is one of Philipp's goals, of course. See the Wikipedia article on ray-tracing hardware.

How far off are we? I am not an expert, so I don't know. I am just a computer graphics groupie.

on the cusp

See OnLive; see Intel's ray-traced id games. It will be done 'in the cloud'.

Yeah I've actually been

Yeah, I've actually been curious about your work here -- we had been whiteboarding physics-based layout and animation for our parallel browser ~4 years ago before settling on more traditional constraints.

I'd like to see a system with both physical and logical constraints. Do you view physical constraints a big step forward for static layout, or as primarily an animation thing?

Physics-based layout is

Physics-based layout is probably about:

  1. Animation for free, I've already covered this point.
  2. Natural transitions: not just animated transitions, but animation that appears intuitive to the user. Designers spend a lot of time on intuitive animation that informs the user about what is going on and is not just blingy; with physics the right kind of animation should just fall out.
  3. Usability on the programming side: it is potentially easier for programmers/designers to reason about layout using metaphors that are more physical (e.g., springs).
  4. Usability and flexibility on the user side. For similar reasons, it can be easier for users to manipulate and customize layouts when the rules are physical.

There is not much point in applying real physics to CSS, or even to general icon-based UIs; we are satisfied with fairly fake physics like scroll whiplash. But for a NUI, I would argue that physics is essential.

For 3, 4, I think I've seen

For 3 and 4, I think I've seen this implicitly accepted in another form: various objective functions are used to minimize change when switching between constraint solutions (e.g., for edits), so Newtonian physics is a pretty good basis.

Yes, you'd have to do a

Yes, you'd have to do a taint/impact analysis to see where the value gets used. However, I argue you don't have to worry about it (as much) in this formulation; any code consuming the value must explicitly compose it over time. E.g., consider concurrent animations: with effects, you might have a weird flicker when two occur simultaneously, while in the value-based approach, you can't implicitly compose. You'll have to decide whether the composition is to sequence, add coordinates, prioritize, etc.

I'm guessing you mean that callbacks can glitch? So if two callbacks are called in the same time instant, then the result depends on the order in which the callbacks were called. This is indeed a problem with callbacks, and one that I think can be solved in a combined FRP/callback system by only allowing callbacks to send values to things that they own uniquely. If you want multiple callbacks to affect the same thing then you have to introduce an explicit merge.

I agree, but don't view it as that big of a deal. Shriram's paper about lifting Racket's GUI system showed these aren't far apart using macros. We did fine with just lambdas for CSS. There'll be little differences, but modern languages, at least dynamic ones, don't make this too hard. I also agree that there are still some rough edges between the vision of FRP and what we can do today in practice -- and pursuant to this thread, a lot of that becomes clear when you actually write applications :)

Shriram's paper doesn't describe a declarative FRP formulation of an imperative OO toolkit. It describes something in between: the properties of widgets are exposed as FRP, but the composition of widgets is not. In their system you create a widget like (new thewidgetclass% [parent theparent] ...args...). This call imperatively mutates the parent to contain a new widget. So you don't define a window's contents as a time-varying value of widgets; instead you mutate the contents of the window imperatively.

An example problem that I think exhibits what I'm getting at is the following. The first app: Create a counter application that consists of an "increment" button, a "decrement" button and a label displaying the current count. The second app: Now create an application that has an "add counter" button, a "remove counter" button, a list of counters, and a label displaying the sum of all the counters. With an OO pattern like MVVM this is very simple, and you can encapsulate a counter widget in its own class and compose multiple of those to do the second application. With data binding you can even specify the sum of all the counters declaratively as sum(map(counters, c => c.value)) (of course, as usual, with a ton of ceremony in languages like C#). I think that if you are working with a toolkit with a pure FRP interface, you will find that the abstractions FRP offers are not sufficient, and instead you want to build additional non-obvious abstractions on top of FRP.

A GUI widget must somehow interact with the layout of other widgets, and that isn't just monadically passing a canvas. Abstracting out the layout system from the widget system is actually already a really big step that I believe is underappreciated. How to do that is tricky, including how the widget system should interface with it. E.g., in CSS, people often go outside of the layout system with new layout widgets in JavaScript, which doesn't really work well with the rest of it -- something like jQuery (with its extensions) is like reimplementing a browser inside your browser. Instead of doing that, how do we build the browser to begin with? What happens when part of the layout is in scripts/FRP and the rest in a restricted constraint language? Once my current deadline is done, I have some fun ideas I want to play with there :)

Yes. If the output of your FRP application is a time-varying tree like you described earlier, then you can write a renderer as a normal function and map it over the stream of trees. That's what I meant by largely orthogonal to FRP. You do want the tree language to be expressive enough, so that if need be you can write your own layout manager, for example by allowing a box in which widgets are positioned with absolute (x,y) pixel coordinates. This may not give you enough expressiveness, because often you want widgets lower in the tree to affect widgets higher in the tree and vice versa. In traditional layout managers this is done by doing multiple passes over the layout. For example, in HTML/CSS I think the min width is first computed bottom up and then the width top down; then the layout manager uses that and a font library to determine the height of widgets with text bottom up. So you'd want a box with a custom layout manager to be able to have a say in all the passes the layout manager makes. Perhaps you could store closures in such a use-your-own-custom-layout-manager box, one closure for each pass, so that you can fully customize the layout behavior within the constraints of the passes the layout manager does. For top-down passes the value calculated by the parent is passed in to the appropriate closure; for bottom-up passes the appropriate closure is called and will then use the values of the children.
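
For example (a hypothetical record type, just to pin the idea down), such a box could carry one closure per pass:

data CustomBox = CustomBox
  { minWidthPass :: [Int] -> Int          -- bottom up: children's min widths -> own min width
  , widthPass    :: Int -> [Int] -> [Int] -- top down: own width and children's mins -> children's widths
  , heightPass   :: [Int] -> Int }        -- bottom up: children's heights -> own height

-- a horizontal box: each child gets its min width plus an equal share of the leftover space
hbox :: CustomBox
hbox = CustomBox
  { minWidthPass = sum
  , widthPass    = \w mins ->
      let extra = max 0 (w - sum mins) `div` max 1 (length mins)
      in map (+ extra) mins
  , heightPass   = \hs -> maximum (0 : hs) }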

An alternative approach is to define penalty functions for each box. The penalty is a real-valued function of the position and of the width and height of the box. The layout manager then tries to find the globally optimal solution by numerical optimization. If you restrict the penalty functions to be convex, then you can do this very quickly with convex optimization. I think something like this has been done before, but instead of allowing general convex functions they used linear programming.
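
Here is a toy version of the penalty idea, minimized with plain gradient descent rather than a real convex or LP solver: two boxes on a line, each with a quadratic penalty pulling it toward a target, plus a convex penalty for overlap.

total :: (Double, Double) -> Double
total (x1, x2) = (x1 - 0) ** 2 + (x2 - 30) ** 2 + overlap
  where overlap = max 0 (x1 + 50 - x2) ** 2   -- box 1 is 50 units wide

-- numerical gradient descent on the total penalty
descend :: Int -> (Double, Double) -> (Double, Double)
descend 0 p = p
descend n (x1, x2) = descend (n - 1) (x1 - eta * g1, x2 - eta * g2)
  where eta = 0.05
        h   = 1e-6
        g1  = (total (x1 + h, x2) - total (x1, x2)) / h
        g2  = (total (x1, x2 + h) - total (x1, x2)) / h

layout :: (Double, Double)
layout = descend 2000 (0, 0)   -- converges near x1 = -6.7, x2 = 36.7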

The second app isn't hard --

The second app isn't hard -- once you get the right FRP abstractions. In this case, I found it natural to lift JavaScript's array: it supports push events, pop events, slice events, etc. The second application contains a time-varying array of elements: it can display the elements, compute the current sum over them, etc. I think there is a problem in that you must be an FRP programmer, and know the FRP idioms, which aren't well established. Everyone has written enough C# that they've observed how others do it or figured it out for themselves. There aren't as many people in the FRP community and it is rather fragmented, and I wouldn't expect this sort of thing to be in a paper -- it's sort of like the early days of AJAX libraries, where everyone reimplemented everything on their own :)

BTW, while I haven't read it in a while, my guess is that in the PLT Scheme / Racket paper, they might have wanted to extend the existing APIs to support time-varying values. So, if the old API specified parent as an argument, the new one should too -- except now also support a time-varying parent. For our JS DOM stuff, and I believe in the Haskell and O'Caml libraries, we use a functional form, e.g., "BOX(BOX(TEXT("hello"), BOX(time)))", so the parent widget takes its children as arguments. To use optimized browser APIs, we found it important to implement the composition imperatively, but it's still exposed declaratively.

Could you sketch the details

Could you sketch the details of your solution a bit further? I'm thinking you define a lifted array as taking an event stream of mutation events, say Push(x) and Pop. Then you create this stream as a merge of the add and remove buttons:

counters = buildArray(merge(map(c => Push(X), addButton.clicks),
                            map(c => Pop, removeButton.clicks)))

Is that right? What are you passing in at the X, and how are you going to connect it up to the appropriate counter widgets? How are you going to create the UI? As a function of counters? Can you reuse the same definition of the counter from the first app? I've considered multiple approaches, even this one, but every time I really try to think it through to the last bits it doesn't add up as I thought it would, and it ends up requiring imperative state or becoming exceedingly ugly.

Everyone has written enough C# that they've observed how others do it or figured it out for themselves. There aren't as many people in the FRP community and it is rather fragmented, and I wouldn't expect this sort of thing to be in a paper -- it's sort of like the early days of AJAX libraries, where everyone reimplemented everything on their own :)

If not in a paper, I'd still expect this stuff to be written down somewhere. The idioms for using a language are important, especially if they are non-obvious as with FRP (to me they are, but it could just be that I'm too stupid). Perhaps you're right that the C# idioms for creating GUIs are no less difficult, but I don't remember struggling with them at all.

BTW, while I haven't read it in a while, my guess is that in the PLT Scheme / Racket paper, they might have wanted to extend the existing APIs to support time-varying values. So, if the old API specified parent as an argument, the new one should too -- except now also support a time-varying parent. For our JS DOM stuff, and I believe in the Haskell and O'Caml libraries, we use a functional form, e.g., "BOX(BOX(TEXT("hello"), BOX(time)))", so the parent widget takes its children as arguments. We found it important to implement the composition imperatively, but it's exposed declaratively.

If you treat the parent as a time-varying value, that doesn't really make for compositional GUIs. As for the Haskell libraries, those that I've seen don't allow you to write GUIs compositionally (by passing the children to the parent as arguments, resulting in nested expressions similar to HTML), or they use imperative callbacks, or both. Flapjax AFAIK extracts stuff from GUI elements by element ID, effectively routing everything through a global mutable hash table. But if you could show how to do the counters in a functional way with Flapjax, that would be great.

Flapjax AFAIK extracts stuff

Could you sketch the details of your solution a bit further? I'm thinking you define a lifted array as taking an event stream of mutation events, say Push(x) and Pop.

Pretty much -- this is actually a common paradigm in languages like Max/MSP. There are all sorts of things you want to do with an array, and it's just as true when reacting to things. So a poor approximation (e.g., due to some calls being incorrectly handled destructively) would be:

// apply the Array method named by the command to the accumulated array
var myArray = commandsE.collectE([], function (c, acc) { return [][c.name].apply(acc, c.args); });

There are further fun questions, such as handling different types of delta computations. Not all the semantics of all array methods are obvious and others don't appear to have needed it, so this never made it into our standard APIs.

Here's an example of writing your app without it:

function makeWidget() {
  var increment = button("increment");
  var decrement = button("decrement");
  // each click becomes a "command": a function from the old count to the new count
  var commands =
    $E(increment, 'click').mapE(_ => (n => n + 1))
      .mergeE($E(decrement, 'click').mapE(_ => (n => n - 1)));
  return {
    modelB: commands.collectB(0, (command, acc) => command(acc)),
    view: DIV(increment, decrement) };
}


var addWidgetButton = button("add widget");
var widgetsB = $E(addWidgetButton, 'click').collectB([], (_, acc) => { acc.push(makeWidget()); return acc; });
// lifting the sum over widgetsB yields a behavior of behaviors, so switch back down
var sumB = widgetsB.liftB(ws => ws.reduce((accB, w) => liftB((a, b) => a + b, accB, w.modelB), constantB(0))).switchB();
var page = HTML(addWidgetButton, widgetsB.liftB(ws => ws.map(w => w.view)), "Current Sum: ", sumB);

The collect in the "widgets" definition is where I might sneak in the fancier array (for performance, brevity, etc.).

Flapjax AFAIK extracts stuff from GUI elements by element ID

We do that to enable "unobtrusive" or "aspect" JS styles. E.g., the $E function above, and the insert primitives. $E should also work directly on elements -- the ID version (used to, at least) just proxies to the node version after resolution!

Finally, if not clear from above, imagine some time-varying list, such as a shared task list that can shrink or grow, or a list that appends on every click. You can still declaratively compose:

UL(map(LI, expandingList))

You might be referring to the DOM's restriction that something can only exist in one place -- we decided to preserve these semantics to be useful in the browser environment, but that was an explicit, optional decision that you wouldn't see elsewhere.

Flapjax doesn't preserve equalities such as "behavior (array (behavior int)) = behavior (array (int))". Thus, for the append above, I was careful to destructively append to the existing array of widgets. Neel's FRP work on the frontpage relates somewhat to this. I actually found incremental array behavior to be the tricky part, which relates to Acar's work on traceable data types: all fast FRP implementations historically do that (encapsulate scene graph, DOM, array), but implementing ADTs for individual domains is still the general incremental algorithm design problem.

Thanks. So Flapjax is not a

Thanks. So Flapjax is not a pure FRP UI interface. To see this consider the code:

increment = button("X");
decrement = button("X");

If button("X") were pure, then this would be the same as:

increment = button("X");
decrement = increment;

But it is not. This solves the statefulness problem by using explicit mutable state (you'd have to write all the code in the IO monad in Haskell). So for now this still stands: to write GUIs you either use mutable state, or you need to invent a new abstraction on top of FRP, or you get something ugly.

To keep it pure you have to pass the events coming from the parent widget in to button() (and get something ugly). Or you need to treat button("X") as a button constructor that still needs a parent passed to it to create a real button. The problem then is that you can't say increment.clicks (and now you get something that needs new abstractions, you'll probably end up with something like Fudgets).

We do that to enable "unobtrusive" or "aspect" JS styles. E.g., the $E function above, and the insert primitives. $E should also work directly on elements -- the ID version (used to, at least) just proxies to the node version after resolution!

I now see that the statefulness is coming from somewhere else.

Flapjax doesn't preserve equalities such as "behavior (array (behavior int)) = behavior (array (int))". Thus, for the append above, I was careful to destructively append to the existing array of widgets. Neel's FRP work on the frontpage relates somewhat to this.

That looks like it is related to what I'm looking for, I will definitely read it :)

I actually found incremental array behavior to be the tricky part, which relates to Acar's work on traceable data types: all fast FRP implementations historically do that (encapsulate scene graph, DOM, array), but implementing ADTs for individual domains is still the general incremental algorithm design problem.

Yes. You can get quite far with just an incremental array. OOP GUI frameworks usually only have something like this (often called ObservableCollection).

You wouldn't write it in the

You needn't write it in the IO monad (by Neel's linear typing, as far as I could read the wiggles ;-). Again, we didn't *need* to do this: we preserve the underlying resource abstraction because developers in our domain need to think about it.

As a sample alternative, we could create pure GUI ASTs rather than reuse the DOM for the first-class GUI representation. The semantics of viewing (inserting via aspects or in place through templates) would be to copy-on-view, avoiding the linear DOM constraint and preserving even ref equality. This would likely suck if you wrote code outside our library, look weird (typing in one box gets duped in another), etc. -- again, I view our approach as an extension or careful embedding, and not far from the pure variant conceptually. Our goal isn't pedantic purity.

There is implicit state and non-determinism -- something like $E should be different per widget instance. What you view as an instance -- a box in the displayed page, or the underlying AST (call to the GUI lib) -- is a choice. You can make it explicit (Neel's stuff), but the code you write won't change AFAICT (though tool support might, and you might further restrict code based on it).

In Haskell you would have to

In Haskell you would have to use a monad, but in a specialized DSL indeed you wouldn't.

As a sample alternative, we could create pure GUI ASTs rather than reuse the DOM for the first-class GUI representation. The semantics of viewing (inserting via aspects or in place through templates) would be to copy-on-view, avoiding the linear DOM constraint and preserving even ref equality. This would likely suck if you wrote code outside our library, look weird (typing in one box gets duped in another), etc. -- again, I view our approach as an extension or careful embedding, and not far from the pure variant conceptually. Our goal isn't pedantic purity.

There is implicit state and non-determinism -- something like $E should be different per widget instance. What you view as an instance -- a box in the displayed page, or the underlying AST (call to the GUI lib) -- is a choice. You can make it explicit (Neel's stuff), but the code you write won't change AFAICT (though tool support might, and you might further restrict code based on it).

I agree that this might be a good approach, and I used this approach in an implementation in Python with Qt (there I did what you mention: if you use a widget in multiple places then the library synchronizes the different instances on the screen). But this does not ensure referential transparency. As with Flapjax you cannot do common subexpression elimination on identical subexpressions, and in general the equational laws of pure functional programming don't work any more (you may call this pedantic purity but it is a problem that you are back to reasoning about functions as if they have side effects -- because they do). So while Neel's method makes the problem explicit, it does not confirm the hope that many have that FRP is the way to do GUIs in a purely functional way (at least not yet).

Copy-on-view doesn't result in a pure interface either as far as I can see, because then how do you get the events from the widgets?

b = button("X")
cs = b.clicks
horizontal(b,b)

Now what is the stream cs? Furthermore the previous snippet is obviously not the same as this one:

b = button("X")
cs = b.clicks
horizontal(button("X"), button("X"))

But if button were pure it would be. So you need to have a way of hooking up the event streams just before the button gets viewed. A first step is to let horizontal() return not only a composed widget, but also a tuple of event streams coming from the buttons. Then you unpack that tuple and hook up the event streams. But then you shifted the problem to horizontal. If you take this approach all the way down to the main window of the application, the problem is technically solved but you have a very hairy mess of data flow. You can tackle this by providing new combinators for composing both the views and event streams from widgets. This is what Fudgets is doing, but that can hardly be called existing FRP without new non-obvious abstractions because Fudgets is not one but two PhD theses.

In Haskell you would have

In Haskell you would have to use a monad, but in a specialized DSL indeed you wouldn't.

There's monad the general idea (is everything using identity a monad? hurray?) and monad the language feature. E.g., I can imagine decoupling a notion of identity from a GUI ADT; any useful GUI instance would combine the two.

Copy-on-view doesn't result in a pure interface either as far as I can see, because then how do you get the events from the widgets? ... But this does not ensure referential transparency

Yes, "copy-on-view" is ambiguous -- I meant it in that the copies should behave the same except for view (which is copied). You see 2, but they are otherwise the same: as in, the same click stream.

You might then want to extend the API to help optionally differentiate which copy an event came from. More generally, I bet you'll still want identity somewhere due to the domain, but it can be encapsulated and reasoned about. We just did it earlier in the API. The DOM starts to make more sense, right? We can more carefully pinpoint where in a typed language to enable compiler-level reasoning. Again, I'd argue it doesn't matter much at the user level (or maybe is detrimental), but might for tooling. I don't buy it for performance in traditional systems -- general optimizations are sub-par -- but there are other reasons.

In defense of Neel's work, he takes the DOM as the assumption. I've only read a few papers on linear types; I'm not sure what happens when, for example, you use the DOM as the model and the GUI as a function of it: I suspect you'll still get a lot of beneficial functional reasoning.

This is what Fudgets does, but it can hardly be called doing this with existing FRP and no new non-obvious abstractions: Fudgets is not one but two PhD theses.

I don't understand what you mean. "Most of the work on Fudgets was done in 1991-1996, but Fudgets are still maintained" -- the various variants of FRP descend from the (wonderful) Fudgets project. Pretty much every FRP system does it differently, and for different reasons.

At this point, I think we're talking about different sides of the same coin. I agree that Flapjax isn't pure (cleanly interacting with JS should be an alarm bell ;-)), but you can write code in a pure way up to the DOM, which (by Neel's work) can be explained with squiggles. We can drop the DOM requirement, and you'll still want identity due to the domain, but, if we're pedantic, I believe we can keep pushing it down. An example of this is the attribute grammar work I'm doing, which casts the layout logic as tree folds / catamorphisms.

Perhaps the subtlety is whether this distinction is visible across the APIs: if we fully implemented the GUI low-level functionally, but used the copy-on-view interface, would that solve the concern? The user would then be free to recreate the DOM / identity handling as needed.

Btw, rereading my replies, they sound a little combative or defensive, but they're not intended to be. I'm trying to explain careful design decisions -- some of which, apparently, have been subtle enough that people have since written multiple papers to explain and improve them.

Neel's and your approach

Sorry, I meant "the IO monad" instead of "a monad".

Btw, rereading my replies, they sound a little combative or defensive, but they're not intended to be. I'm trying to explain careful design decisions -- some of which, apparently, have been subtle enough that people have since written multiple papers to explain and improve them.

Your replies are insightful and helpful :) I can also see how my replies sound combative and/or stupid (I'm obviously just an FRP dilettante), but they too were not intended as such. Let me try to explain my concern more clearly.

Neel's and your approach with Flapjax is good; there is nothing wrong with using implicit state or identity. This is especially true if you have to interact with something external such as the DOM, but even if you don't, it makes your code cleaner if state or identity is really what you need. I'm just interested in what an FRP GUI library that does not use such things would look like, i.e. a pure functional GUI library. You could view it as a library that realizes the old FRP dream of pure functional GUI programming, or as a library that has to live within arbitrarily imposed purity constraints ;)

Suppose that the interface of the program to the world is as follows. We are using a pure functional language, e.g. Haskell. The input to the program is an event stream of all incoming events: mouse clicks/movements and key presses. The output of the program is an image behavior that gives the GUI output of the program. FRP primitives are available, as well as graphics primitives for creating images. My question is: how can we design a GUI library in such a situation?
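In Haskell types, the assumed interface might look like this (Input, Image and the FRP types are stand-ins I invented, not a concrete library):

data Input      = Key Char | MouseMove (Int, Int) | MouseClick (Int, Int)
data Event a    = Event                -- stub: a stream of timed occurrences
data Behavior a = Behavior             -- stub: a time-varying value
data Image      = Image                -- stub: what the graphics primitives build

-- the whole program is one pure function from inputs to a picture
program :: Event Input -> Behavior Image
program = undefined                    -- the question is how to structure this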

One way is to build a traditional GUI toolkit with identity and state on top of this, by modeling the entire program as a giant imperative state machine and doing a single fold of the state machine over the input event stream. So we don't really use the provided FRP system at all, except for one fold of Event x WorldState -> WorldState. This way we create our own artificial imperative world on top of the FRP interface to the outside world (exactly the reverse of what FRP was intended for). We could even use a monad to make threading the WorldState through the program more palatable. We would then be doing something like Flapjax's and Neel's approach, by implementing a traditional imperative GUI toolkit and implementing a new FRP system inside the imperative world that interfaces with our imperative GUI toolkit.
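Filling in the program stub from the sketch above, the giant fold amounts to this (foldE is assumed here to take an initial state, which the pseudocode further below elides):

-- stubs for the assumed FRP primitives; a real system supplies these
foldE :: (a -> s -> s) -> s -> Event a -> Behavior s
foldE _ _ _ = Behavior

mapB :: (a -> b) -> Behavior a -> Behavior b
mapB _ _ = Behavior

data WorldState = WorldState           -- the entire imperative toolkit's state

step :: Input -> WorldState -> WorldState
step _ w = w                           -- stub: dispatch the event, update the widget tree

render :: WorldState -> Image
render _ = Image                       -- stub: draw the widget tree

program :: Event Input -> Behavior Image
program inputs = mapB render (foldE step WorldState inputs)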

A more natural approach would be to create primitives for making widgets directly on top of FRP.

How would we implement a textbox? We could do something like this:

type TextboxState = { cursor : int; text : string }

action keypress state = 
  match keypress with
  | Left -> { cursor = state.cursor-1; text = state.text }
  | Right -> { cursor = state.cursor+1; text = state.text }
  | Letter c -> { cursor = state.cursor+1; text = insertAt state.cursor c state.text }

letterHeight = 10
letterWidth = 4

display state = 
  let cursorX = state.cursor * letterWidth in
  rectangle 0 0 (letterWidth*state.text.length) letterHeight `over` 
  drawString state.text `over` 
  line cursorX 0 cursorX letterHeight

main keyboard mouse = mapB display (foldE action keyboard)

Now that we have one textbox, how do we display two textboxes vertically? Let's say for simplicity that we want the keyboard input to go to the textbox that the mouse is hovering over.

main keyboard mouse = 
  let topKeyboard = filterE (fun e -> mouse.Y < letterHeight) keyboard
  let bottomKeyboard = filterE (fun e -> mouse.Y > letterHeight) keyboard
  let textbox1 = mapB display (foldE action topKeyboard)
  let textbox2 = mapB display (foldE action bottomKeyboard)
  map2B (fun t1 t2 -> t1 `above` t2) textbox1 textbox2

We can abstract the pattern:

type 'state Widget = { display : 'state -> ImageB; action : InputEvent -> 'state -> 'state }

vertical a b keyboard mouse =
  let topKeyboard = filterE (fun e -> mouse.Y < letterHeight) keyboard // use a static height for simplicity
  let bottomKeyboard = filterE (fun e -> mouse.Y > letterHeight) keyboard
  let aDisplay = mapB a.display (foldE a.action topKeyboard)
  let bDisplay = mapB b.display (foldE b.action bottomKeyboard)
  map2B (fun ad bd -> ad `above` bd) aDisplay bDisplay

textbox = { display = display; action = action }

main = vertical textbox textbox

As you advocate, in reality you might want to create a specialized layout manager instead of doing layout in FRP, but the pattern remains the same: the layout manager's job is twofold: (1) rendering the widgets in the right place, and (2) routing/filtering the right events to the right widgets.

This is all going well. Suppose now that we do not only want to combine non-interacting widgets, but we want to give widgets dynamic inputs and outputs. For example we want to get an event stream of the text property of the textbox, and we want to give an event stream of "setText" events as an input to the textbox to let the application set the text programmatically.

Let's do it the way something like Flapjax or Neel's approach would do it: we make textbox a function that takes the input event stream and returns both the output event stream and the textbox's graphical representation:

textbox setText = 
  let textChanged = ...
  let display = ...
  (textChanged, display)

But it is not possible to fill in the ... so that this will work, because textbox()'s intended behavior is not a pure function (as mentioned earlier). A textbox function call does not have access to the appropriate input events, and so cannot return the thing we want. Clearly another approach is needed for a pure FRP interface to a GUI toolkit. Pure functional FRP research has focused on animation and games, and indeed pure FRP works nicely in those areas. But for composable GUI programming something more is needed. How do we solve this? It might be obvious how to do it to someone like you who has more experience with FRP; in that case I'd be very interested to know how. If it is indeed not obvious how to redesign this, then I think this is an important research area, at least for the pure functional folks. Perhaps you're right that identity is inherent in the domain of GUIs, and it is best to acknowledge this explicitly. In that case the best approach might be like the existing works: essentially coding the GUI in the IO/ST monad while still using FRP as much as possible (i.e. the Flapjax approach). That feels a bit disappointing in a language like Haskell, though, especially if you remove the constraint that you have to interact with the DOM or another external GUI toolkit.

Resource Directories

A concept I've found useful for RDP is directories (or spaces, volumes). The essential notion is that every resource is given a unique identifier explicitly via the source code - i.e. one simply does not use 'button("x")', but rather something closer to 'button(path/foo/bar/x)'. This works well for identifying resources in an abstract infinite space thereof.

Since RDP is referentially transparent (per instant), all resources named by 'button(path)' are the same button. But developers can easily partition the path (path/foo, path/bar) to guarantee uniqueness where they need it. This partitioning is made securable by modeling the path as an ADT that is unidirectional (parent-to-child only).
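A sketch of such a path ADT in Haskell (hypothetical, but it captures the unidirectional property):

module Path (Path, root, child) where

newtype Path = Path [String] deriving (Eq, Ord)

root :: Path
root = Path []

child :: String -> Path -> Path
child name (Path segs) = Path (segs ++ [name])

-- no parent accessor is exported: handing a component a Path grants it
-- exactly that subtree and nothing above it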

An advantage of embedding paths in source-code is that they are very stable. Reordering, inserting, and deleting source code is unlikely to affect the paths to most subprograms, and therefore does not upset state resources identified by those paths. This stability is valuable for live programming, runtime upgrade, and orthogonal persistence.

For CSCW, application mobility (e.g. desktop to laptop), and the like, it's a good thing if the model has stable shared state and copy-on-view - i.e. where typing in one box is duplicated in any other view of the same resource. This simplifies things.

The essential notion is that

The essential notion is that every resource is given a unique identifier explicitly via the source

You have the same problem of having to thread through the ability to create a 'fresh' variable -- in this case, it's the question of what to put into that identifier. You've pushed the problem to the programmer, who will build a framework to avoid doing it all the time, and we're back at square one.

I agree that a unified resource naming model is better than DOM IDs, however :)

For CSCW, application mobility (e.g. desktop to laptop)

I was initially attracted to the idea for a variety of reasons, this one included. However, after playing with pen-and-paper examples, I was left with an "I'll believe it when I see it" attitude -- it may be a matter of relearning paradigms, or it really may be a tedious wrong default. I don't know of evidence supporting your claim (maybe the X server?).

Fresh identifiers

I disagree with your hypothesis - i.e. that this places an undue burden on the developer. From working with pseudocode, it does not seem any more difficult than assigning variable names. A little syntactic sugar helps, though.

Anyhow, there are ways to work around it. User-defined syntax could assign fresh names or symbols, if necessary. But the stability advantages are considerable if the names are ultimately rooted directly in source. I think developers will accept naming state resources themselves when it results in better support for continuous compilation, live programming, visualization, debugging and error messages, etc.

I don't know of evidence supporting your claim

You do. Consider filesystems, for example, and the stability / persistence / upgrade / shared-views / cooperative-work advantages we gain from keeping state in a filesystem rather than creating 'new' variables in code. I am certain you know these advantages very well, even if you haven't considered a rigorous application of them in programming language design.

(Similarly for databases, though I used filesystems as an example because the connoted relationship to 'paths' is clearer.)

There is plenty of trouble with filesystems. The most immediate trouble is that they don't support first-class language types (such as functions) or the appropriate properties (such as laziness, type safety, and integration with language concurrency control). Security is another issue, and is usually achieved poorly (i.e. no directory capabilities for secure partitioning of state).

But these problems can be fixed with a language-level 'model' of an external filesystem - perhaps provided as part of an abstract machine (VM, runtime) as an alternative to heap and 'new'.

File systems, at least the

File systems, at least the ones I use, are counter examples: you can have anonymous tmp files.

Anyways, that's not what I meant: file systems aren't user interfaces. Red herring. Jules discussed a widget system where he played with this, and X window buffers are also close. I simply haven't seen a compelling UI system that does this, and as far as speculation goes, my twiddling around was awkward.

Too concrete

you can have anonymous tmp files

Sigh. Sure, you can make anonymous temporary files that lack these advantages. But I think you're intentionally missing the point here. Would you argue that humans are a counter-example to an erect walking posture by pointing out that 'there are people with no legs, and you can always slouch'?

file systems aren't user interfaces

I disagree. File systems are, effectively, user interfaces. This is especially embraced in Linux, where even devices and video buffers and user actions (icons, executables) are modeled in the filesystem. What is a button but an icon? Modern file systems typically provide simultaneous views to multiple users and user-agents.

You should probably look at file systems and browsers and see something better than your average UI system, modulo performance. RESTful UI models provide another example.

I find these systems quite compelling, along with more zoomable user interfaces - which, by nature, expand on these notions.

Sigh. Sure, you can make

Sigh. Sure, you can make anonymous temporary files that lack these advantages.

Yes, and I do -- the issue here is, primarily, that this is a red herring, and, secondarily, that I expect the file system to support anonymous files. It must provide such a mechanism, which brings back the monads or what have you (global/threaded state).

file systems aren't user interfaces

I disagree.

You're completely right. Implicit from context, I meant that file systems aren't frameworks for creating graphical user interfaces. I don't see why domain abstractions for manipulating files would be de facto equally applicable to user interfaces, especially when I have more specific knowledge available.

Hope that makes more sense!

Stable Identity

It wasn't manipulating files I was talking about, or at least that wasn't the point of my argument.

Rather, I was discussing identifying and accessing resources - via stable paths and stable state, externally observable and extensible. I grant that file IO is more than a little unwieldy, but that was not relevant to my point. A resource can just as easily be a button or a textbox (or something that can be viewed and manipulated as one).

The difference between logical path identity and structural identity is obvious when we have persistence or stability concerns. For example, if we have code 'HBox(Lbl(foo),TextBox(80,1))' and change it at runtime to 'VBox(HBox(Lbl(baz),TextBox(80,1)),HBox(Lbl(foo),TextBox(80,1)))', then what happens to the identity and text stored in the original textbox?

Use of logical paths for each resource would solve that problem, naming resources as a concern orthogonal to structuring them in a view. This allows a solution to the difficult persistence and upgrade problem, and aids cooperative work models, extension, auditing, and the like.
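Returning to that example, a hypothetical sketch of the difference (the View constructors are made up; child/root as in the path ADT sketched earlier):

data Path = Path [String]
data View = HBox [View] | VBox [View] | Lbl String | TextBox Path Int Int

root = Path []
child name (Path segs) = Path (segs ++ [name])

before = HBox [Lbl "foo", TextBox (child "foo" root) 80 1]

after  = VBox [ HBox [Lbl "baz", TextBox (child "baz" root) 80 1]
              , HBox [Lbl "foo", TextBox (child "foo" root) 80 1] ]

-- the 'foo' textbox keeps its identity and text across the upgrade: its
-- identity is its stable path, not its position in the structure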

But it does mean rejecting the OOP concept of 'new()', in favor of discovery and manipulation. There are no new widgets, no new state, no newIORef, for anything we want stabilized. Structures instead represent 'views' of external resources.

Oh, and a filesystem could easily be a framework for graphical UI - e.g. each form as a directory, each textbox as a file, each button as an executable, possibly some 'recommended view/layout' file per directory. It isn't optimized for this use, but I think we can learn from it.

Absolute paths considered harmful

It can be convenient to pretend that you have global identifiers, but I think the better way to set that up is with an implicit context to which the identifiers are relative. Even if your context is "the internet", it may still occasionally be useful to instantiate it more than once.
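For instance, a hypothetical Haskell sketch with a Reader supplying the implicit context (Path, mkButton, etc. are stubs, not any real API):

import Control.Monad.Reader

newtype Path = Path [String]
child name (Path segs) = Path (segs ++ [name])

data Widget = Widget                   -- stub
mkButton :: Path -> Widget             -- stub constructor
mkButton _ = Widget

-- identifiers are resolved against an implicit root
type Scoped a = Reader Path a

button :: String -> Scoped Widget
button name = do
  root <- ask
  pure (mkButton (child name root))

-- the same subtree instantiated twice, under different roots
twice :: Scoped (Widget, Widget)
twice = (,) <$> local (child "left") (button "ok")
            <*> local (child "right") (button "ok")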

Relative paths

Yes, the paths I was discussing are relative. That is: you cannot know where the initial 'path' (in 'path/foo/bar/x') is in any global hierarchy. For security, there is no accessor from a child to a parent node.

I don't actually use paths as arguments, just 'something closer to them'. For my own work, I decided to use the dual. A functional model of what I use might look like:

data Directory p e = Directory 
  { dpath :: p -> (e, Directory p e)
  }

However, I find it valuable to be able to 'reset' a whole directory back to a zero state. This is especially useful for startup of a new application, since it gives me both a well-defined initial state and allows orthogonal persistence without any extra explanations or hand-waving. So I also provide this as a second capability.

Also, a 'pure functional' model of directories proves to be semantically inconvenient - it is difficult to instantiate a Directory in an arbitrary monad without infinite allocation! So I model the directories themselves using arrows, which allow for the appropriate effects.

I then cross this with the reactive model by wrapping the argument and return types in signals so they can vary over time. My RDP variation of this model ultimately looks more like:

data Directory (~>) s p e = Directory 
  { dpath :: s p ~> (s () :|: (e :&: s (Directory (~>) s p e)))
  , dzero :: s () ~> s ()
  }

In this case an invalid path - or an actively zeroed path - will return unit instead of an element and a new subdirectory. When dzero is released, all sub-paths return to an initial state as though they were freshly discovered. (:|: is an asynchronous sum, :&: is an asynchronous product, in my generalized arrows model)

I think the better way to set that up is with an implicit context to which the identifiers are relative.

I make all paths relative, and the paths expose 'absolute' opaque capabilities. I do not use implicit context or absolute paths at all.

I tackled and IMO solved this

in OCamlRT [1]. The solution is simple: a widget is represented by a tuple (widget, state), where widget represents the physical layout of the widget, and state is an FRP value (a discrete event stream (e.g. for buttons), a continuous behavior (e.g. for text entries), or some combination thereof). E.g. in OCamlRT, a text entry has the type (widget * string behavior).

Widgets which display something are simply functions of some FRP value. E.g. a label has the type string behavior -> widget.

Containers are simply functions from widgets to widgets.

Because all widgets are simple values, and widget constructors are simply functions, a programmer can define "new" widgets simply by composing functions. e.g. a variable label, text entry and button could be composed as:

let my_widget label_value =
  let label_widget = var_label label_value
  and text_widget, text_value = text_entry ""
  and button_widget, button_event = button "Click me"
  in
  vbox [label_widget; hbox [text_widget; button_widget]],
  (text_value, button_event)

On a side note, layout is IMO best done using a simple constraint system. It's almost trivial to implement one in any logic language.

[1] http://fstutoring.com/~chris/programs/ocamlrt2_20100811.tgz

While this is a nice

This is a nice solution in an impure language like OCaml. It seems that the functions that create widgets are not pure, though, so for example button would have type String -> IO (Widget, Event ()) in Haskell. Previous discussion on this kind of interface.

My largely uninformed opinion of FRP

Semantically (ignoring implementation issues), the way I look at FRP is something like this:

type FR ei ci eo co = (ci -> co, ei -> (eo, FR ei ci eo co))

That is, a functional reactive value is a behavior (a mapping from continuous input state ci to continuous output state co) together with a rule for state change in response to an input event (ei), producing an output event (eo) as the state changes. The events can be used to indicate observable changes in state, but my decomposition into events is ad hoc, and so you may have compatibility requirements between the events you issue and the behavior you report.

Functional reactive values can maintain state in response to events (see also: the State monad), but are semantically pure and glitch-free. FR values are updated by virtue of being hooked up (indirectly) to the main functional reactive value of the program. It seems like a reasonably simple way to model the kinds of things you're asking about, but I wonder if it meets your criteria for being FRP?
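One caveat if you try this literally in Haskell: type synonyms can't be recursive, so FR needs to be a newtype. A toy instance, say a counter that reports its new count on each event:

newtype FR ei ci eo co = FR (ci -> co, ei -> (eo, FR ei ci eo co))

counter :: Int -> FR () () Int Int
counter n = FR ( \() -> n                        -- behavior: the current count
               , \() -> (n + 1, counter (n + 1)) -- event rule: increment and report
               )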

(BTW, I haven't looked hard enough to know whether Chris' version is pure or not.)

Oh, I see

I see what was being asked. The issue is just about how you bind names to parts of a reactive value -- naming the parts before they are assembled isn't sufficient. I might be able to say something more intelligent about that, but not tonight.

Final comment

I was entertaining the idea of mocking something up in Haskell this weekend, but I think I'm going to spend my time on something else. I do think the abstraction I'm calling Processes, which I've mentioned a couple of times on here, is a good fit for doing UIs in a pure way, so I hope I can say something more about that later. In the meantime, I'll just finish out my thoughts:

You obviously can't write code in a pure language that looks like this:

let b1 = button "test" in 
let b2 = button "hello world" in
vert b1 b2

The way Widgets are modeled, that would bind b1 and b2 to monadic functions, when you really want to bind those names to the respective components of the 'vert' widget. The same problem comes up when trying to refer to layers of a monad transformer stack. Solutions in Haskell include type classes that inject into the correct layer, and Monad Zippers, but both are heavy on boilerplate. Algebraic Effects are also relevant.

I can imagine a solution in Haskell that ends up doing binding like this:

let (b1, b2) = vert (button "test") (button "hello world")

Even if you can get this working, though, it feels like a big hack.

One solution, of course

is to use a DOM model or identifiers as discussed in the thread you linked. Many popular GUI frameworks already do this (e.g. Android).

You really bump into the limitations of pure functional languages with any approach, though, because what distinguishes *my* button "a" from *your* button "a"? Or from the button "a" I created last week? They are textually identical, so they should refer to the same button, despite running on different machines or at different times, yes? (Yes, I know Flapjax can handle the different-times part, but this is obviously not always the intent.)

FRP is *not* FP. There is a notion of state and identity, and any sane FRP language must make this clear. (Even if it's as simple as introducing a keyword such as "fresh" which would create a GUID at each place it textually appears.)
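For illustration, one way to approximate such a "fresh" keyword in Haskell is a Template Haskell splice that bakes the source position into an identifier. This is only a sketch -- identity derived from line numbers is unstable under edits, which is exactly where path-based naming has the advantage:

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- $(fresh) expands at compile time to a string unique to its occurrence
fresh :: Q Exp
fresh = do
  loc <- location
  stringE (loc_module loc ++ ":" ++ show (loc_start loc))

-- usage: button $(fresh) "a" and button $(fresh) "a" denote distinct buttons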

Staging, Constraints

Rather than 'the compiler', could you express this as staged FRP evaluation, with the constraint solver being an FRP behavior?

I understand that stability is a problem - i.e. supposing we add a new widget to an existing scene, we would usually want a minimal change in the constraint solution to present the new widget. This stability is important both for users (who cannot have stuff shifting too much under the mouse) and for performance (i.e. maintaining FRP relationships with the unchanged widgets).

Consequently, I've been pursuing stable state models for my RDP work. Among the better options are constraints and generative grammars, since we can often find a new solution with minimal backtracking, and if we do backtrack we can often find a new 'stable' point.

For constraint models, I've been interested in the possibility of presenting a solution such that each variable in the solution is a time-varying (animated) value. That is, the constraint solution is an animation, not a static page. The functions, however, would be simple - i.e. piecewise continuous polynomials of time, maybe some trigonometric functions. (I don't believe we need much richer than 'piecewise discrete' - jumping between constant values - to reap a huge benefit.)
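A sketch of what such a time-varying solution could look like in Haskell (all names made up):

type Time = Double

data Fn    = Poly [Double]               -- c0 + c1*t + c2*t^2 + ...
data Piece = Piece { upTo :: Time, fn :: Fn }
type Animated = [Piece]                  -- pieces in increasing upTo order

at :: Animated -> Time -> Double
at [] _ = 0                              -- out of domain; arbitrary default
at (Piece end (Poly cs) : rest) t
  | t <= end || null rest = sum (zipWith (\c k -> c * t ^^ k) cs [0 :: Integer ..])
  | otherwise             = at rest t

-- 'piecewise discrete' is just the special case of a constant Poly [c] per piece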

Anyhow, I feel that the staging - i.e. that the compilation occur within the reactive framework - is valuable for rich user interfaces where the set of active widgets is a function of the data (e.g. widgets for each result from a relational query, alert windows) and user inputs (open forms, menus).

Yes, the way I've been going

Yes, the way I've been going about it has essentially been staged: we generate the CSS engine (at browser compile time), and later you can interact with it with FRP (a JavaScript library). We could even lift the engine to be automatically incremental wrt FRP, but that's not a good idea: you'll want some traditional AG optimizations there. Anyways, you can replace the CSS widget with a jQuery widget (and we have ;-)) and it still works: generate HTML5 instead of C++.

I suspect the expressive capabilities are stronger, however -- e.g., interleaving first-class grammars with FRP. It's interesting both as a declarative and performance challenge. This stuff is more speculative, perhaps getting a bit beyond the scope of LtU..

I would love to see a good

I would love to see a good list of sample apps across domains, especially simple apps that are multi-disciplinary (e.g. networking + HCI).

I do feel that 'toy' apps are often deceptive. I often get the impression of someone cherry-picking one square foot of his kitchen floor and saying: look here! this patch of clean surface demonstrates my kitchen is elegant, uncluttered, and capable of supporting multiple cooks concurrently!

However, toys still serve an important role. Most people (including me) are not very good at abstract reasoning without some concrete examples to anchor them.

One example I liked recently for reactive systems: a 'pendulum' problem. Model a pendulum hanging from the mouse in a 2D window. The pendulum must move correctly based on gravity and relative position of the mouse. (An interesting variation would be to model a slingshot, i.e. which accounts for velocity and acceleration of the hand and is sensitive to releasing the mouse button.)