Declarative layout and/or UI languages?

Can anyone point me to any work on declarative languages for user interfaces? I'm aware of the plethora of XML languages, a favourite of web frameworks in particular. I mean a more concise declarative UI or layout language.

I've built a small declarative library which outputs HTML, but I'm not really satisfied with it, so I'm wondering what other work has been done here for describing the high-level user-interface structure.

I'd take a look at work at

I'd take a look at work at UW:

* Adaptive layout
* Context-sensitive layout
* Constraint layout (done w/ researchers at Monash & Melbourne)

You already know about FRP -- VRML and some other layout systems incorporated notions from it. Dan Ignatoff lifted MrEd (the PLT Scheme GUI toolkit) into FRP, and Flex, Flapjax, JavaFX, etc. also looked at including dynamic values in their layout systems; I think modern Smalltalks are doing so as well.

More traditional systems look at geometric (CAD), grid (Swing), or flow constraints (CSS). The former make more sense for UIs, the latter for documents and (IMHO) sufficiently complex UIs. Prefuse/Flare show how force-directed notions work in some domains.

There are some papers from the 80s looking at it (Pict, Garnet/Amulet, etc.), but their age shows.

Michael Greenberg wrote a cool thesis on mixing bidirectional programming with UIs and maybe there'll be more on it as he's joined the UPenn group.

Finally, though a bit of a stretch, Processing and Max are being (ab)used for rich interfaces as well.

The field really seems to be stagnating, which is unfortunate. A related area with a similar paucity of good work is linguistic support for animation. A lot of effort is being put into visual front-ends or layout for particular domains (eg., charting), but programmatic control is often also important, as is the ability to integrate with other systems and tools, so a better general foundation for these would be super cool.

Haskell.

Have a look at HaskellWiki's listing of high-level GUI libraries. Stuff is progressing, slowly but steadily: Haskell is too good an imperative language to make the topic an itch needing urgent scratching.

Adobe's Adam and Eve

Try "Adam" and "Eve" from the Adobe Source Libraries; the docs say that they deal with both declarative layout and UI behaviour.

Property models (GPCE 2008)

In GPCE 2008 there was a paper, "Property models: from incidental algorithms to reusable components", by researchers from Texas A&M and Adobe, which generalizes the approach of Adam. The paper presents a DSL they use for implementing user-interface dialogs. The DSL specifies the variables of a user-interface model and the dependencies between them. It looks like a great abstraction. Unfortunately, I could not find a PDF outside the ACM digital library.
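The core idea -- a model as a set of variables plus multi-way dependencies that are re-solved when the user edits any one of them -- can be sketched in a few lines. This is a toy illustration in Python, not the paper's DSL; all names are invented, and a real property model also chooses *which* dependency method to run so that edits don't conflict:

```python
# Toy sketch of the property-model idea: a dialog is a set of variables
# plus multi-way dependencies; editing one variable re-solves the others.
# This is illustrative only -- real property models do much more.

class PropertyModel:
    def __init__(self):
        self.values = {}
        self.methods = {}  # target variable -> (input variables, function)

    def variable(self, name, value):
        self.values[name] = value

    def dependency(self, target, inputs, fn):
        self.methods[target] = (inputs, fn)

    def set(self, name, value):
        self.values[name] = value
        # naive single-pass propagation: recompute every variable that
        # depends on the edited one (a real solver plans this carefully)
        for target, (inputs, fn) in self.methods.items():
            if target != name and name in inputs:
                self.values[target] = fn(*(self.values[i] for i in inputs))

# classic image-resize dialog: width, height, aspect ratio
m = PropertyModel()
m.variable("width", 400)
m.variable("height", 300)
m.variable("ratio", 400 / 300)
m.dependency("height", ["width", "ratio"], lambda w, r: w / r)
m.dependency("ratio", ["width", "height"], lambda w, h: w / h)

m.set("width", 800)  # editing width re-solves height via the ratio
```

Here editing `width` leaves the aspect ratio intact and recomputes `height` to 600, which is exactly the "incidental algorithm" the paper argues should be factored out into a reusable component.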

behind pay wall

Based on the abstract and previous work coming out of the Adam/Eve project at Adobe, this paper looks very interesting. Unfortunately, it is behind ACM's pay-wall :(

Thanks!

Google and Google Scholar had a hard time finding this.

Property models is

Property models are definitely a good way to handle the view-model interaction. The paper references this link for Adobe's declarative layout library.

I am more interested in declarative layout at this point than the view-model interaction. Since I've had prior exposure to FRP, that already gives me a good place to start. I haven't come across any published work on declarative layouts however (other than CSS I mean).

You might be interested in

You might be interested in the FRP GUI toolkit I designed a few years ago in OCaml with GTK. I never quite finished it, but the broad overview is this:

  • Functions create widget objects and FRP signals associated with them. So for example, the function to create a text field had the signature "string -> widget * string behavior" (where the input string is the initial value), and a progress meter had the signature "float behavior -> widget".
  • Functions compose widgets together. e.g. there was hbox and vbox both with type signature "widget list -> widget".
  • Simple declarative constraints were allowed, to force a set of widgets to all have the same height. This was the mechanism for generating e.g. grids.
  • Each widget had a "stretchiness" value that determined how much it stretched when its container stretched (e.g. due to window resizing). The math was done in such a way that nesting containers did not affect a widget's on-screen behavior.
  • Some gobbledygook I never worked out well displayed a widget on the screen in its own window (remember that widgets can be composite).
  • Functions were provided for interactions between widgets and FRP (e.g. collapsing a widget behavior into a widget). (This could have been better accomplished by replacing "widgets" with "widget behaviors" everywhere, but would have made a mess of the underlying implementation over GTK.)
  • Drawing canvases remained "imperative": that is, the value displayed by them was actually a drawing procedure. So the type of a canvas widget function was typically "(canvas -> unit) behavior -> widget", where "canvas" is some imperative drawing context. This worked well for (at least) GTK canvas, Cairo canvas, and OpenGL (though the latter needed some trickiness).
  • User input was fully functional / FRP. There was a type "input" which represented the current state of a bunch of input devices (keyboard / mouse I think) which was also associated with drawing canvases (so their signature was actually "(canvas -> unit) behavior -> widget * input").
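The signatures above translate fairly directly into any language with first-class functions. Below is a toy Python rendering (not the original OCaml code; "Behavior" here is a minimal push-based cell, far simpler than a real FRP implementation):

```python
# Minimal stand-ins for the toolkit's core types: a push-based Behavior
# cell and a Widget tree. All names are invented for illustration.

class Behavior:
    def __init__(self, value):
        self.value = value
        self.listeners = []

    def push(self, value):
        self.value = value
        for f in self.listeners:
            f(value)

    def map(self, fn):
        out = Behavior(fn(self.value))
        self.listeners.append(lambda v: out.push(fn(v)))
        return out

class Widget:
    def __init__(self, kind, children=()):
        self.kind, self.children = kind, list(children)

def text_field(initial):   # string -> widget * string behavior
    b = Behavior(initial)
    return Widget("text_field"), b

def progress_meter(b):     # float behavior -> widget
    w = Widget("progress")
    b.listeners.append(lambda v: None)  # real code would redraw here
    return w

def hbox(widgets):         # widget list -> widget
    return Widget("hbox", widgets)

# wire a text field's length (scaled) into a progress meter
field, text = text_field("hi")
ui = hbox([field, progress_meter(text.map(lambda s: len(s) / 10.0))])
```

The point of the design shows through even in this toy version: widget creation and dataflow wiring happen in one declarative expression, with no callback registration code in the application.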

If you like I can see that the source code finds its way to you. That was one university and two disk partitions ago so it's... somewhere.

You probably know about the

You probably know about the FRP work; this is very declarative. I've found that with my recent system, Bling WPF, I don't even need to bother with layout managers anymore: just express all the layout constraints as... constraints. WPF is already semi-declarative (even if you ignore XAML) given its support for retained UI operations and databinding (Bling provides better abstractions for getting at this).

You could check out lots of the OO constraint programming work from the late 80s/early 90s. They are all kind of dated though. I want to revisit these anyways in my next paper, since the ideas have basically been born out in recent UI frameworks (WPF/JavaFX).

My current research is moving towards declaring UIs using physics: you express a bunch of constraint-like springs that are then "solved" by a physics engine (possibly GPU accelerated). Advantages of this approach: a plausible layout falls out of the spring declarations plus collision detection and response, and you get natural animation for free. Disadvantage: physics engines can be incredibly unstable, though I'm finding Verlet integration to work nicely enough.
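For the curious, the core of position-based Verlet integration is small. This is a hypothetical one-dimensional sketch of the spring idea, not the actual research code: two "widgets" joined by a spring relax toward the spring's rest length, with damping standing in for the stability the technique is known for:

```python
# Position-based Verlet: no explicit velocities are stored, which is a
# large part of why the method tends to be stable. One spring, 1-D.

def verlet_step(x, x_prev, accel, dt=1.0, damping=0.98):
    # x(t+dt) = x + (x - x_prev)*damping + a*dt^2
    return x + (x - x_prev) * damping + accel * dt * dt, x

def relax(x1, x2, steps=500, rest=100.0, k=0.1):
    p1, p2 = x1, x2  # previous positions; equal to current = zero velocity
    for _ in range(steps):
        stretch = (x2 - x1) - rest
        a1, a2 = k * stretch, -k * stretch  # spring acts symmetrically
        x1, p1 = verlet_step(x1, p1, a1)
        x2, p2 = verlet_step(x2, p2, a2)
    return x1, x2

x1, x2 = relax(0.0, 10.0)  # the pair converges until separation ≈ rest
```

Declaring a layout then amounts to declaring springs (between widget edges, between widgets and the window, etc.) and letting the engine settle; animation comes from watching the intermediate states.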

Physics/Constraints

I like the sound of the "laws of user interface physics"; there are probably some nice generalities down that path.

As an aside, I hacked together an AWT layout manager with Choco as a proof of concept after becoming very frustrated with SpringLayout's restriction of a single constraint set per element. Choco is way overkill, but it does work nicely. Finding a nice, natural style in Java for expressing the constraints is on the to-do list.

Animation: From Cartoons to

Animation: From Cartoons to the User Interface by Bay-Wei Chang and David Ungar
(1993)

Do you plan to make the

Do you plan to make the source available?

Layout using drag and drop

In my opinion, the best way to 'declare' the layout for a UI is by dragging and dropping elements with a mouse. See UI designers built into Visual Studio, Eclipse, Qt, etc., etc.
I think that in fact the distinction between designing a UI in an interactive editor and programming is not serving us well. I believe that most software would benefit greatly from two-dimensional representations and interactive editors rather than relying on textual encodings. We can accomplish the same tasks and represent the same structures and algorithms in ways that are easier to maintain and understand, more concise, etc.
Using a graphical design tool for building a GUI is just a no-brainer example of this.
I think that the obsession with textual languages is partly due to our general predisposition for serialized language, which itself is mainly a result of our physiological limitations (primates communicate largely by uttering sequences of sounds). Others may point to the supposedly superior abstraction ability of text, but I think that is false.

Dragging and dropping

Dragging and dropping elements is a great way to prepare static UI layouts, but as you increase the complexity of the rules and constraints for how to build a UI from data at runtime, it becomes less of a no-brainer. I also think you're overestimating the importance of 2d representations of code. People use text because it works, it's relatively easy to implement, and surface syntax isn't usually what makes problems hard.

Mostly true, but

He didn't say that he was using his language to generate dynamic layouts. Quite often these declarative UI languages are used just as much for static parts as for dynamic parts. I believe that if our GUI editors were more powerful, they could still help us to model dynamic layouts based on runtime data. And I agree that with current tools text is easier to implement, but I disagree that 'surface syntax isn't usually what makes problems hard'. The layers of abstract serialized representation that must be uncompressed, decoded, and held in our minds for each concept add significant overhead, and often these textual forms are so removed from more natural two-, three-, or four-dimensional representations that they are an order of magnitude more difficult to work with.

Sorry for being absent from

Sorry for being absent from the discussion thus far, a sudden influx of work has kind of swamped me. Lots of great suggestions here though.

As for dynamic/static UIs, I imagine you're referring to UIs which change layout and/or structure dynamically, so "dynamic" is not referring to the content. Actual layout logic in my current prototype is practically non-existent, since I'm relying on CSS for that, which is why I'm trying to find out what's out there.

Alternatively, if my prototype proves adequate, I had considered building a CSS-like layer for Windows.Forms so I could use the exact same code for web-based and desktop execution environments. The only code that changes is the program launcher.

I had been planning for some dynamism though, because I would like to transparently support AJAX updates as well. The interface a UI dialog can implement includes update operations which can happen asynchronously. I haven't really nailed this down to my satisfaction though.

I'm not sure how much background on my concrete project would be of use, but all of the suggestions seem pretty good so far, and once I have time I'm sure I'll learn a lot from them. Thanks!

Microsoft technologies

We were actually referring to UIs that are dynamically generated such as those based on some database schema and/or data.
Since you mention Windows Forms, here are a few thoughts on Microsoft technologies. There are ways to run an ASP.NET application on the desktop as well as run an intranet WPF application in the browser, so I'm not sure you need to get into a Windows Forms CSS layer (or look at AIR from Adobe). Also, I would take a close look at the newer WPF/XAML frameworks and libraries before assuming Windows Forms is the way to go. Since you want to avoid XML, there is probably already some type of declarative web library in F#.
But, once more, depending on your goals, CSS and even a nice declarative UI library may be mainly a waste of time, and you should look for graphical tools that allow you to handle presentation. Microsoft and others have technologies (far from perfect) that _can_ be used efficiently for UI layout.

Another interesting one,

Another interesting one, closer to what you're suggesting, is Adobe's Thermo project (there might be some demos from MAX).

However, for some reason, there's been a disconnect between those making IDE-friendly systems and those making fine-tuning-friendly systems, and the divide between business users and artists generally matches it. I think there is a reason. Possibly hinting at it: it took forever for designers to give up pixels on the web (and they still haven't, entirely). Similarly, despite all the wonderful timeline metaphors in Flash, most commercial Flash designers didn't really use them, and they became de-emphasized. Rich and interactive design is tough.

We were actually referring

We were actually referring to UIs that are dynamically generated such as those based on some database schema and/or data.

Ah, in that case it's exactly what I'm doing. I'm actually testing the prototype by replicating some of the functionality from a database-driven web app for one of my clients.

There are ways to run an ASP.NET on the desktop as well as run an intranet WPF application in the browser so I'm not sure you need to get into a Windows Forms CSS layer

WPF web execution is via Silverlight, so it's not exactly cross-platform, and certainly doesn't work on mobile devices or screen readers. There's something to be said for the ubiquity of HTML.

Also I would take a close look at the newer WPF/XAML frameworks and libraries before I assume Windows Forms is the way to go.

Well, I'm not assuming anything at the moment, I just know that Windows.Forms is available on Mono, but I don't think WPF is yet.

Since you want to avoid XML there is probably already some type of declarative web library in F#.

I haven't found anything. As an aside for any language designers out there, named and optional parameters certainly make many things easier, particularly declarative UI libraries. ;-)

CSS and even a nice declarative UI library are mainly a waste of time and you should look for graphical tools that allow you to handle presentation (which Microsoft and others have technologies (far from perfect) that _can_ be used in an efficient way to handle UI layout).

I've been using the MS tools for over 7 years now, and they kind of leave a bad taste in my mouth at this point. I certainly understand the practicality of many of the decisions they've made, but I think it can be done better. An expressive language like F#, or C#4 for that matter, should make single-language programs possible. By that, I mean programs written entirely in F#, by using declarative libraries for the UI in place of XML.

My prototype in C# demonstrates the UI structure can be done this way, but layout is tricky; I punted on that for now by using CSS. I'll have to read up on all the links people have posted here to see if layout can be integrated this way as well.

I've been using the MS

I've been using the MS tools for over 7 years now, and they kind of leave a bad taste in my mouth at this point. I certainly understand the practicality of many of the decisions they've made, but I think it can be done better. An expressive language like F#, or C#4 for that matter, should make single-language programs possible. By that, I mean programs written entirely in F#, by using declarative libraries for the UI in place of XML.

I agree. This is why I wrote Bling and do all my WPF programming in C#: XAML is optional, the switch between C# and XAML is extremely jarring, and staying in C# makes programs easier to understand. As far as tools that separate presentation from model go, they really haven't been shown to work. I think the Morphic/Naked Object/Anti-MVC approaches are a bit better, where model objects handle their own presentation.

My prototype in C# demonstrates the UI structure can be done this way, but layout is tricky; I punted on that for now by using CSS. I'll have to read up on all the links people have posted here to see if layout can be integrated this way as well.

Really you should take a look at Bling. Even if you are using WinForms, a similar technique could be used to implement signals over there if you could write your own simple databinding engine. Then you could express layout directly as constraints in C# using the lightweight DSL technique that Bling is based on.
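To make the suggestion concrete, here is a rough sketch (in Python, and emphatically not Bling's actual API) of what a minimal hand-rolled databinding engine could look like: a property can be bound to an expression over other properties and is recomputed whenever they change, which is enough to express layout relationships directly as constraints:

```python
# Tiny one-way databinding cell. All names here are invented; Bling's
# real DSL rides on WPF dependency properties and is far richer.

class Prop:
    def __init__(self, value=0):
        self._value = value
        self._watchers = []

    def get(self):
        return self._value

    def set(self, value):
        self._value = value
        for w in self._watchers:
            w()

    def bind(self, sources, fn):
        # recompute this property from its sources, now and on every change
        def update():
            self._value = fn(*(s.get() for s in sources))
        for s in sources:
            s._watchers.append(update)
        update()

# "b sits 10px to the right of a", written once as a constraint
a_right, b_left = Prop(100), Prop()
b_left.bind([a_right], lambda r: r + 10)

a_right.set(250)  # moving a drags b along automatically
```

With this in place, layout code stops being "compute positions in an event handler" and becomes a set of standing relationships, which is the shift the comment is describing.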

If you are really stuck on the web, you might want to consider a Javascript DSL approach (e.g., like Flapjax), but I really don't know what your goals are.

Really you should take a

Really you should take a look at Bling. Even if you are using WinForms, a similar technique could be used to implement signals over there if you could write your own simple databinding engine. Then you could express layout directly as constraints in C# using the lightweight DSL technique that Bling is based on.

Cool. Bling is coming up on the list of links provided in this thread, so I will get to it soon. I skimmed the site earlier, but didn't find a document describing its design.

If you are really stuck on the web, you might want to consider a Javascript DSL approach (e.g., like Flapjax), but I really don't know what your goals are.

I have existing software written in C# for ASP.NET, and I find it's easier to work on projects if I can immediately apply the work to them, so at the very least the library should be usable to build interfaces that work over the web. I tend to think of the web as a remote UI for thin clients, and I think it should be possible to build an abstract interface for multiple UI substrates, i.e. plain-old HTML with form submits, HTML+AJAX, and a remote WinForms/WPF UI.

The point I'm at now is something like this:

public interface IWindow {
  INode Title(string title);
}
public interface INode {
  void TextBox(string value);
  void Label(string value);
  ...
  void List<T>(IEnumerable<T> items, Func<T,INode> child);
  ...
}

It's a bit more complicated at the moment, but that's the gist of it. These interfaces can be used to describe the content of a UI dialog, but I wasn't sure how to specify the layout.

UI dialogs are essentially continuations that implement a display interface (the source at that link is out of date BTW):

interface K {
  void Show(IWindow w);
}

One of my big beefs with session-state is the lack of static typing, so this is one way to do it: any session variables are now object fields in your continuation object. It requires a bit more up-front work, since your program is now structured in CPS, but I haven't found that to be a burden at all, especially given the perfect clarity of precisely what state is needed for any dialog, and the strong typing benefits.
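As a toy illustration of the idea (in Python, so without the static-typing payoff, and with all names invented): each dialog is an object whose fields are exactly its session state, and submitting constructs the next continuation:

```python
# "Session state as continuation fields": each dialog class declares,
# via its constructor, precisely the state it needs, and each handler
# returns the next continuation object. Hypothetical example.

class ShowCart:
    def __init__(self, user, items):
        self.user, self.items = user, items  # the only session state

    def show(self):
        return f"{self.user}: {len(self.items)} item(s)"

    def on_checkout(self):
        # the next dialog receives exactly the state it needs, no more
        return Confirm(self.user, total=sum(self.items))

class Confirm:
    def __init__(self, user, total):
        self.user, self.total = user, total

    def show(self):
        return f"{self.user}, pay {self.total}?"

page = ShowCart("alice", [3, 4]).on_checkout()
```

In a statically typed language the constructor signatures make the required state checkable by the compiler, which is the "perfect clarity" being claimed above.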

Dialogs can optionally implement methods used to send "damage" instead of requiring full screen updates, so this would permit AJAX-like functionality. The selection of full-screen refreshes vs. damage updates is made by the framework based on the client's capabilities (based on Content-Type for HTTP).

A WinForms implementation would certainly be much easier since I wouldn't be working with the web's limitations, but in keeping with the principle of applicable work, this approach would be more productive for me.

Cool. Bling is coming up on

Cool. Bling is coming up on the list of links provided in this thread, so I will get to it soon. I skimmed the site earlier, but didn't find a document describing its design.

There is no design document yet, I'm still working on that (as a conference style paper of course :) ). I would just browse the source and play with the example if you have time.

One of my big beefs with session-state is the lack of static typing, so this is one way to do it: any session variables are now object fields in your continuation object. It requires a bit more up-front work, since your program is now structured in CPS, but I haven't found that to be a burden at all, especially given the perfect clarity of precisely what state is needed for any dialog, and the strong typing benefits.

I had this problem with WPF (dependency properties are statically untyped) and so I added the static typing back in the DSL (since the static types were documented, just not used by the compiler) using C# generics. You could probably do something similar, though I'm not familiar with the continuation infrastructure you are using (I don't go near web apps).

I think that the obsession

I think that the obsession with textual languages is partly due to our general predisposition for serialized language which itself is mainly a result of our physiological limitations.

So why then should software development greatly benefit from drawing lots of cartoons? I also tend to think programmers are descendants of Turing, von Neumann, and Knuth rather than of Michelangelo, Cézanne, and Picasso. I see no reason why this should change.

I doubt you can stereotype

I doubt you can stereotype programmers so easily. In my experience, there are both left and right brain programmers. Additionally, you left out Leonardo...

It's only a matter of time until someone comes out with a decent text + direct representation programming language. HyperCard already shows us that we don't need to break up our code into files; rather, we could write it behind cards. I like the idea of being able to draw when convenient and write code when convenient.

Thank you

Thank you! But I am also trying to go further and suggest that much or all of static textual code (which mainly relies on symbol order) would be more useful if instead it had the advantages of interaction and two-dimensional representation. Consider the progression from single-color text (early screens only had one color) to syntax-colored text, to IntelliSense, to outlining capabilities (collapse/expand code blocks).

See workflow editors for business processes, or knowledge representation tools like Protégé. These are all cases where the structures can be (and often are) represented with traditional textual code but are more efficiently manipulated using graphical interactive tools.

So I am also saying that just because we mainly have separate tools for specific types of programming problems doesn't mean we can't create more general tools, or create specialized ones for other types of problems. I think the key to accelerating the development of more general tools, or of many specialized ones, is to provide sufficient abstraction capabilities. And just because it is easier to implement abstraction features for textual languages doesn't mean we can't do it for graphical interactive editors.

Susceptibility

I guess I meant predisposition along the lines of this definition/example: To provide an inclination or susceptibility: a genetic trait that predisposes to the development of cancer.

Tools

I prefer to have as many tools in my toolbox as possible - I wear many hats as an application developer. I like being able to use drag and drop when editing my forms. But I also want to look at those same forms in a textual format. Depending on what I'm doing and what stage of the project I'm in, the different perspectives they give can make me more efficient. I have a whole set of tools geared towards viewing and manipulating text. If you look at the GUI-based tools, you'll find that they are just helpers to generate code.

As CTM astutely observes, for any software - especially forms - it is important to segregate those elements that are declarative from those elements that are imperative in nature. The GUI tools can help in building the declarative portion. But you still have to have the plumbing to get things done.

The common pitfall of forms-based GUI programming is that the form can become the be-all, end-all form of abstraction. The software congregates around servicing each form, and the application ends up jumping from form to form. This can lead to code that is way too form-centric.

Auckland Layout Model

While playing around with some ideas in my head I came across this which may interest you:

http://aucklandlayout.sourceforge.net/

which is almost exactly what I had in mind with slightly more awkward syntax :)

Basically the idea is to specify widget constraints and solve them using linear programming algorithms (meaning fast and incremental solutions!). Higher-level primitives such as "left of" etc. and removing the direct interface to the LP solver would make the syntax nicer, but the idea and the fact that it's implemented for mainstream languages is pretty awesome.
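To illustrate the tab-stop idea: layout positions are variables subject to linear constraints. ALM hands these (plus soft constraints) to an LP solver; this toy Python sketch handles hard equality constraints only, via naive Gaussian elimination, and all the specifics are invented for illustration:

```python
# Solve a square system of linear equality constraints over layout
# variables. A real linear-programming solver also handles inequalities
# and soft (penalty) constraints; this is the bare minimum to show the
# "positions as constrained variables" idea.

def solve(rows):
    # rows: list of (coeffs, rhs) meaning sum(c_i * x_i) = rhs
    n = len(rows)
    a = [list(c) + [r] for c, r in rows]
    for col in range(n):
        pivot = next(i for i in range(col, n) if a[i][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        a[col] = [v / a[col][col] for v in a[col]]
        for i in range(n):
            if i != col and a[i][col] != 0:
                f = a[i][col]
                a[i] = [v - f * w for v, w in zip(a[i], a[col])]
    return [row[-1] for row in a]

# Variables: x0 (left edge), x1 (splitter), x2 (right edge).
# Constraints: x0 = 0; x2 = 300; left pane twice the right pane, i.e.
#   (x1 - x0) - 2*(x2 - x1) = 0  ->  -x0 + 3*x1 - 2*x2 = 0
x0, x1, x2 = solve([
    ([1, 0, 0], 0),
    ([0, 0, 1], 300),
    ([-1, 3, -2], 0),
])
# the splitter lands at x1 = 200, giving panes of 200 and 100
```

The higher-level primitives mentioned above ("left of", etc.) would simply be sugar that emits rows like these, hiding the solver interface entirely.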

That's a really good link,

That's a really good link, and I think it's exactly what I'm looking for. Thanks!

Their latest 2008 paper is available here (it's not linked off the sourceforge page).

[Edit: oops, link fixed, thanks Jules! I swear sometimes FF copy-paste doesn't work properly...]

Ideas from CSS

You could take some ideas from proposed CSS extensions:

http://www.w3.org/TR/css3-layout/

And perhaps:

http://www.w3.org/TR/css3-grid/

Domain model as the language

A number of approaches reuse the domain model as the declarative language. This has advantages in that it removes the need to maintain duplicate definitions in both the UI language and the domain model.

  • May I humbly suggest my own implementation here
  • With comparison to others here
  • And a paper here: Kennard, R. & Steele, R. 'Application of Software Mining to Automatic User Interface Generation'. 7th International Conference on Software Methodologies, Tools and Techniques

Regards,

Richard.

In addition to good ol' Tk, there's

Shoes (Ruby)
REBOL

No one mentioned Fudgets.

Fudgets

It is UI, it is declarative and it is a little bit more than declarative UI.

It is a way to structure your reactive program.