Links Demos

Philip Wadler has a pair of Links demos up and running. One is a to-do list (source) that runs on the server but keeps state on the client via continuations; the other is an input validator (source) that is translated into JavaScript to run on the client. A sample of the latter:

<input l:name="word1" type="text" value="{word}"/>
{[ if word == "" then <font/>
   else if isdigit(word) then <font color="blue">ok</font>
   else <font color="red">error: {[word]} is not a digit as a word!</font> ]}

(Previous Links discussion on LtU)


1998 is calling

It wants its <font> tags back.

...Still on the line

... Actually, it wants this whole way of building web sites back!

Seriously, the Links project smacks of people who have never built serious dynamic web sites trying to tackle the problem. There are several flaws in their approach that come to mind:

Weaving HTML in and among the code is not a maintainable way to build sites. It is hard if not impossible to create and evolve visually well designed sites this way.

While using continuations for form actions does make connecting rendered forms with server actions easier, the implications of the approach are largely unworkable: Encoding large amounts of state into hidden form fields makes for large, non-cacheable pages with complete resend on every state transition.
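A minimal sketch of what I mean (TypeScript/Node stand-in; the names TodoState and encodeState and the field name "continuation" are all made up, and Links' real wire format differs):

import { Buffer } from "node:buffer";

// Hypothetical page state for the to-do demo.
interface TodoState { items: string[]; }

// Serialize the state into a hidden-field value (base64 over JSON here).
function encodeState(state: TodoState): string {
  return Buffer.from(JSON.stringify(state)).toString("base64");
}

// Every rendered form carries its full state, so every submit resends it.
function renderForm(state: TodoState): string {
  return `<form method="POST" action="/todo">
  <input type="hidden" name="continuation" value="${encodeState(state)}"/>
  <input type="text" name="newItem"/>
  <input type="submit" value="Add"/>
</form>`;
}

Each added item grows the hidden field, the page, and the POST body, and no page is cacheable because each one embeds its own unique state.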

Conversion from a single language into the polyglot environment of the web is interesting. However, looking at their conversion to JavaScript, it seems to work in practice only because these small examples relied mostly on some hand-translated functions (e.g., member()), and one wonders whether this could really scale. Again, including the JavaScript in the page increases size and ruins caching.
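For concreteness, here is a guess at what a hand-supplied helper like member() might look like on the JavaScript side (TypeScript sketch; the demos don't show the generated code, so the representation is an assumption):

// Hand-written runtime helper, not compiler output: list membership.
function member<T>(x: T, xs: T[]): boolean {
  for (const y of xs) {
    if (y === x) return true;
  }
  return false;
}

A compiler targeting JavaScript needs a whole runtime library of such primitives; inlining it into every page defeats caching, whereas serving it as a separate script would at least let browsers cache it.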

While I believe that language design can improve the state of web programming, I don't think the Links approach helps. It seems to have started from the solution and worked backwards from there rather than proceeding from the real issues.

Too early to tell...

[ Uh, for the emoticonally challenged: there is an understatement in the sentence below... ]

These are very preliminary results, and well, Wadler has a *cough* ok-ish track record. ;o)


Conversion from a single language into the polyglot environment of the web is interesting. [...]

Agreed, and this is somewhat what I am wondering about too. Why the -x and -j switches? Is that how it is supposed to work? I thought he was going for some universal solution.

Btw, did anyone try to reverse-engineer the encoded continuations? Can I uudecode or Base64-decode them back to Links script?

Base64

Base64 decoding gives:


E2418+h2412+B15+h5+itemsc4+lh0+B1828+h4+todof1815+h5+itemsE3+h0+A1795+A27+A19+v6+h3+xmls7+h4+htmlO1+lQ1758+P11+s7+h4+
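For anyone else who wants to poke at it, a Node/TypeScript sketch of the decoding step (the URL and the parameter name _k are invented; only the Base64 step matches what I did above):

import { Buffer } from "node:buffer";

// Hypothetical: pull the encoded continuation out of a demo URL.
const url = new URL("http://example.org/todo?_k=RTI0MTgrLi4u");
const blob = url.searchParams.get("_k") ?? "";

// Base64-decode it back to the serialized form shown above.
console.log(Buffer.from(blob, "base64").toString("latin1"));

The decoded text looks length-prefixed: "h5+items" reads naturally as a 5-character tag "items", so the format seems reconstructible.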

I will not inject fib 10000000 into Wadler's computer, I will not inject fib 10000000 into Wadler's computer, ... ;-)

Thank you!

Thank you for your restraint. We are aware that Links engenders serious security issues, but our (very preliminary!) prototype ignores these.

Hey!

Demi-gods are not supposed to read the jokes of mere mortals ;-)

So what are the real issues?

Thanks to all for the feedback!

..., I don't think the Links approach helps. It seems to have started from the solution and worked backwards from there rather than proceeding from the real issues.

I'm keen to get folk to tell us what they think "the real issues" are. (The call for participation in the Links workshop solicited just such feedback.) Please do tell!

What about ten years from now?

I am going to ignore current-day technology problems for a while, because I can, and because I assume those are always solved pragmatically within a year.

A quote by some B. Gates:

We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction.

I take the last line as a sales pitch, but then again, I guess it is the task of scientists to dream and work on the long term. Sun and Microsoft seem to have their long-term visions figured out pretty well; they both envision some 'ubiquitous' world (the late M. Weiser?) where heterogeneous machines constantly interact.

[ I think the success of C is partly because it supports most machine architectures so well. The success of OO languages like Java and C# is probably because they fit complex (often desktop-like) software environments well. ]

So this is what I am wondering about. How is software written [and by whom; maybe Links should support Chinese source code] and deployed in a large, heterogeneous (partly failing and hostile), ever-changing and interacting machine world? [Java actually seems to be the best candidate at the moment.] How is software tested/distributed? What about secure updates of components? How would a billion-client/billion-server application work? Do we need a secure code transfer protocol (SCTP)? What would be the basic commands in such a protocol? Who pays which bills ten years from now? What new sensors/actuators can you expect? Can programs steal computing power from nearby machines? ...

The list of stuff to think about is endless. For example, if in the old world I got some slightly incomprehensible error message after a bug, that was somewhat OK by me, since it was a lonely machine that couldn't connect to the outside world, and my only option was maybe to call the local vendor or support desk.

Ten years from now, if I have a bug in an MS application, I want B. Gates's phone to start ringing.

[Well, maybe not exactly that, but at least I want an error report deposited onto the lap of the software engineer who made the inexcusable mistake of putting the insect there.]

Then again, here is another quote by Gates:

640K ought to be enough for anybody.

[ Hmpf, and he makes several hundred million a year more than me? Annoying... ;-) ]

[Hmpf, my other half tells me I should start cooking now, so that's my $0.02; success]

Hmm... playing with the todo

Hmm... playing with the todo list, Philip Wadler is shipping around large, largely repetitive environments with each continuation. And with multiple continuations per page, my todo list stops working when I add a fourth item to it. Overall, the demo is rather underwhelming.

The JavaScript example has definite issues with Safari. JavaScript is quite painful to support across differing browsers though. What browser are other people using? (What browser is Philip using?)

Great ideas. I read the Links manifesto and felt that Wadler was thinking in the right direction, and seeing a demo is mighty encouraging. Implementation issues are quite understandable. Really, the modern browser is among the most painful programming interfaces conceived in the last thirty years.

man, this is awful

I know it's still in the very early stages, but it already strikes me as odd. Things I don't like already:

* Syntax is simply a mix of JavaScript, HTML 4.01, and a fun keyword for function definition that is no fun;

* I thought the goal was to use one language only to describe your whole application. But it's still HTML and some script cobbled together!

* Today, tools extract only the relevant pieces from an XML or HTML document in order to fill them with data. This way, it keeps the Model clearly separated from the View, which is good, isn't it? This one looks just like the PHP way of doing things...

I hope they're just toying with ideas to get feedback and will make it better...

More feedback, please!

Syntax is simply a mix of JavaScript, HTML 4.01, and a fun keyword for function definition that is no fun;

What would you prefer? (Everyone has strong, and different, opinions about syntax. I'm wondering about using some technique similar to the bug parade for Java to let people vote on syntax ...)

I thought the goal was to use one language only to describe your whole application. But it's still HTML and some script cobbled together!

Ultimately, you need to generate a web page in HTML. We could wrap this in our own library (as Oz does), but I don't see how that would help web developers. Most of them know HTML anyway; what is gained by replacing familiar HTML with something different? (NB, this isn't a rhetorical question; if you think there is a gain, please explain what it is.)

Today, tools extract only the relevant pieces from an XML or HTML document in order to fill them with data. This way, it keeps the Model clearly separated from the View, which is good, isn't it? This one looks just like the PHP way of doing things...

MAWL and Bigwig both do a good job of separating the web page from the code that generates it. (Schwartzbach and Møller have a lovely paper on how to do this.) I think Links already supports these separation techniques, but I didn't think using them would contribute to the clarity of a tiny example. I'll try to put together something to demonstrate that HTML and logic can be kept in separate files and maintained separately. Or, better yet, do you have a favorite small-ish application where such separation is a clear benefit? We're always looking for good problems to code up in Links.
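In the meantime, here is a generic sketch of the template-with-gaps idea (TypeScript rather than Links, and the file name and GAP convention are invented), just to show the shape: the HTML lives in its own file, and the program only fills named holes.

import { readFileSync } from "node:fs";

// todo.html is maintained separately by designers and contains
// placeholders such as <!--GAP:items--> where data gets plugged in.
const template = readFileSync("todo.html", "utf8");

// Replace each named gap with the string supplied for it.
function plug(tpl: string, gaps: Record<string, string>): string {
  return tpl.replace(/<!--GAP:(\w+)-->/g, (_, name) => gaps[name] ?? "");
}

const page = plug(template, {
  items: ["buy milk", "fix bug"].map(i => `<li>${i}</li>`).join(""),
});

Designers edit todo.html with their usual tools; programmers never touch the markup, which is the separation MAWL and Bigwig enforce.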

I hope they're just toying with ideas to get feedback and will make it better...

Our tiny examples are, indeed, toys. I'm delighted that they've already generated feedback!

HTML is not expressive enough

Ultimately, you need to generate a web page in HTML. We could wrap this in our own library (as Oz does), but I don't see how that would help web developers. Most of them know HTML anyway; what is gained by replacing familiar HTML with something different? (NB, this isn't a rhetorical question; if you think there is a gain, please explain what it is.)

A lot can be gained by using a slightly more expressive language than HTML. The main problem with HTML is that it is not extensible. To see why this is a problem, consider conventional documents. In this setting the equivalent of HTML would be LaTeX without user-defined macros, or MS Word without styles; it should be clear to anyone familiar with these systems why not being able to extend them with such abstractions would be crippling. Similarly, in order to effectively build and maintain any non-trivial website, some kind of abstraction over HTML is essential, even if the site is entirely static. The abstraction mechanism need not be complex: templates (as in JWIG) or macros (as in LaTeX) are sufficient. Let's assume we have a simple macro language (I claim that macros and templates are essentially equivalent). Anyone working on the structure of a site should work with the macro language rather than directly with raw HTML.

(An alternative approach for abstracting over HTML would be to use XML, XML Schema and XSLT, but this seems a bit clunky to me, and I don't think it would integrate well with Links.)

One can allow raw HTML syntax to be mixed with the macro language, but there are benefits to insisting that web developers only use macros:

  • It is easy to change the output to use a different version of HTML, a different version of XML, or even a different language altogether, just by changing a few low-level macro definitions.
  • HTML syntax mixed with some other language is ugly, hard to read, and non-trivial to parse.

Whilst working for Red Snapper, a small web agency, I helped to develop a CMS which uses such a macro language. They have been using it successfully to build and manage medium to large websites since 1996. The web developers who work there love it despite the fact that they had to learn a new language (albeit a very simple one).

I suggest that Links should use functions to abstract over HTML/XML and that it is unnecessary (perhaps even undesirable) to include support for embedded HTML/XML. Anyone working on the structure of the website would have to learn some subset of Links, but this need not be any more complex than the simple macro language mentioned above (any HTML programmer should be able to pick it up easily).
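To make the suggestion concrete, here is a tiny sketch of the function-only style (TypeScript standing in for Links; all names are invented):

// Low-level "macros": the only place concrete HTML syntax appears.
type Html = string;
const el = (tag: string, body: Html): Html => `<${tag}>${body}</${tag}>`;

// Site-specific vocabulary, built from the low-level layer.
const heading = (text: string): Html => el("h1", text);
const para = (text: string): Html => el("p", text);
const page = (title: string, body: Html): Html =>
  el("html", el("head", el("title", title)) + el("body", body));

// A web developer writes only this, never raw angle brackets:
const home = page("Links demos", heading("Demos") + para("A to-do list."));

Retargeting the output to a different version of HTML, or to a different language entirely, then means redefining el and a few friends, leaving every page definition untouched.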

Links

I think Links is a breath of fresh air in PL research. Admittedly, it's 10 years behind cutting-edge Web technology (e.g., Ruby on Rails, AJAX) but it's a step in the right direction.

It seems that PL research, as it stands, largely ignores issues that are very hot in the mainstream (e.g., piles of papers on Haskell and fancy type systems, while Python, Perl, and Ruby are wreaking havoc on the internet, unchecked, and without help) but Links goes against that trend. I think the research community at large needs to realize that what's happening now _is_ relevant. The whole "what we do will prolly be of interest to the masses in 10-20 years" justification for things is bogus, especially since it's had a crappy track record.

Keep it up.