InterState: A Language and Environment for Expressing Interface Behavior

An interesting paper by Oney, Myers, and Brandt in this year's UIST. Abstract:

InterState is a new programming language and environment that addresses the challenges of writing and reusing user interface code. InterState represents interactive behaviors clearly and concisely using a combination of novel forms of state machines and constraints. It also introduces new language features that allow programmers to easily modularize and reuse behaviors. InterState uses a new visual notation that allows programmers to better understand and navigate their code. InterState also includes a live editor that immediately updates the running application in response to changes in the editor and vice versa to help programmers understand the state of their program. Finally, InterState can interface with code and widgets written in other languages, for example to create a user interface in InterState that communicates with a database. We evaluated the understandability of InterState’s programming primitives in a comparative laboratory study. We found that participants were twice as fast at understanding and modifying GUI components when they were implemented with InterState than when they were implemented in a conventional textual event-callback style. We evaluated InterState’s scalability with a series of benchmarks and example applications and found that it can scale to implement complex behaviors involving thousands of objects and constraints.
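
InterState's own programs use a visual notation, so no attempt is made to reproduce its syntax here. As a rough, hedged Python sketch of the idea the abstract describes (all names invented for this post): a component is a set of states, each carrying declarative per-property constraints, with events moving between states, instead of scattering property updates across callbacks.

# Illustrative sketch only -- NOT InterState syntax, just a rough Python
# rendering of "states with per-state constraints". All names are invented.
class Component:
    def __init__(self):
        self.states = {}          # state name -> dict of property constraints
        self.transitions = {}     # (state, event) -> next state
        self.current = None
        self.props = {}

    def add_state(self, name, **constraints):
        self.states[name] = constraints       # e.g. color=lambda c: "blue"
        if self.current is None:
            self.enter(name)

    def add_transition(self, src, event, dst):
        self.transitions[(src, event)] = dst

    def enter(self, name):
        self.current = name
        # Re-evaluate this state's constraints; a real constraint system
        # keeps them continuously enforced, not just on entry.
        for prop, constraint in self.states[name].items():
            self.props[prop] = constraint(self)

    def fire(self, event):
        dst = self.transitions.get((self.current, event))
        if dst is not None:
            self.enter(dst)

# A button whose colour is tied to its state, rather than being mutated
# from separate mouseover/mouseout callbacks.
button = Component()
button.add_state("idle", color=lambda c: "gray")
button.add_state("hover", color=lambda c: "blue")
button.add_transition("idle", "mouse_over", "hover")
button.add_transition("hover", "mouse_out", "idle")

button.fire("mouse_over")
print(button.current, button.props["color"])   # hover blue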

Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU)

We are having another PLATEAU workshop at SPLASH 2014. We have a new category for "Hypotheses Papers" and thought this would be particularly appealing to the LtU community.

http://2014.splashcon.org/track/plateau2014

Programming languages exist to enable programmers to develop software effectively. But how efficiently programmers can write software depends on the usability of the languages and tools that they develop with. The aim of this workshop is to discuss methods, metrics and techniques for evaluating the usability of languages and language tools. The supposed benefits of such languages and tools cover a large space, including making programs easier to read, write, and maintain; allowing programmers to write more flexible and powerful programs; and restricting programs to make them more safe and secure.

PLATEAU gathers the intersection of researchers in the programming language, programming tool, and human-computer interaction communities to share their research and discuss the future of evaluation and usability of programming languages and tools.

Some particular areas of interest are:
- empirical studies of programming languages
- methodologies and philosophies behind language and tool evaluation
- software design metrics and their relations to the underlying language
- user studies of language features and software engineering tools
- visual techniques for understanding programming languages
- critical comparisons of programming paradigms
- tools to support evaluating programming languages
- psychology of programming

Submission Details

PLATEAU encourages submissions of three types of papers:

Research and position papers: We encourage papers that describe work-in-progress or recently completed work based on the themes and goals of the workshop or related topics, report on experiences gained, question accepted wisdom, raise challenging open problems, or propose speculative new approaches. We will accept two types of papers: research papers up to 8 pages in length; and position papers up to 2 pages in length.

Hypotheses papers: Hypotheses papers explicitly identify beliefs of the research community or software industry about how a programming language, programming language feature, or programming language tool affects programming practice. Hypotheses can be collected from mailing lists, blog posts, paper introductions, developer forums, or interviews. Papers should clearly document the source(s) of each hypothesis and discuss the importance, use, and relevance of the hypotheses on research or practice. Papers may also, but are not required to, review evidence for or against the hypotheses identified. Hypotheses papers can be up to 4 pages in length.

Papers will be published in the ACM Digital Library at the authors’ discretion.

Important Dates

Workshop paper submission due - 1 August, 2014
Notification to authors - 22 August, 2014
Early registration deadline - 19 September, 2014

Keynote

Josh Bloch, former Chief Java Architect at Google and Distinguished Engineer at Sun Microsystems.

Interactive scientific computing; of pythonic parts and goldilocks languages

Graydon Hoare has an excellent series of (two) blog posts about programming languages for interactive scientific computing.
technicalities: interactive scientific computing #1 of 2, pythonic parts
technicalities: interactive scientific computing #2 of 2, goldilocks languages

These posts explain and contrast two approaches to scientific computing: Python together with its ecosystem (SciPy/SymPy/NumPy, IPython, and Sage) on one side, and Julia on the other, presenting them as the products of two different design traditions, one (Python) following Ousterhout's Dichotomy of a convenient scripting language layered on top of a fast systems language, the other rejecting it (in the tradition of Lisp/Dylan and ML) in favor of a single general-purpose language.

I don't necessarily buy the whole argument, but the posts are a good read, and have some rather insightful comments about programming language use and design.

Quotes from the first post:

There is a further split in scientific computing worth noting, though I won't delve too deep into it here; I'll return to it in the second post on Julia. There is a division between "numerical" and "symbolic" scientific systems. The difference has to do with whether the tool is specialized to working with definite (numerical) data, or indefinite (symbolic) expressions, and it turns out that this split has given rise to quite radically different programming languages at the interaction layer of such tools, over the course of computing history. The symbolic systems typically (though not always) have much better-engineered languages. For reasons we'll get to in the next post.

[..]

I think these systems are a big deal because, at least in the category of tools that accept Ousterhout's Dichotomy, they seem to be about as good a set of hybrid systems as we've managed to get so far. The Python language is very human-friendly, the systems-level languages and libraries that it binds to are well enough supported to provide adequate speed for many tasks, the environments seem as rich as any interactive scientific computing systems to date, and (crucially) they're free, open source, universally available, easily shared and publication-friendly. So I'm enjoying them, and somewhat hopeful that they take over this space.
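
The quote's claim, that the human-friendly language gets adequate speed by binding to well-supported systems-level libraries, is Ousterhout's Dichotomy in action. As a hedged illustration (assuming a standard NumPy install; timings are machine-dependent and indicative only), the same reduction can be written as a pure-Python loop or as a single call into NumPy's compiled kernel:

# Ousterhout's Dichotomy in the SciPy/NumPy world: the scripting language
# expresses the intent, while the heavy lifting is delegated to compiled
# systems-level code underneath.
import time
import numpy as np

n = 5_000_000
xs = np.random.rand(n)

# Pure-Python loop: every addition goes through the interpreter.
t0 = time.perf_counter()
total = 0.0
for x in xs:
    total += x
t1 = time.perf_counter()

# NumPy reduction: one call into a compiled C kernel.
t2 = time.perf_counter()
total_np = xs.sum()
t3 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f}s  numpy sum: {t3 - t2:.3f}s")
print(f"totals: {total:.6f} vs {float(total_np):.6f}")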

Quotes from the second:

the interesting history here is that in the process of implementing formal reasoning tools that manipulate symbolic expressions, researchers working on logical frameworks -- i.e. with background in mathematical logic -- have had a tendency to produce implementation languages along the way that are very ... "tidy". Tidy in a way that befits a mathematical logician: orthogonal, minimal, utterly clear and unambiguous, defined more in terms of mathematical logic than machine concepts. Much clearer than other languages at the time, and much more amenable to reasoning about. The original manual for the Logic Theory Machine and IPL (1956) makes it clear that the authors were deeply concerned that nothing sneak in to their implementation language that was some silly artifact of a machine; they wanted a language that they could hand-simulate the reasoning steps of, that they could be certain of the meaning of their programs. They were, after all, translating Russell and Whitehead into mechanical form!

[..]

In fact, the first couple generations of "web languages" were abysmally inefficient. Indirect-threaded bytecode interpreters were the fast case: many were just AST-walking interpreters. PHP initially implemented its loops by fseek() in the source code. It's a testament to the amount of effort that had to go into building the other parts of the web -- the protocols, security, naming, linking and information-organization aspects -- that the programming languages underlying it all could be pretty much anything, technology-wise, so long as they were sufficiently web-friendly.

Of course, performance always eventually returns to consideration -- computers are about speed, fundamentally -- and the more-dynamic nature of many of the web languages eventually meant (re)deployment of many of the old performance-enhancing tricks of the Lisp and Smalltalk family, in order to speed up later generations of the web languages: generational GC, JITs, runtime type analysis and specialization, and polymorphic inline caching in particular. None of these were "new" techniques; but it was new for industry to be willing to rely completely on languages that benefit, or even require, such techniques in the first place.

[..]

Julia, like Dylan and Lisp before it, is a Goldilocks language. Done by a bunch of Lisp hackers who seriously know what they're doing.

It is trying to span the entire spectrum of its target users' needs, from numerical inner loops to glue-language scripting to dynamic code generation and reflection. And it's doing a very credible job at it. Its designers have produced a language that seems to be a strict improvement on Dylan, which was itself excellent. Julia's multimethods are type-parametric. It ships with really good multi-language FFIs, green coroutines and integrated package management. Its codegen is LLVM-MCJIT, which is as good as it gets these days.
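
Multimethods (multiple dispatch) select an implementation from the runtime types of all arguments, not just a single receiver; Julia's are additionally parametric over types. Python has no built-in multiple dispatch, but a minimal, hedged sketch of the bare dispatch idea (not Julia's semantics; subtyping, ambiguity resolution and type parameters are all ignored) looks like this:

# A deliberately minimal multiple-dispatch registry, for illustration only.
# Real multimethod systems (Julia, Dylan, CLOS) also handle subtyping,
# ambiguity and parametric types; this sketch dispatches on exact types.
_methods = {}

def defmethod(*types):
    def register(fn):
        _methods[(fn.__name__, types)] = fn   # each version is kept in the registry
        return fn
    return register

def call(name, *args):
    fn = _methods.get((name, tuple(type(a) for a in args)))
    if fn is None:
        raise TypeError(f"no method {name} for these argument types")
    return fn(*args)

@defmethod(int, int)
def combine(a, b):
    return a + b

@defmethod(str, str)
def combine(a, b):
    return a + " " + b

print(call("combine", 1, 2))                 # 3
print(call("combine", "multi", "methods"))   # multi methods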

2014 APL Programming Competition is Open

The sixth annual International APL Problem Solving Competition is now live!

Dyalog Ltd invites students worldwide to put their programming and problem-solving skills to the test by using any APL system to develop solutions to a set of ten problems. This is a contest for people who love a challenge and learning new things for fun, with the added bonus that you can win one of 43 cash prizes totalling $8,500, including a grand prize of $2,500 and a trip to Eastbourne in the U.K. to attend the annual Dyalog Ltd user meeting in September 2014.

For the rules and eligibility criteria and to enter the competition, go to http://www.dyalogaplcompetition.com/.

If you have friends who love a challenge and learning new things for fun, or you know students who might be interested in participating, then please recommend this contest to them.

The deadline for submitting solutions is 6 August 2014. Winners will be announced on 18 August 2014.

Good luck and have fun!

Apple Introduces Swift

Apple today announced Swift, a new programming language for the next versions of Mac OS X and iOS.

The Language Guide has more details about the potpourri of language features.

Type soundness and race freedom for Mezzo

Type soundness and race freedom for Mezzo,
Thibaut Balabonski, François Pottier, and Jonathan Protzenko,
2014

Full paper
Presentation slides

The programming language Mezzo is equipped with a rich type system that controls aliasing and access to mutable memory. We incorporate shared-memory concurrency into Mezzo and present a modular formalization of its core type system, in the form of a concurrent λ-calculus, which we extend with references and locks. We prove that well-typed programs do not go wrong and are data-race free. Our definitions and proofs are machine-checked.

The Mezzo programming language has been mentioned recently on LtU. The article above is however not so much about the practice of Mezzo or justification of its design choices (for this, see Programming with Permissions in Mezzo, François Pottier and Jonathan Protzenko, 2013), but a presentation of its soundness proof.

I think this paper is complementary to more practice-oriented ones, and remarkable for at least two reasons:

  • It is remarkably simple for a complete soundness (and race-freedom) proof of a programming language with higher-order functions, mutable state, strong update, linear types, and dynamic borrowing. This is one more confirmation of the simplifying effect of mechanized theorem proving.
  • It is the first soundness proof of a programming language that is strongly inspired by Separation Logic. (Concurrent) Separation Logic has been a revolution in the field of programming logics, but it had until now not been part of a full-fledged language design, and this example is bound to be followed by many others. I expect the structure of this proof to be reused many times in the future.

Addressing Misconceptions About Code with Always-On Programming Visualizations

A CHI 2014 paper by Tom Lieber et al.; abstract:

We present Theseus, an IDE extension that visualizes run-time behavior within a JavaScript code editor. By displaying real-time information about how code actually behaves during execution, Theseus proactively addresses misconceptions by drawing attention to similarities and differences between the programmer's idea of what code does and what it actually does. To understand how programmers would respond to this kind of an always-on visualization, we ran a lab study with graduate students, and interviewed 9 professional programmers who were asked to use Theseus in their day-to-day work. We found that users quickly adopted strategies that are unique to always-on, real-time visualizations, and used the additional information to guide their navigation through their code.

Edit: up-to-date paper now linked in the post.

Fifty Years of BASIC, the Programming Language That Made Computers Personal


A very comprehensive history of BASIC from Time magazine.

Invented by John G. Kemeny and Thomas E. Kurtz of Dartmouth College in Hanover, New Hampshire, BASIC was first successfully used to run programs on the school’s General Electric computer system 50 years ago this week–at 4 a.m. on May 1, 1964, to be precise.

Edit: Dartmouth is celebrating BASIC at 50.

How I Came to Write D

Walter Bright recounts how he came to write D

The path that led Walter Bright to write a language, now among the top 20 most used, began with curiosity — and an insult.

LtU now supports MathJax

LtU now supports MathJax, which allows the use of TeX markup in posts and comments. Note that only TeX/LaTeX markup is currently supported - the alternate MathML format conflicts with the blog's HTML sanitization.

This enhancement is dedicated to neelk, who most recently suggested this feature just over a year ago.

When I went searching for a bit of Mathjax-compatible LaTeX that was relevant to programming languages, the first good example Google found for me was on Neel's blog:

$$
\newcommand{\val}[1]{\mathsf{val}\,{#1}}
\newcommand{\rule}[2]{\frac{\array{#1}}{#2}}
\newcommand{\judge}[3]{{#1} \vdash {#2} : {#3}}
\newcommand{\letv}[3]{\mathsf{let}\;{#1} = {#2}\;\mathsf{in}\;{#3}}
\begin{array}{ll}
\rule{\judge{\Gamma}{e}{A}}
{\judge{\Gamma}{\val{e}}{T(A)}}
&
\rule{\judge{\Gamma}{e}{T(A)} \qquad
\judge{\Gamma, x:A}{e'}{T(C)}}
{\judge{\Gamma}{\letv{\val{x}}{e}{e'}}{T(C)}}
\end{array}
$$

Copy-pasting the above and making it work immediately made it clear that we're going to need a package of useful macros. But it's a start. Feedback welcome.

Note that MathJax rendering is entirely client-side - if you don't see a well-formatted formula above, check the MathJax browser compatibility page and their FAQ.