Cambridge Course on "Usability of Programming Languages"

From the syllabus of the Cambridge course on Usability of Programming Languages

Compiler construction is one of the basic skills of all computer scientists, and thousands of new programming, scripting and customisation languages are created every year. Yet very few of these succeed in the market, or are well regarded by their users. This course addresses the research questions underlying the success of new programmable tools. A programming language is essentially a means of communicating between humans and computers. Traditional computer science research has studied the machine end of the communications link at great length, but there is a shortage of knowledge and research methods for understanding the human end of the link. This course provides practical research skills necessary to make advances in this essential field. The skills acquired will also be valuable for students intending to pursue research in advanced HCI, or designing evaluation studies as a part of their MPhil research project.

Is this kind of HCI based research going to lead to better languages? Or more regurgitations of languages people are already comfortable with?


A programming language is essentially a means of communicating between humans and computers.
In practice, this is completely false. A programming language is a means of communicating between humans and humans. Once one understands this, many facts about reality (facts that many of us on this forum probably find trivial) begin to make sense. For example, the vast majority of flame wars among the general public concerning programming languages have to do with issues such as white space, parentheses, curly braces, begin/end blocks, and the syntax of type, function, and local variable declarations. People have innate preferences on these matters, or preferences based purely on whatever they were exposed to first. It would be interesting to study the nature of these preferences (as well as more substantial preferences involving the nature of types, defaults, higher-level control constructs, and modules), just as it is interesting to study the nature of human preferences in art, music, beer, cheese, and politics. But I don't know whether anyone is studying Human-Human Interaction at this broad level in the context of the computing world. If anyone has pointers to existing research in this vein, I'd love to see them.

Human to Human

I agree that human communication is an important aspect of PL usability. Most code we use comes from other people. And we are often strangers to our own code after several months.

That said, what distinguishes PL from natural language is suitability for communication between humans and computers.

But regarding the human communication aspects, a couple of resources I've found useful are:

Not false

What makes computer languages different from English or Esperanto? If the subject were only communication between humans, why would we need new special-purpose languages? At most, some specific subset of mathematical notation would suffice. And flame wars over white space or comment styles would be irrelevant (even more so than they are now).

To the extent programming languages are about communicating to other people, they are about communicating to other people what we are trying to communicate to the computer. The "true meaning" of a program is what is communicated to the computer, i.e. what the computer does when the program is run.

different from natural languages - but for humans

What makes computer languages different from English or Esperanto?

I think the difference is mainly in the message - the content used for communication. Gregory Chaitin said that a program is a frozen thought (in Meta Math: the quest for Omega). Extending it a bit, programming languages allow humans to communicate frozen thoughts with other humans. It is the job of the compiler to inform the computer about what the humans are thinking about.

A program is a frozen thought

So is an equation. So is a sonnet.

The interesting question is: what thought is being frozen/expressed? The thought a program expresses is what we want the computer to do. In this sense, programming languages must serve two masters: human-to-human communication is one, but human-to-computer communication is the second. Outside of explicitly comedic situations such as the IOCCC, we want the two expressions to be identical: both the human reader and the computer need to perceive the same frozen thought, as it were. What we think we're telling the computer to do and what the computer thinks it's being told to do need to be congruent.

This need to serve two masters touches every single aspect of programming languages, starting with severely limiting the domain of discourse. "My love is like a red, red rose" may be deeply meaningful to a lover, but not so much to a computer. I note that programming languages tend to be very bad for writing sonnets (worse even than German :-) ).

It's especially illuminating, I think, to compare programming languages to their nearest human-to-human-only equivalent: mathematical notation. There is a lot of similarity between programming languages and mathematical notation: a need for rigor, ease of expressing algorithms, etc. But the differences are even more interesting. For instance, mathematical notation lacks the concept of I/O, features a very loose type system, has a very poorly defined cost model, and so on. All of these rely on the recipient of the communication being able to "just figure it out", because the recipient is always a human being.

I'm not trying to downplay the importance of the human-to-human aspect of programming languages. But to declare that the human element is the only factor that needs consideration in programming languages is just false.

The nearest human relative

The nearest human relative of PL is not mathematical notation. Rather, I would go with legalese (the language of law and contracts), recipes, textbooks, instruction manuals, and so on. There might be some math notation to guide people ("add 3 cups of sugar after stirring"), but no person can be instructed by mathematical notation alone!

The fact that many think mathematical notation is the best analogy for a PL has done lots of damage over the history of our field. If we realized programming was more about directing and managing resources, we could be in better shape.

If you think instructing a human being to do something is easier than instructing a computer to do something, I would respectfully disagree. Humans can interpret intent and fill in knowledge gaps, but they are also more likely to screw things up and blatantly ignore necessary instructions for whatever reason.

mathematical notation is not the best analogy?

The fact that many think mathematical notation is the best analogy for a PL has done lots of damage over the history of our field. If we realized programming was more about directing and managing resources, we could be in better shape.

Could you kindly explain it a little more? What do you mean by "resources" here - the RAM and such?

Human resources, data, user

Human resources, data, user interface, and so on... I wasn't referring to low-level exhaustible resources like RAM, disk, or CPU.

The fact that many think

The fact that many think mathematical notation is the best analogy for a PL has done lots of damage over the history of our field. If we realized programming was more about directing and managing resources, we could be in better shape.

I wouldn't say it's "more" about resources, I'd say it's "also" about resources. I also think it's a mistake to consider resource management as lacking mathematical structure, though requiring a rigorous mathematical structure from the get-go can sometimes impede exploration of the problem space. Any solution that is found will have important structure, and formalizing it can simplify it even further.

Yes, there is some

Yes, there is some mathematical rigor to anything we do. But explaining a process as a pure mathematical equation? If I am a construction worker working off an architect's plans, math is useful, and reusing math is even more useful, but most of what I'm doing is following and interpreting a plan. We had PL-like languages before computers, and they didn't resemble equations.

I'm having a hard time

I'm having a hard time imagining how this would impact language design. Can you give an example of how a resource-management-based design would influence language design vs. a mathematics-based design?

A program instructs a

A program instructs a computer what to do, to accomplish some task, to implement some process. This is why imperative programming remains popular: not because people don't get functional programming, but because most of what they do with a computer does not benefit very much from functional abstractions (better programmers will use FP when appropriate, but it doesn't dominate). The ability to instruct definitely impacts language success, and as a side effect should impact language design.

Process just doesn't reduce to math; i.e., it is quite ridiculous to ponder what the mathematical formula is for Kung Pao Chicken.

Math is just precise, or

Math is just precise, or "more precise than usual" thinking. That applies to processes just as it applies to functions. The difference between processes and functions is superficial. A process is just a function from one state to the next. But perhaps you mean something else by process?

I agree though that not every problem can be feasibly (let alone profitably) reduced to math. Sometimes other kinds of reasoning works better or is the only thing you have.
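The claim above, that a process is just a function from one state to the next, can be sketched in a few lines of Python (an illustrative toy; the bank-account steps are my own example, not from the thread):

```python
# Sketch: each step of a "process" is a function from state to state,
# and running the process is just function composition in order.

def deposit(amount):
    # Returns a step: balance -> balance
    return lambda balance: balance + amount

def withdraw(amount):
    return lambda balance: balance - amount

def run(steps, initial_state):
    # Thread the state through each step in sequence.
    state = initial_state
    for step in steps:
        state = step(state)
    return state

final = run([deposit(100), withdraw(30)], 0)
print(final)  # 70
```

Whether this reduction is illuminating or merely superficial is exactly the disagreement in the thread.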

If I take your view, the

If I take your view, the whole history of human discourse is somehow mathematical; after all, the Bible is merely a bunch of state changes. So?

That I can reason about state without formalisms makes me think they aren't the same, and any pure/logical encoding seems overly complex and unnecessary. Hence we still use imperative languages: they save us from having to understand category theory to do I/O. For understanding I/O at a deeper level, the formalisms are great, but most of us don't need that.

Sometimes I think the community focuses on formalisms because it is just easier than the harder problem of usability. Dumping some elegant theory into a paper is a good way to get published; the alternative is measuring something empirically for the impl crowd. So we put our effort there, and lots of real, pressing problems don't get touched. That this thread has shifted from PL/HCI to the virtues of math and FP is evidence of this phenomenon.

I don't understand how you

I don't understand how you jump to the conclusion that the whole history of human discourse is mathematical, and neither do I understand what you mean by that.

You can reason about state with or without formalisms (e.g. Hoare triples). You can reason about functions with or without formalisms. This is not a property of functions vs processes; it's just formal vs informal reasoning and it applies to both functions and processes.
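Since Hoare triples came up: for readers who haven't seen the notation, a triple {P} C {Q} asserts that if precondition P holds before command C runs, then postcondition Q holds afterwards. A textbook instance, from the assignment axiom:

```latex
\{\, x = n \,\} \quad x := x + 1 \quad \{\, x = n + 1 \,\}
```

The informal version of the same reasoning ("x goes up by one") carries the same content; the formalism just makes the pre- and postconditions explicit.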

I do very much agree with your last paragraph.

Sometimes I think the

Sometimes I think the community focuses on formalisms because it is just easier than the harder problem of usability. Dumping some elegant theory into a paper is a good way to get published, the alternative is measuring something empirically for the impl crowd.

If you can't build a formal model of a system, do you really understand it?

I'd say no, because a formal model will provide depth and breadth that informal understanding can't easily replicate. Empirical research to find correlations is definitely important, though, to guide us to what needs formalization, just as it does in other sciences, e.g. correlations between semantics and bug count, or between semantics and maintainability.

I completely agree with this

I completely agree with this, with two caveats. First, the formal model is not always obtainable, and the abstracted formal model is often not useful. Second, you can't really jump from ignorance to complete understanding in one leap. There is a lot of time to be spent in the stage where you have no idea how to construct the formal model yet; do you just freeze up until you are somehow enlightened or do you keep plugging away with the hope that enlightenment will come to you as you understand more about the problem and solution space?

I had this experience with Jiazzi: I was able to come up with some beautiful formalisms that really described the tension between inheritance and mixin modularity. However, there was a lot of work getting there during which the focus wasn't on formalisms.

do you just freeze up until

do you just freeze up until you are somehow enlightened or do you keep plugging away with the hope that enlightenment will come to you as you understand more about the problem and solution space?

I mostly agree with your caveats, but in the above it sounds like the implicit assumption is that approaching a problem with formalization in mind requires "freezing up" on every problem you don't immediately know how to solve. I would say instead that exploratory phases are just as applicable in math and other formalisms as they are to programming. I've also found that solving the formalism problem tends to solve most of the problems at the programming level, which I imagine is the principle Coq uses to extract programs from its proofs.

The problem is that programs written in the more rigorous formalisms (Agda, Coq, etc.) tend to be too dense for mere mortals to parse. There is definitely interesting HCI work needed to figure out what sorts of formalisms are truly helpful to programmers, and which ones hinder us. The hindrances can perhaps be offloaded to automated tools, like search or constraint techniques. Today's languages and programming environments will look like BASIC and Cobol to programmers 40 years from now.

The problem is that

The problem is that programs written in the more rigorous formalisms (Agda, Coq, etc.) tend to be too dense for mere mortals to parse. There is definitely interesting HCI work needed to figure out what sorts of formalisms are truly helpful to programmers, and which ones hinder us.

The problem with Coq, as I understand it, is that it verifies your assumptions but doesn't help you enough to form them; you still have to do too much of the proof process in your head. It's all about navigation, really: can a PL help programmers find the solution they need? That we now call them "proof assistants" rather than "proof verifiers" is a great step in the right direction, but I guess more needs to be done. Likewise, compilers used to be just for generating machine code and saying yes or no to type errors; then they actually listed the errors out; then they listed the errors in a comprehensible way (barring C++, of course); and now they actually provide you with IntelliSense/autocomplete. Hopefully this trend continues.

In my current project, I battle against density every day. Realizing that 10 choices are better than a 100 is something you don't really learn in PL.

Today's languages and programming environments will look like BASIC and Cobol to programmers 40 years from now.

One would hope we could accelerate that to 10 or 20 years.

One would hope we could

One would hope we could accelerate that to 10 or 20 years.

I'm hopeful, but I sincerely doubt such a short timeline for the kind of upheaval to programming that we're discussing. Windows XP came out 10 years ago and it still has something like 32% market share. Innovation is rapid, but adoption is slow. I think C# and Java will still be around in 10 years, albeit with tooling like the CodeContracts unit-test generator becoming more common.

Adoption is different from

Adoption is different from actually creating these languages. I haven't seen any new languages yet that are radically different from the past in the direction of usability. Let's start small: a usable language with some users. It doesn't even need to be very mainstream; it could have a niche user base on the scale of Haskell or Clojure.

Adoption is different from

Adoption is different from actually creating these languages.

I was implicitly assuming that we won't know if such a language is actually good "in the large" until it really gets used, which means adoption is critical. We can have all sorts of reasons for believing something is good, including empirical studies, but only contact with reality can really ferret out invalid hidden assumptions that may have been made.

I haven't even seen such a

I haven't even seen such a language that I believe is better in usability whether it has been validated empirically or via adoption. The first step is to actually design/prototype these languages.

I don't think the community

I don't think the community focuses on formalism because it is "easier" than usability; it is more likely that the purity and challenge of formalism is its own draw. Some frankly don't care about making tools useful to people but would rather discover some theorem in some obscure branch of logic that only their friends care about. For some, PL theory is just a branch of mathematics, as useful to programming as leptons are to farming, the way non-Euclidean geometry was interesting to 19th-century mathematicians long before it was accidentally discovered to be useful. In my darker moments I wonder whether some CS theorists would have been happier living back then.

I agree with that. But then

I agree with that. But then I would also say, for those people, that dealing with formalisms is not just more interesting but also easier, in the sense that this is what they are familiar and comfortable with. Regardless, my point is that we need a more diverse community to push things forward.

Agreed. One problem is that

Agreed. One problem is that the program committees of PL conferences often include such theory people, and they sometimes torpedo papers that lack "rigor" yet may have interesting ideas.

Definitely, hence the

Definitely; hence the importance of conferences like Onward. Papers often obtain rigor by being painfully incremental; the selection process itself discourages work that is innovative but not yet ready for rigor.

Ironically, HCI conferences have the same problem with design papers. HCI as a field is very scientific, built around empirically measurable results. Design, on the other hand, focuses on solving wicked problems in poorly understood areas where measurements are not very useful yet. I wonder whether PL/HCI would have the same problem.

Perhaps publishing is not the best way to present innovative ideas. Science traditionally involves some form of rigor, and if you take that out, you might not be doing science anymore. Instead, a "formal" exhibition and critique might be more appropriate; think Strange Loop and crazy PL fests. But then the question is: how do we justify our funding?

Academia is one model in

Academia is one model in which invention can happen. Research labs in big companies often have less pressure to publish, although they face another pressure: to do something that will eventually improve the company's bottom line. Yet another model is starting a company that sells a product based on new ideas (e.g. a more usable programming language). Historically, programming language companies have not done very well, unfortunately.

IBM has pretty much left

IBM has pretty much left Eclipse to its community, and Oracle is not really a serious contender today (just as Sun wasn't), so the only big companies doing languages and tools are... Microsoft and Google (disclaimer: I work for the former). Of those two, Google reuses a lot of community work, while Microsoft is famous for going it alone on its whole stack.

We are seeing something of a renaissance starting in small dev-tools companies; consider Cloud9 and JetBrains. I'm especially impressed with Cloud9's emphasis on design in their IDE; they are definitely a company to keep an eye on.

It's completely possible that someone on their own will just come up with a language and it will be adopted (like Python, Ruby, Perl, ...). I wouldn't discount the independent approach either.

By ‘doing’ you probably

By ‘doing’ you probably mean producing the software – compilers, IDEs etc. But as regards design, there is not a single p.l. from Microsoft that I would consider really original. The distinction is important, as the preceding discussion has to do with innovation and invention rather than production.

Do you know how many PLs

Do you know how many PLs come out of Microsoft and Microsoft Research? Many of them you have probably never heard of (the same is true of Google and IBM). I grant your point, though: big companies are going to focus more on production than on cutting-edge innovation. In fact, it's hard to make real money to satisfy shareholders with PL innovation (consider Apple in the 90s). On the other hand, I believe it's definitely necessary for our industry and will have to come from somewhere.

Academia is one model in which invention can happen

Duh. If I look at how academia and colleges are progressing in Europe/the Netherlands, then academia has been cut down to pencil-and-paper proofs, preferably cata-gory theory (which just rephrases what we already know), and colleges are doing Flash these days. I.e., all possible routes to invention have been cut. It's up to US and Chinese industry now.

Ivory tower, blah blah blah

I find this kind of imprecise, bitter, negative comment irritating. "Academia is only doing symbol pushing with no relevance or creativity whatsoever" is so cliché that my brain shuts down when I read it.

If you have a specific complaint that could spawn a constructive discussion, please make it. We are all fond of thought-provoking, controversial statements. But if you're only writing this out of frustration, please find a different way to vent.

I'll try to dilute the negativity of my own message by answering a different message from the same thread.

But as regards design, there is not a single p.l. from Microsoft that I would consider really original.

I don't know about the other parts, but Microsoft Research is certainly doing important work in some areas. I don't know what you consider "really original", and it is probably too soon to tell what will be ground-breaking, but there are a lot of interesting things going on there¹, with top researchers. Granted, you could say that Microsoft Research is academia, but that's partly the point.

¹: I began making a list, but it turned out to be mostly a listing of MSR activities in the domain I know, that is, statically typed languages. Just look at their research pages and publications.

Graph theory. Discrete math.

Graph theory. Discrete math. Cryptography. Complexity analysis. Compiler construction. Automata theory. Microprocessor design. Automated testing tools. Network analysis. AI, neural networks, classification algorithms. And then all the multidisciplinary crossovers one can do.

And instead everybody and her dog is moving towards doing category theory/fundamental type theory? Abstraction is nice, but one can go too far. Studying the meaning of life will not magically fill your bank account.

Call me old-fashioned, but if you want to kill innovation, then moving consistently into more abstract regions is a good manner of doing so.

Then again, everything follows pork cycles, and the trend may already have been reversed.

I frankly doubt "everybody

I frankly doubt "everybody and her dog" are moving towards category theory. Some people are, because they are interested in it and think they can make a contribution in this area (which has occasionally been useful for more practical purposes; see Moggi's monads, "An idealized MetaML", or, for a more extreme example, linear logic), but there are also a lot of examples of people moving from such theoretical stuff to more concrete matters, motivated by personal interest or possibilities of funding. See Erik Meijer, or maybe Benjamin Pierce.

Do you have examples of people moving from practical fields to more abstract theories? In the typing community, I can think of Lars Birkedal and maybe Nick Benton: they began in the tradition of typed languages and moved to more abstract, partially categorical fields. But this move is clearly motivated by the need to find better instruments to reason about increasingly sophisticated language features, and I certainly don't think this move into abstraction is of no practical interest.

I'm under the impression that the appeal of theory has always been strong in the programming languages community (domain theory, the "Categorical Abstract Machine", aha). It's important that people don't get isolated in that mindset, and Sean McDirmid is of course right to call for more diverse approaches to programming language reasoning and design, but I don't see this¹ as hurting the field. Moreover, besides the interest of theory for the field itself, the crossings with mathematicians and logicians have certainly brought excellent results, such as Coq and Agda, which are helping us deal with increasingly subtle/difficult questions.

¹: people attacking a field with their own tools and ideas, according to their center of interest instead of what you/someone thinks is most useful at a given time, as long as they follow a reasonable scientific methodology.

Bad Hair Day

Nah. I'll admit to having had a bad hair day. As for the evidence: what is the difference between a lot of papers published in the seventies/eighties and now? More and more abstraction, whereas they used to do 'stuff.'

And I think there's an end to working more abstractly in CS. Seriously, what's the difference between reformulating things more abstractly in math and moving on towards 'let's start thinking about math, and then life in general,' in the hope that we'll build better tools?

People can do that, but preferably a small minority of CS people.

(As I said, I'm not sure

(As I said, I'm not sure there was less theoretical stuff in "a lot of papers published in the seventies/eighties" than there is now.)

I'm probably speculating just as much as you are, but when I'm asking myself "why would a programming language researcher start to publish papers about monoidal traced categories", the most natural answer does *not* seem to be "to think out loud about life in general".

Category theory and other math-heavy theoretical stuff does not come easily: you have to work to understand the concepts, how to use them, and how to relate them to what you already know. I doubt people would suddenly start studying those topics -- on which mathematicians already have a good head start, so it's difficult to compete on the topic in itself independently of practical applications -- unless they have good motivations to do so. The most natural motivation, I suppose, is to solve a hard problem you're having.

In programming language design, there have been lots of hard, open questions. Some of them are explicitly formulated ("How do we prove that two programs are equivalent?", "How do we prove that this type system is sound?", "Can we infer types instead of asking the user to annotate them?", "Are those two formulations of related things actually the same?"), and some of them are more vague ("Why the hell is this thing so difficult/complex/baroque? It all started out of simple ideas!"). People are trying to find tools to crack them; sometimes they reuse existing theoretical stuff, sometimes they create new things (and maybe relate them later to theoretical stuff), and sometimes they find out (usually later) that simpler solutions also work.

To take a concrete example, reasoning about local state, in particular in the presence of higher-order functions, is extremely hard. You have a function that uses some mutable state for optimization purposes, but you're confident this function is observationally pure: how do you show it's equivalent to the corresponding less efficient pure program? This is a simple question of practical interest, which has prompted important theoretical developments, including category-theoretic stuff, metric spaces, bisimulations, game semantics, etc. We still don't have a complete story, but we are getting close. When we have a good understanding of the matter, maybe (hopefully) we will find explanations in simple terms, with simpler syntactic approaches, and we will be able to tone down the theoretical aspects -- much as the Haskell people are trying to do with monads. But that certainly doesn't mean that the theoretical tools reused or developed were not valuable.
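A minimal Python sketch of the kind of program at issue (my own illustration, not from the comment): a function that mutates a private cache for speed, which we would like to prove observationally equivalent to its pure counterpart.

```python
# A function that uses mutable state (a cache) for optimization,
# yet is observationally pure: callers cannot distinguish it from
# the slow pure version, except by timing.

def fib_pure(n):
    # Slow, obviously pure reference implementation.
    return n if n < 2 else fib_pure(n - 1) + fib_pure(n - 2)

_cache = {0: 0, 1: 1}  # hidden mutable state, invisible to callers

def fib_memo(n):
    # Fast version: records results in _cache as it goes.
    if n not in _cache:
        _cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return _cache[n]

# The equivalence we'd like to *prove*, here merely spot-checked:
assert all(fib_memo(n) == fib_pure(n) for n in range(20))
```

Spot-checking is easy; a proof that the equivalence holds for all inputs, in a language with higher-order functions, is where the theoretical machinery comes in.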

Man, whatever...

As I said before, the monad is the best example of doing category theory wrong. Its type is, by heart, a -> M a and M a -> (a -> M b) -> M b, and monadic use relies solely on the fact that one constructs values (in a type) which can never be deconstructed again. The fact that monadic IO works therefore has nothing to do with category theory, but with the fact that there is a function hidden somewhere which can run a value constructed in that type. It is a magician's trick: people are looking at the rabbit instead of the hat and the hand, and it is explained as a rabbit by people who want to score cheaply.
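The "hidden function" the commenter describes can be sketched in a few lines (a toy model of the idea, not how Haskell's IO is actually implemented):

```python
# Toy model: clients build Hidden values with unit/bind but, by
# convention, never take them apart; only `run` below does.

class Hidden:
    def __init__(self, thunk):
        self._thunk = thunk  # the concealed payload

def unit(a):                 # a -> M a
    return Hidden(lambda: a)

def bind(m, f):              # M a -> (a -> M b) -> M b
    return Hidden(lambda: f(m._thunk())._thunk())

def run(m):                  # the one place a value escapes the type
    return m._thunk()

prog = bind(unit(20), lambda x: unit(x + 1))
print(run(prog))  # 21
```

Whether the abstraction barrier or the categorical laws are "the real point" is exactly what this exchange is arguing about.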

Man, we both know what the problem is: I am too honest to be around scientists and you are not.

Whatever, talk to the hand.

Talk about bad hair days...

Wow. I'm pretty sure "talk to the hand" is a new low-water mark for the level of discourse on LtU. On the other hand, I lolled.

So many bad hair days...

That I think I am going to unsubscribe. Nice that you laughed about it, though.


The fact that many think mathematical notation is the best analogy for a PL has done lots of damage over the history of our field.

Although there might be some truth to that, the reverse is much more obvious: the fact that many think math and mathematical rigor are irrelevant to programming has done incredible amounts of damage to the craft.

It goes both ways right?

It goes both ways, right? Both extremes are untenable, and lead to the huge communication gap that exists between research and practice.

Theory and practice

"Theory is when you know something, but it doesn't work. Practice is when something works, but you don't know why. Programmers combine theory and practice: Nothing works and they don't know why."

- Unknown

much appreciated, but

so is a sonnet.

Sonnets, as they go, are not quite "frozen".

I apologize for bringing anecdotal evidence, but I have it from a few poets/artists that when you are writing a poem or painting a picture, you are in search of what your subject means (to you) as much as anything else. Until the line has been penned, the poet did not quite know that his love was like a red red rose, or whether it was only like a monadic burrito. Nor does he have an idea as to why it is like a red rose. One can hardly say that it was a frozen idea.

However, if you already know that it is like a red, red rose, and what a red rose is, you can definitely "express" it in Java:

class MyLove implements RedRedRose {
    // whatever the implementation is
}

When you are writing a program, you already know what you intend to do. The idea is dead.

In an essay or a poem, or even an "artistic" post like yours, your thoughts are alive. If I may hazard a guess, I believe that your own thoughts about the subject got refined as you were writing it. (For instance, perhaps that "worse than even German" joke wasn't part of your thought process when you started writing your post.)

When I write a program, I

When I write a program, I really don't know what I intend to do. I write something, see how it looks, see how it runs, change it and rewrite it, to discover something. At least that's how it is in my case; maybe there are some people who see coding as simple dictation because they've already figured everything out. I envy those people in some ways, but it seems like it would be a rather dull experience.

Social Models for PL

Your comment reminds me of a discussion from a while back, where I argue that social models are more relevant for PL than cognitive models.

I still hold that opinion. (Though, I also think languages have vast room for improvement in both dimensions.)


epic thread, man.

Addressing the question

Is this kind of HCI based research going to lead to better languages? Or more regurgitations of languages people are already comfortable with?

I think there has not traditionally been a large (or even medium-sized) community of PL+HCI research, as noted by the organizers of the PLATEAU workshop at Onward/Splash, among others. It's hard to say for sure what (if any) progress this will lead to, but there are generally some rewards for exploring research gaps between two large fields.

As an undergrad hoping to start graduate work in precisely this field next year, I can say that avoiding simple regurgitation seems like basic due diligence. I don't think that the fact that the research is coming from an HCI angle makes it any more likely to be some marginal change that nobody cares about. As the syllabus quote that you selected implies, many ignored languages come from the PL community.

The OP has the wrong

The OP has the wrong impression, I think. HCI doesn't in itself lead to better languages; it can only uncover those principles that are good in existing languages using empirical analysis. In other words, HCI is not design! In many ways, PL/HCI is more science than the implementation and theoretical aspects of PL (which are more about engineering and math). Be prepared to spend lots of time doing interviews and experiments, and not much time doing your own novel designs, which would compromise your objectivity in performing those experiments.

That is, the ideal PL/HCI researcher would look at Haskell or Scala and not have any emotional attachment to it, so they could be very objective about its usability properties.

All of the languages that have come out of PL/HCI so far (think Hands and Kodu) are very simple experiences aimed at beginners and non-programmers. Obviously usability is very important to these groups, but I have another theory why they haven't yet created more advanced programming experiences: complexity doesn't user test very well. Take a new language with some new feature X that isn't very stripped down and simplified, and 99 out of 100 people will think it's too difficult to ever be useful. I'm very skeptical that, with the current approaches to PL/HCI, they can escape this problem.

Forgetting the past

All of the languages that have come out of PL/HCI so far (think Hands and Kodu)

Small anecdote: when Kodu was demoed to us, I asked one of the leads (IIRC) whether Dijkstra's Guarded Command Language was any inspiration for them. Answer: "No... Who is Dijkstra?"...

Actually, they have some

Actually, they have some great ideas even if their influences are indirect. They were inspired most by Brooks' subsumption architecture, via a book on robotics programming. This worked out very well for Kodu and I've based my own work on similar ideas (via them), though I'm aiming for a much higher ceiling. We could argue all night about whether Brooks is better or worse than Dijkstra; they are obviously very different and meant for different things (i.e., do not try to use GCL to program robots!).

The problem

I agree with you in general, but I admit that I'm also poorly informed. Has the type of empirical user-driven study so beloved of HCI ever led to a substantial innovation in tools for experts, in any field? For instance, any type of software, musical instruments, manual tools, etc. I am really very interested in specific, concrete examples of innovations which have given asymptotic improvements in some dimension of the expert's work, whether it's productivity, correctness, or whatever. In other words, I am interested in improvements that remain even after a substantial learning period is allowed (say, 1000 hours at a minimum).

Still interesting to me, but less so, are examples that successfully leverage the investment of existing experts (so, say, several thousand hours at a minimum) in existing tools, rather than improving the experience of the rank beginner. But again, I am interested in concrete examples of success.

Of course it's fine if the answer is "no," but I would prefer them in that case to be more up-front about their goals...

1. Here's a recent survey

1. Here's a recent survey for software engineering: . Unfortunately, the metric is citation impact, which is vague; e.g., cross-domain citations might suggest practical impact more than same-domain ones. Anyway, the essential result seems to be that a user study isn't a ticket to impact, but if a paper does have high impact, a user study is probably part of it.

2. I've been very interested in a technique based on understanding user communities: interventions based on fitting a social group to a diffusion of innovation model and then appropriately targeting individuals (e.g., community leaders) and processes (e.g., awareness, trial periods, decisions, etc.). This has been used for significant results in preventing HIV, getting people to drink clean water, etc., though I haven't seen it phrased as a mechanized tool.

3. You just portrayed a common perspective on user studies, but for many types of qualitative analysis, that misses the point: . The OOPSLA paper seems to support this.


Thanks for the links. I don't really see in either of these papers the type of concrete, specific examples of impact on the work of experts that I was hoping for. As silly examples, I have found that Unix is much better for the serious user than Windows, that xmonad and awesome are better window managers than, say, Ubuntu's new Unity or Mac OS, and that in the progression from MS-DOS editor to vim to Intellij IDEA, I became a much more productive programmer. In not one of these cases do I believe we've even approached the "perfect" solution. This is the type of usability that I'm most interested in.

But I guess this mostly seems like a case of being more up-front about goals, as I originally suggested. I frankly admit that my only real interest in HCI is in "implications for design," as Dourish puts it, and specifically implications for the design of power tools. If that is a poor metric for evaluating user studies, fair enough, but since that's my interest, I guess I will just end up looking elsewhere.

Also, I do think there's something perversely amusing about referring to "best paper awards and citation counts" as "external measures of impact." Perhaps that's a term of art that I don't understand, but you'll forgive me for hoping for slightly more external measures of impact than that.

Incidentally, I do realize that I "portrayed a common perspective," which was partly why I was willing to portray it so strongly, even perhaps somewhat more strongly than I feel. I think it's a necessary viewpoint for HCI to address. And while it's a common perspective, I remain not yet fully convinced that it's misguided...

Part of an expert's training

Part of an expert's training is a deep learning of their tools, which are necessarily fairly complicated (think AutoCAD or Illustrator; programming languages and IDEs are actually simple by comparison). Accordingly, anything with a large learning curve is very difficult to evaluate empirically, given that the way we do experiments depends on lots of users, control groups, and time. If you can't measure it empirically, you are stuck with expert walk-throughs that are not very objective. It's not that HCI isn't interested in these hard problems; it's that they lack the tools to deal with them and so pursue things they can measure instead (end-user performance).

Advanced testing

There has been some rather good advanced testing, but it has been ad hoc. "Write a correct optimizing pattern-match compiler" is obviously a rather advanced task.

For example, create a symbolic expression simplifier: the problem is to simplify symbolic expressions by applying the following
rewrite rules from the leaves up:
rational n + rational m -> rational(n + m)
rational n * rational m -> rational(n * m)
symbol x -> symbol x
0+f -> f
f+0 -> f
0*f -> 0
f*0 -> 0
1*f -> f
f*1 -> f
a+(b+c) -> (a+b)+c
a*(b*c) -> (a*b)*c
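To make the rules concrete, here is a minimal illustrative sketch in Python (the actual benchmark implementations were in Qi, Common Lisp, and OCaml/F#, none of which are reproduced here; the tuple representation is my own assumption, not part of the original problem statement):

```python
# Expressions are tuples: ('rat', n), ('sym', name), ('+', a, b), ('*', a, b).

def rewrite(op, a, b):
    """Apply the rewrite rules to a pair of already-simplified operands."""
    if a[0] == 'rat' and b[0] == 'rat':            # rational n (+|*) rational m
        return ('rat', a[1] + b[1] if op == '+' else a[1] * b[1])
    if op == '+':
        if a == ('rat', 0): return b               # 0+f -> f
        if b == ('rat', 0): return a               # f+0 -> f
    else:
        if a == ('rat', 0) or b == ('rat', 0):     # 0*f -> 0, f*0 -> 0
            return ('rat', 0)
        if a == ('rat', 1): return b               # 1*f -> f
        if b == ('rat', 1): return a               # f*1 -> f
    if b[0] == op:                                 # a+(b+c) -> (a+b)+c, a*(b*c) -> (a*b)*c
        return rewrite(op, rewrite(op, a, b[1]), b[2])
    return (op, a, b)

def simplify(e):
    """Simplify from the leaves up, as the problem statement requires."""
    if e[0] in ('+', '*'):
        return rewrite(e[0], simplify(e[1]), simplify(e[2]))
    return e                                       # symbol x -> symbol x
```

The interesting usability question the benchmark raises is exactly how awkwardly (or naturally) each language expresses the rule set; a pattern-matching language expresses each rule as one clause, whereas the sketch above has to encode them as nested conditionals.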

This was implemented in Qi, Common Lisp, and OCaml/F#. Lisp had the problem that the elegant solutions were incredibly slow and the faster solutions were extremely non-intuitive. link to summary.

The functional programming language shootouts, which test programmers' ability to complete tasks in a given amount of time and then examine runtimes, are I think rather valid tests of HCI in programming languages.

Continuing studies

As an undergrad hoping to start graduate work in precisely this field next year

Have you already picked schools to apply to? There is definitely a growing interest in studying these problems in our department at McMaster University.

I'm in a similar situation

I'm in a similar situation as the commenter quoted above, and although I don't want to start grad school immediately after graduation, this is the exact field I'd like to get into. Unfortunately, it's kinda hard to find undergraduate courses like this at my university, Simon Fraser; I guess that automatically makes it a graduate level topic.

UBC and UVic both have

UBC and UVic both have strong PL sections, perhaps you could take a course at another nearby university to get some exposure.

Traditional CS/HCI

Traditional CS/HCI researchers are historically biased to topics like end-user programming, novice programming, psychology of programming, tangible/direct programming etc., which misses most of the big usability questions IMO.

I've found a lot of traction elsewhere. Many universities have a communications or information science department, where they'll have specialists in CSCW and related fields. If you view a PL as a human communication medium, you'll hit a treasure trove of theories and techniques. You may find a bit of luck with cognitive scientists as well, but that's probably too much of a stretch unless you find the 'right' researcher.

Best of luck!

Traditional HCI doesn't go

Traditional HCI doesn't go near programming often. Otherwise, most of the work has been done at places like CMU that are strong in both.

I'm still more interested in design, not HCI, where HCI seems to be more about measuring, classifying, and explaining results while design is more oriented to creating those results. Of course, there is a feedback loop, and so we generally consider both topics together.


Looking at the syllabus it seems like the goal is to encourage the students to conduct a study. Find something testable and test it.

I think the reading list gives a pretty clear indication of the general direction:

Cairns, P. and Cox, A.L. (2008) Research Methods for Human-Computer Interaction. Cambridge University Press.
Hoc, J.M., Green, T.R.G., Samurçay, R. and Gilmore, D.J. (Eds.) (1990) Psychology of Programming. Academic Press.
Carroll, J.M. (Ed.) (2003) HCI Models, Theories and Frameworks: Toward a Multidisciplinary Science. Morgan Kaufmann.

I'd like to see some results based on controlled field research. There are huge ranges of debates about what happens in practice between various languages in terms of productivity, in terms of maintainability, in terms of correctness. I'd love to have some data to back up all the opinion.