Latest usability for polymorphism?

Thinking about Scala recently (trying to grok how self types are used), I found it interesting to note that there are many features of a language's semantics which must play together well. E.g., there are several types of polymorphism. I wonder how a language design can best make all the semantics usable (as in usability) in the three contexts of writing code, compiling code, and running code? For me, Scala might have gone too far, as others have said. It sounds like Fortress is considering self types as well. I guess I wonder if there will someday be a way to abstract the semantics somehow to keep the usefulness but lower the mental load on the developer.

I expect some people think this sounds like a weak developer looking for an easy way out; that isn't totally true in my case. :-) At the very least I think a devil's advocate could suggest that something complicated won't get far in industry.

[edit: hm, apologies if this sounds too much like already-discussed thoughts. I guess I am (re)asking the question of how far along the strict-typing path to go before things collapse under their own weight and people won't use them.]

Backgrounder

This is just a quick backgrounder on Scala's self types for those following the discussion who don't want to read the full Scala spec. Scala's self types allow mutually dependent abstract classes or traits.

More concretely, if every instantiable subtype of A must be a B, then in Scala (and other similar languages) you write something like
trait A extends B {...}

With that, methods written in A may use methods (abstract or otherwise) defined by B. Conversely, if every B must be an A, then
trait B extends A {...}

But if every B must be an A and every A must be a B, then that pattern doesn't work (the two traits can't extend each other circularly). In Scala you would use explicit self types instead.
trait A {self:B => ...}
trait B {self:A => ...}

You can then have further traits A1 and A2 that extend the A trait (restating its self-type requirement) and similar B1 and B2 traits that extend the B trait.

With that, this code is illegal, because Foo claims to be an A1 without also being a B:
class Foo extends A1 {...}

But these are all legal
class Foo1 extends A1 with B2 {...}
class Foo2 extends A2 with B1 {...}
etc.
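
To see how the pieces fit, here is a minimal, self-contained sketch; the members nameA and nameB are invented purely for illustration, and the subtraits restate the self-type requirement as Scala expects:

trait A { self: B =>
  def nameA: String
  def greet: String = "A paired with " + nameB   // code in A can use members declared in B
}

trait B { self: A =>
  def nameB: String
  def reply: String = "B paired with " + nameA   // and vice versa
}

trait A1 extends A { self: B => def nameA = "A1" }
trait B1 extends B { self: A => def nameB = "B1" }

// class Bad extends A1 {}          // rejected: Bad's self type does not conform to B
class Both extends A1 with B1       // accepted: Both is simultaneously an A and a B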

Down the rabbit-hole...

I'm afraid this will quickly become a flamewar, but as one of the first commenters, I feel that I have the privilege of indulging myself...

You raise a reasonable question, but an unfortunate one, in that I don't think there's any way to actually answer it. In other words, it's a conversation-stopper (or at least a "productive-discourse-stopper"), and as such it's sometimes a good rhetorical move, but also a frustrating one.

As noted in the Blub Paradox, the problem with this argument is that it's almost impossible to predict how you'll feel about a particular feature until you've climbed the learning curve. All you have is people on one side saying, "It's worth it," and people on the other saying, "Looks stupid from here."

This is also, incidentally, why I really don't buy that PL research needs more usability studies. Maybe in a few well-trodden niches this makes sense, but frankly, I think it's putting the cart way before the horse. Usability studies don't work well for any artifacts with an inherently steep learning curve. For instance, can you imagine the result of a usability study on the first bowed string instrument? I mean, really, can you imagine a focus group on the violin? Give me a break. But even now we really don't know of any easier way to do what people can do with a violin.

This applies in any situation where there's an inherently long time to proficiency: learning a musical instrument, learning a natural language, etc. It becomes impossible to do meaningful comparison of difficulty, for instance, and it becomes impossible to predict whether a particular design will be any good, without first investing a lot of time and thus biasing yourself beyond all hope! You might simply say that it's also impossible for society to accept change, then, in such a design space, but for every theremin (failure) there's a TB303 or an electric guitar (wild success).

So, all you can do is say "I think that once I master this tool, I will love it" and then set out to build and master your tool with (if you're lucky) a group of like-minded craftsmen. But until you've mastered your tool, you just don't know.

Frankly, I think self types are a very powerful and valuable addition to the OO toolbox. But I wrestled with the idea for a while before understanding it. And I'm just one data point.

The question of how much learning curve the mainstream market will tolerate is a difficult one, and it can only be answered empirically. I doubt any sufficient model exists to make any accurate theoretical guess. (I know there are rules of thumb like "any change must produce an order-of-magnitude boost in productivity in order to be widely adopted" etc., but when it comes to languages, there are IMHO just too many variables.)

Good points

I'm trying to re-think, in light of your comments, what the nugget of my nebulous thinking really is.

I still do think usability could be one of those big deals for programming language design and implementation, but I see your point that it is hard to know what the right answer is. How does one deal with the fact that things really are complex, yet make them abstract enough that they don't feel complex, without breaking something crucial (performance, correctness, etc.)? Maybe that's way too Star Trek to really happen.

Perhaps this just all boils down to asking, "when do you think using [say] polymorphism leads to more trouble than good?" where trouble can be cognitive load as well as runtime exceptions (or "message not supported" responses).

We call it typiiiing!

but for every theremin (failure) there's a TB303 or an electric guitar (wild success).

I'm pretty sure Roland's designers weren't thinking "Acieeeed!" when they designed a cheap bass guitar emulation! In fact, we're all pretty lucky they let the resonance knob turn that high when a smaller range is far more natural for bass guitar purposes. But if others are doing things you like with it, then it may just be worth playing with.

Well, they searched the world over...

Why PL needs a lot more usability testing

I just created an account to reply to the above post. Just for background purposes, I teach human-computer interaction at Carnegie Mellon University, and am part of the School of Computer Science.

I believe the above poster is making an argument similar to one Doug Engelbart made long ago about the difference between tricycles and bicycles. More specifically, if ease of use were all that mattered, then we would all be riding tricycles.

However, I believe that the above post shows a common misunderstanding of the nature of user tests, and human-computer interaction more broadly. Specifically, HCI is not just about ease of use for "walk up and use" interfaces. We advocate that designers should understand the context of use, set appropriate goals, and measure that we are achieving those goals. Like security or performance, it is holistic and something that has to be intentionally designed in from the beginning, rather than slapped on at the end.

There have also been several studies looking at expert performance of user interfaces, spanning several weeks or months, to measure such things as learnability and overall performance. I believe these kinds of studies could address many of the concerns the above post makes.

Human-computer interaction can also provide new insights for programming languages. To give a concrete example, I will refer readers to the Natural Programming project at CMU, which is looking at how programmers (and non-programmers) already work and developing better tools to streamline those existing practices.

This project has led to better debugging tools (supporting "why" and "why not" questions, which turned out to be how every programmer studied phrased their debugging questions), a better understanding of what kinds of APIs are easier to use (it turns out, quite surprisingly, that objects with lots of constructors don't fare as well as those with simple ones), and better ways of validating that data is correct. Companion projects at other universities have also examined, for example, the role of gender in programming (finding that men tend to tinker a lot more when programming and debugging, which may suggest better ways of teaching computer science).

So, in a narrow sense, I would agree that user testing for ease of use is not enough, but I would also argue that human-computer interaction has *a lot* more to offer programming languages than might appear at first glance.

Studying the way people think

A lot of what you say seems to be oriented around HCI's ability to get a little bit into people's heads and figure out what they're thinking as they do something. As you say, this alone can be quite helpful in driving the design of interactions.

But Matt's point seems to be this: how do you do usability studies about something that people literally cannot think about until they've mastered it?

It seems you can do some cognitive modeling in the study of the pedagogical techniques (e.g. how do students go wrong when we teach them that monads are like containers, spacesuits, assembly lines, elephants, or category theoretic structures with simple algebraic properties). But it's hard to see how to study whether a concept creates too much cognitive burden given its benefit or is stupendously useful given some investment of skull-sweat.

I agree that HCI can offer a

I agree that HCI can offer a lot. The number-of-constructors result is actually very interesting, as I came to the same conclusion a while back. I prefer functional constructor semantics as a result: only one constructor!
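
For example, here's a hedged Scala sketch of one way to keep to a single constructor (the Duration class and its factory names are invented for illustration): one canonical constructor, with the alternatives expressed as named factory methods on a companion object rather than as constructor overloads:

// Invented example: a single (private) constructor plus named factories
class Duration private (val millis: Long)

object Duration {
  def fromMillis(ms: Long): Duration  = new Duration(ms)
  def fromSeconds(s: Long): Duration  = new Duration(s * 1000)
  def fromMinutes(m: Long): Duration  = new Duration(m * 60 * 1000)
}

// Duration.fromSeconds(90) reads better than guessing which of several overloads takes seconds.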

I think the "HCI is not just about ease of use for "walk up and use" interfaces" point is important. HCI can help improve current interfaces for the experts that are already programming.

Logical education

I'd have to say that part of this is the nature vs. nurture issue. HCI, in a sense, exacerbates the problem, insofar as it is a problem.

Usability studies dumb down what is hard to understand, regardless of its actual complexity. To reference Papert, if kids lived in mathland, we'd all be fluent in math. The closest that someone can come to living in logicland is to be a programmer and work in predicate logic languages. The rest of the world lives in a different country, socially and cognitively, wherein very little of their thought processes are actually rational. If you study the statements and arguments of even very well educated people, they typically cannot make a logically coherent point in their own natural language, let alone in some formal system where ambiguity would be less tolerated. The fact that the English "or" can mean "or", "xor", or (in some cases, such as when De Morgan's laws are concerned) "and" is a testament to how irrationally people think.

Instead of dumbing down languages to the point that or, xor, and and are the same operator (hyperbole at the moment, but seemingly all too looming), we should educate the public on rationality on a larger scale. Public education (being what it is) cannot possibly work for this, because public education is designed in such a way that nothing ever gets learned except in some statistically irrelevant cases. Instead, we need to promote factors in society wherein rationality is rewarded.

Complementary, not opposed

I don't see why HCI and rational thinking are entirely at odds. Our ability to think using rigorous logic is ultimately limited in terms of speed, capacity, and accuracy, or else we wouldn't need computers. Given those limitations, doesn't it make sense to make interfaces that play to our strengths without demanding too much from our weaknesses?

Or, to put it another way, I could invent a syntax for Prolog based on Gödel numbers. I could prove it formally equivalent. But would you ever want to use such a monstrosity?

I think people are reading

I think people are reading far more into this than is being said. No one ever said anything about dumbing down languages. The HCI researcher instead said that they study how real programmers solve real problems, and then help design tools to assist and streamline that same process. How can that be a bad thing?

Welcome to LtU!

Welcome to LtU! Thanks for the reasonable and balanced response. I didn't mean to imply that there is no place for user studies in programming languages. I'm familiar with some of the work your group has done, and in fact, Brad Myers visited my research group last year and showed a few impressive tools. I'm also familiar with his ICSE07 paper on the factory pattern, which is a very nice piece of work. Also, I would never pretend to be an expert in this area, so I'm sure there is a lot of good stuff that I'm not aware of.

On the other hand, I do subscribe to Doug Engelbart's argument, to some degree. I don't think it's impossible to do user studies of these kinds of artifacts, I just think it's rarely economical. If you grant the existence of artifacts or skills that take upwards of ten years to master, I don't see a cost-effective way to study the quality of a substantially new design. Furthermore, I suspect it would be difficult to find subjects willing to submit to such a long course of study without evidence that there will be a market for the resulting skill. In any case, I'm not aware of controlled studies that attempt to track a group of language users over such a long time period (although I am of course aware of such longitudinal studies in other fields).

There is only one problem that I see in principle with longitudinal studies of this nature: they fail to capture the network effects of widespread adoption. For instance, imagine giving a handful of users ten years to play the first set of violins. In the absence of qualified teachers, centuries of established discourse on technique, and a global culture of violin virtuosity, even after some years you're unlikely to find excellent players who are happy to have had the opportunity to study the instrument. In other words, I believe there are artifacts that take time not just for individuals to master, but for society as a whole to master.

Incidentally, I know that some approaches to HCI are based less on direct observation of subjects using a particular design, and more on evaluating the usability of a design in the light of some more general models of human cognition, learnability, usability, etc. (Very roughly akin to the observation that a seven-fingered glove is never likely to be a big seller.) I'd like to be clear that I'm not talking about that kind of work in this post (or the last), I'm talking very specifically about user studies of the "focus group" variety.

I do see a strong benefit to empirical usability studies in some cases:

  • where a short-term measurable benefit is expected,
  • where an existing pool of target users can be identified,
  • where a range of choices exists in the wild already, and/or
  • when short-term market acceptance is critical to success.

Mostly I would expect this to describe incremental changes to existing artifacts, e.g., "What is the ideal dimension and weight of piano keys (for expert players or for young children or some other delineated group)?" rather than, "Which is harder to master, oboe or violin?" I wonder if you would disagree with this characterization? Would you, on the extreme end, argue that it is possible to design usability studies for natural languages, and to determine in this way whether some designs are fundamentally more learnable or usable than others?

I hope that at least this clarifies my position somewhat. Of course I'd love to find that we mostly agree, but in any case I definitely do not mean to denigrate the really great usability work that your group and others are doing with programmers and programming systems. I'm sorry if my last post was too strong.

And finally, I'd like to invite and encourage you to post some of the good papers in your area to LtU. We've often tried to encourage more content in software engineering and empirical language studies, but few of us follow the field closely enough to rectify the problem.

Programmers build their own tools

I think the fact that software developers build their own tools to facilitate their work mitigates the need for a lot of usability work. If something is too time-consuming, a developer will automate it.

Certainly studying successful developers and publishing their techniques is worthwhile, but where would you go from there?

That said, I think a lot of tools are highly undervalued and consequently ignored even by some advanced software developers: for instance, IDEs with "code completion", which help new developers or exploratory programming against an unfamiliar library.

Your comments call to mind

Your comments call to mind Dave Walend's three year old blog entry about Sharpening the Axe or Shaving the Yaks?. Dave stunningly captures what conditions provide the greatest opportunity for a developer to automate time-consuming activities:

I've got this great new project at work right now. The deadlines are very gentle and the boundaries are very vague. I am to "make the department's job [experimental planning systems research] easier" and I need to be done by September. I've started puzzling through what people in our group need by asking them questions and filling in a spreadsheet (more a letter to Santa), adding columns for my own thoughts, and sieving for things that look like real goals.

How many developers do you know who are not only successful, but have a "great new project" where "the deadlines are very gentle and the boundaries are very vague"? Honestly, are there any papers publishing those developers' techniques?

The interesting thing about IDEs with code completion, from a usability standpoint, is that they necessarily dictate whether you pick top-down or bottom-up. Charles Petzold lays out this argument rather effectively in his NYC.NET user group presentation Does Visual Studio Rot the Mind?:

For example, for many years programmers have debated whether it’s best to code in a top-down manner, where you basically start with the overall structure of the program and then eventually code the more detailed routines at the bottom; or, alternatively, the bottom-up approach, where you start with the low-level functions and then proceed upwards. Some languages, such as classical Pascal, basically impose a bottom-up approach, but other languages do not.

Well, the debate is now over. In order to get IntelliSense to work correctly, bottom-up programming is best. IntelliSense wants every class, every method, every property, every field, every method parameter, every local variable properly defined before you refer to it. If that’s not the case, then IntelliSense will try to correct what you’re typing by using something that has been defined, and which is probably just plain wrong.

Turn it off, then!

If a tool gets in the way of doing work, don't use it. No need to change the way you work just to suit the tool.

(That said, there are many reasons to prefer bottom-up design, especially for large-scale projects, that have nothing to do with the peculiarities of a particular tool...)

You got me!

Ah, you caught me. What you've basically done is point out that there is a difference between causation and correlation. However, what does it mean that so many in industry say that "Enterprise grade" tool support requires code completion? There's something intuitively appealing to those people. I can make the same argument the other way: UML depicts common top-down scenarios, like module hierarchies.

Realistically, I use a combination of top-down and bottom-up to build systems.

Turning off the bottom-up tools completely is not something programmers are likely to do. In fact, it seems I run into more and more programmers who use IntelliSense, ViEmu, Resharper, and CodeRush together! The most common scenario where you might not want to turn off code completion would be stub generation. Obviously, if you work top-down, then many of these stubs can be generated for you. Yet at least some users seem to love defining their own code completion templates and entering information into the skeleton structure of the code. A common scenario is generating unit tests.

What you see here is a scenario where the programmer figures out a way to manage his or her labor better, but hasn't figured out a way to eliminate it. And duplicating these skeleton structures everywhere adds code bloat and makes refactoring more difficult, if you think about it.

However, what does it mean

However, what does it mean that so many in industry say that "Enterprise grade" tool support requires code completion?

Well, I never said anything about "enterprise grade" when I mentioned code completion, mainly because I think it's a meaningless term. Code completion is useful, but perhaps that's merely because I'm stuck using C# for now. Don Syme of F# fame agrees with me, though. If code completion is still useful in an expressive OCaml-like language, then perhaps there really is something useful there.

However, I agree that tools which help you reproduce the same patterns over and over are just a symptom of an insufficiently expressive language. Then again, you may not recognize the underlying pattern until you've reproduced it a few times, so refactoring support still helps here.

If the "pattern"...

...is just a long name, then what is the refactoring? A shorter name?

That might save some typing, at the cost of obscuring the meaning of the program.

If the "pattern"... ...is

If the "pattern"... ...is just a long name, then what is the refactoring? A shorter name?

The patterns are not reusable abstractions, but repetitive code (copy-pasted code). The refactoring is thus a restructuring of the program to make use of additional parameterization and reuse as much code as possible. Yes, it can obscure program meaning somewhat, but with the benefit of code reduction and a correspondingly reduced maintenance burden. That's generally considered a good thing. Perhaps I'm not understanding the point of your comment.
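
A tiny invented sketch of the kind of restructuring I mean: two near-duplicate (copy-pasted) functions collapse into one function with an extra parameter:

object Totals {
  // Before: near-duplicate code, the kind of thing copy-paste produces
  def totalWithTax(prices: List[Double]): Double = prices.map(_ * 1.08).sum
  def totalWithTip(prices: List[Double]): Double = prices.map(_ * 1.15).sum

  // After: the repetition is factored out into a parameter
  def totalScaled(prices: List[Double], factor: Double): Double =
    prices.map(_ * factor).sum
}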

Exactly

It's difficult to come up with a more convincing way to say what you just said.

I'd tinker with what you said just a bit and caution against using the phrase "the benefit of code reduction". The thing that reduces the cost of maintenance is the elimination of labor. This can include hidden steps in the system. Sometimes, when you write a block of code, it's hard to foresee how it will affect maintenance until you're stuck making a change. For instance, although a for loop can consolidate a series of statements, all those statements can be further functionally dependent on program data. In some cases, all the for loop does is EXPAND the hidden expression and reveal that functional dependence in memory.

Usability factors

Hm. What are all the things that go into cognitive load when writing and reading and maintaining code? For example, there's the verbosity of the code (Java) vs. the terseness of the code (Haskell). I guess the verbosity is related to what abstractions are easily available in the language, both in terms of syntax and semantics. Like, if things really are Turing complete then you can do whatever you need to in Java; it is just quite likely to really suck to do it, even if you have all the fancy-pants Eclipse plugins you can shake a mouse at.

"Rarely economical"?

I don't think it's impossible to do user studies of these kinds of artifacts, I just think it's rarely economical.

What does "rarely economical" even mean? What context are you using it in? A lot of scientific research about end-user habits rarely has an economic payoff. For instance, take the observation that most drivers travel less than 100 miles per day and could therefore rely on hybrid electric vehicles (HEVs) and reduce dependence on gasoline, since plug-in HEVs can cover 100 miles per day. Although the observation is true, there is no economic payoff, because there is no environmental factor facilitating social change. Governments push back zero-emissions mandates further into the future, allowing civil engineering researchers to pursue alternatives to plug-in HEVs, like cars with hydrogen fuel-cell batteries. Ultimately, we will have a rich set of alternatives to choose from in the car market.

Similarly, researchers push back and ignore certain usability problems in programming languages! Edward A. Lee wrote a nice example of these usability problems in a position paper titled The Problem with Threads. In my own words, he was discussing how we've pushed back regulatory mandates. We know non-determinism "is bad for the environment", yet we do it anyway. Now that the processor industry is shifting design to adding more cores, people are being forced to rethink the design of concurrency models. -- Just as I'm sure we'll eventually be forced to consider alternatives to gasoline-powered automobiles.

I do see a strong benefit to empirical usability studies in some cases:

These seem VAGUE to me, in a BAD way. I'm TOO DUMB to imagine what these cases actually are in terms of where it improves quality and/or saves man-hours on a project. Moreover, the ideas that catch my eye in usability (like subtext, linked editing, relo, &c) seem to focus on maintenance. I can't focus on cases; I focus on activities. The quote I've said to Jonathan Edwards numerous times is: "Let's not manage labor, let's eliminate it." This is such a basic idea for usability! We all know the dangers of Mystery Meat Navigation, Features Only Accessible Through Sub-Menus, &c. We all acknowledge that what we're trying to do here is manage the user's labor rather than eliminate it. It's a grave mistake, and such an important issue.

How Much Learning? We use English, don't we?

"...how much learning curve the mainstream market will tolerate is a difficult one"

If we used one programming language our entire lives and started learning it at the age of 3, we'd be pretty good at it by the time we were 23. We use English that way and it's really complex.

So, I know English, but I don't really know if it's better or worse than Chinese or Spanish. As the man said, by the time we learn it we're so used to it that we can't evaluate it objectively. Language affects the way we think. Somebody famous said that.

If we had all been using COBOL, and only COBOL, for 30 years, we'd think in COBOL and it would seem Natural and Right. Using any other language would seem 'impractical'. And it would be, by then.

Lord save us from COBOL.

Successfully running in the other direction?

Are things like Adam & Eve (of course previously on LtU) successfully eschewing polymorphism in favour of generics? (Gotta find some free time to try such things out.) Presumably all tools can be useful under appropriate circumstances, and dangerous if one tries to apply them elsewhere. So perhaps OO-and-polymorphism has its place and the generic approach has its own (although Stepanov wouldn't agree, I guess).

Terminology

Note that OO people tend to use 'polymorphism' to refer to subtype polymorphism, while functional people tend to use the term to refer to parametric polymorphism (~generics).
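
To make the distinction concrete, here's a small Scala sketch (the types and names are invented) showing the two senses side by side:

// Subtype polymorphism: one call site, behaviour chosen by the runtime type
trait Shape { def area: Double }
class Circle(r: Double) extends Shape { def area = math.Pi * r * r }
class Square(s: Double) extends Shape { def area = s * s }

object PolyDemo {
  def totalArea(shapes: List[Shape]): Double = shapes.map(_.area).sum

  // Parametric polymorphism ("generics"): one definition, uniform at every type
  def duplicate[T](x: T): (T, T) = (x, x)
}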

And 'generics' is so

And 'generics' is so horribly overloaded it's not funny: it's always 'that new way to generalise that the community doesn't have a nice name for'. Type classes and the like can muddy the waters around 'polymorphism' further, too.
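
For instance, type classes add yet a third flavour, ad-hoc polymorphism. A rough Scala 2 sketch using implicits (the Show trait here is invented for illustration, not a standard-library type):

object ShowDemo {
  trait Show[T] { def show(x: T): String }

  implicit val showInt: Show[Int] =
    new Show[Int] { def show(x: Int) = x.toString }
  implicit val showBool: Show[Boolean] =
    new Show[Boolean] { def show(x: Boolean) = if (x) "yes" else "no" }

  // One name, different behaviour per type, resolved at compile time
  def describe[T](x: T)(implicit s: Show[T]): String = s.show(x)

  def main(args: Array[String]): Unit = {
    println(describe(42))    // prints 42
    println(describe(true))  // prints yes
  }
}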

It Doesn't Help...

...that some folks say "inclusion polymorphism" and others say "ad-hoc polymorphism" to refer to the same thing.