Escaping the Maze of Twisty Classes

Here* is a paper that was accepted for publication at Onward! 2012, related to the crazy PL idea that I posted about a few months ago and a follow-up to my last Onward! paper.

Abstract:

Programmers demand more extensive class libraries so they can reuse more code and write less of their own. However, these libraries are often so large that programmers get lost in deep hierarchies of classes and their members that are very broad in number. Yet language designers continue to focus on computation, leaving tools to solve library exploration problems without much help from the language.

This paper applies language design to improve IDE code completion that enables in-situ library exploration. Inference tackles depth by listing completions as long as the program can be “fixed” to support their selection; e.g. “pressed” can be listed as a widget completion since a widget can be a button. Influence mitigates breadth by leveraging types as completion selection models; e.g. a pressed event is more likely to be used on a button than a mouse event. We apply this design to YinYang, a language for programming simulations on tablets using touch-based menus.
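
To make the two ideas a bit more concrete, here is a minimal sketch in plain Scala rather than the actual (graphical) syntax; the Widget/Button/pressed hierarchy and the scores below are made up for illustration:

// Hypothetical hierarchy and scores, not the paper's actual YinYang traits or syntax.
trait Widget
trait Button extends Widget { def pressed: Boolean = false }

object CompletionSketch {
  def main(args: Array[String]): Unit = {
    val w: Widget = new Button {}            // the editor only knows w as a Widget

    // Inference (depth): "pressed" can still be listed as a completion for w,
    // because the program can be "fixed" by refining w's type to Button.
    val asButton: Button = w.asInstanceOf[Button]   // the repair a tool would insert
    println(asButton.pressed)

    // Influence (breadth): a selection model says which completions a programmer
    // is likely to pick given the receiver's type; unlikely ones get buried.
    val likelihood = Map("pressed" -> 0.8, "onMouseMove" -> 0.1)
    val menu = likelihood.toSeq.sortBy { case (_, p) => -p }.map(_._1)
    println(menu.mkString(", "))             // pressed first, onMouseMove buried
  }
}

The point is just that a completion can be offered as long as a repair exists that makes it type, and that a selection model decides which offers surface first.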

* Hopefully skydrive behaves better than last time.

Please see Ken Arnold's work on Scratch

His plug-in is not in Scratch proper and hasn't been released, but he has done similar stuff with search.

Mitch wouldn't let it into Scratch or be distributed, supposedly because searching for code is too grown-up a thing to do.

I've seen his work on

I've seen his work at Onward. It relies on natural language, which I think is a big problem since documentation either doesn't exist or isn't reliable. My approach is based more on ontology and probability tagging, which has its own set of limitations :)

Interesting paper

You hit on a very real problem that I experienced many times, and summarized it very clearly with depth & breadth.

[btw, this is probably an area where a user study would help: take a screen capture of programmers coding and, for each method/class they take a long time to find, examine how they found it and how to make that faster -- I think you'll find, for example, that people know what they want, know that it's probably there, and know some potential names, e.g. "Map/Dict(ionary)/Hash(Table|Map)/(Associative)Array/alist", but they have to go to Google to find it]

Why did you keep the new X operator? Couldn't the X be inferred too?

Do you have a prototype implementation ready? I'd love to try it out :) Or do you have a description of the type inference algorithm that is sufficiently detailed to implement it?

Overall it was an interesting read, but I have some trouble with understanding how the language & intellisense work. You describe many aspects, but I don't see the pattern. Sort of like:

2 ? 2 = 4
3 ? 1 = 3
-1 ? 1 = -1
1 ? -1 = 1

It is exponentiation, of course ;) But every time you think you've got the pattern, it turns out that you don't. Probably this will be addressed in section 4 or 5. An EBNF of your language (including preference/exclusion etc.), a pseudocode type inference algorithm, and an algorithm that determines what you display in the completion lists would help a lot. Also, it seems that this is really targeted at the language you described in your previous paper, but on the other hand you are using a different language in this paper?

Thanks for taking a look!

I'm not ready to user test yet. There are plenty of experimental results that I can draw on from POMs, Miller, Hick's, so I don't really lack that kind of data. But once the idea becomes more baked, I want to figure out how to do some targeted experiments.

The new operator doesn't really exist; the textual syntax I'm using in Sections 2 and 3 is not the graphical syntax I actually implemented. But you do have to seed your new object creation with some sort of vague type, so you need to invoke the "Make Widget" activity to get something on the screen; you don't actually get to call "new object". New-object entry points are controlled and are abstractions themselves.

The language is implemented, but it might not be what you are expecting. It's a graphical language in the spirit of Scratch, similar to what I described in my last paper. I'm totally releasing in May, but it will be for Windows 7/8 only.

The type inference algorithm is implemented; I'll talk about that in the tech section, though I'm bumping into it early in the breadth section I'm currently writing.

Again, I decided to use text for Sections 2 and 3 rather than the actual graphical syntax because (a) text is much easier to work with on paper for reading and writing, and (b) I don't want to conflate my discussion of the "idea" with the previous idea of "touch" or even behavior-oriented programming, so I try to minimize that with some simple Scala-like syntax that I hope people understand based on past experience.

I'm still struggling with how to make the connection between what can be inferred and what can be listed in a well-typed program. The fact that I talk about IntelliSense a lot but never actually show it bothers me. My plan was to dump all of that into the experience section, where I can show off nice menus with all the features present, but it might be too late by then.

Updated

Updated the file with a Section 3 on breadth and pref extension. This is much weirder than auto extension in Section 2 as I'm basically asking programmers to inject preference information into their class hierarchies. Also did a bunch of editing to make the story a bit tighter. Now I get to write the fun section :)

Ask not what your programming can do for your types

...but what your types can do for your programming. In other words, perhaps we should look at how types can be prescriptive and not just restrictive; not just for use in type checking, but also for inferring object attributes.

I think I've figured out how to shift the paper away from a language design dump to something more general and idea-oriented.

Conor McBride's "active types"

That resonates strongly with Conor McBride's views on typing. See, for example, his answer to Why do programming languages use type systems?

[...]I would like to offer the prospect that types might have an active role to play, structuring the process of program inference.[...]

To be fair, even in last century's typed languages, types had a beneficial organisational effect on programmers. This century, it's just possible types will have a comparable effect on programs. Types are concepts and now mechanisms supporting program-discovery as well as error-discovery. I think that's more than just gravy.

(Contrast this with Bracha's "pluggable type systems" position. Obviously there are radically different ways to see type systems.)

This is a great find, but I

This is a great find, but I can't find anything else on active types from Conor McBride or anyone else. I do like the idea of an anti-Bracha though :)

Conor's vision

Conor McBride has a strong, long-term vision that he often expresses in quite picturesque terms. I suppose he has presented this vision in similarly informal terms in other places, but it is to be expected that you won't find *papers* talking about it at great length. Conference papers are about presenting ideas in a short, fixed space, and they will tell you about conceptual progress happening in that wider research programme.

That said, you should have a look at his Why Dependent Types Matter (2005, with Thorsten Altenkirch and James McKinna), more precisely pages 16 and 17 (Sections "6.1 Dependent Types Also Matter Behind the Scenes" and "6.2 Programming Interactively"). Do not expect anything seriously practical, though; it's not ready yet (it was not at the time of writing, and it still isn't).

I suspect you will also appreciate some aspects of the conclusion, the following being a large quote:

The Epigram project is our attempt to plunder the proof systems based on Type Theory for the technology they can offer programmers. It is also a platform for radical experimentation without industrial inertia — an attempt to discover in which ways dependent types might affect the assumptions upon which mainstream functional language designs have been predicated. We are having a lot of fun, and there is plenty of fun left for many more researchers, but we do not expect to change the mainstream overnight. What we can hope to do is contribute a resource of experiments — successful or otherwise — to this design process.

More technically, what we have tried to demonstrate here is that the distinctions term/type, dynamic/static, explicit/inferred are no longer naturally aligned to each other in a type system which recognizes the relationships between values. We have decoupled these dichotomies and found a language which enables us to explore the continuum of pragmatism and precision and find new sweet spots within it. Of course this continuum also contains opportunities for remarkable ugliness and convolution — one can never legislate against bad design — but that is no reason to toss away its opportunities. Often, by bringing out the ideas which lie behind good designs, by expressing the things which matter, dependent types make data and programs fit better.

This is exactly what I'm

This is exactly what I'm looking for. I've heard of Epigram before but was never exposed to their philosophy.

BTW, your link is broken (relative to LTU).

I've updated skydrive with a

I've updated skydrive with a pre-draft of my paper that is basically complete, though I need to do some editing and write a better conclusion; I still have 21 hours anyways.

A couple of ideas

A couple of ideas which spring to mind:

Ted Kaehler's experiment "How do you find the Sine function, if you don't know its name?": http://www.vpri.org/pdf/m2009012_fnd_sine.pdf (found through http://vpri.org/html/writings.php). Uses the unit tests of a piece of code to infer what it does, and whether it's what we're looking for. My gripe with this is the manual weeding-out of impure code. This wouldn't be an issue with a pure language like Haskell, or one which makes all external dependencies explicit like NewSpeak.

Susumu Katayama's Magic Haskeller: http://nautilus.cs.miyazaki-u.ac.jp/~skata/MagicHaskeller.html. It takes the type of a desired function and a set of examples, and enumerates possible functions from simple to complex until it finds one which matches all examples. The only gripe I have with this is a hardcoded timeout; I think it would be more elegant to do the enumeration like Jurgen Schmidhuber's FAST algorithm: http://www.idsia.ch/~juergen/coltspeed/node2.html

You could do this

You could do this efficiently, even at internet scale, if you let the compiler generate the test cases for you: i.e., you write down a function and record what value it produces on a standard set of inputs (that are a deterministic function of its type signature). Seems like this would be a good way to augment a type-based search like Hoogle. In addition to helping find new code, it could automatically suggest optimizations and/or simplifications to existing code.
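
A rough sketch of what that might look like, in Scala for concreteness (the standard input set, the toy library, and the fingerprint function are all hypothetical):

// Hypothetical names throughout; the idea is just "fingerprint by outputs on a
// standard, type-derived input set, then search by fingerprint".
object SemanticFingerprint {
  // For Int => Int functions, a "standard" input set derived from the signature.
  val standardInputs: Seq[Int] = Seq(-2, -1, 0, 1, 2, 10)

  def fingerprint(f: Int => Int): Seq[Int] = standardInputs.map(f)

  def main(args: Array[String]): Unit = {
    val library: Map[String, Int => Int] = Map(
      "double" -> ((x: Int) => x * 2),
      "square" -> ((x: Int) => x * x),
      "negate" -> ((x: Int) => -x)
    )
    // Query: "find me something that behaves like this on the standard inputs."
    val wanted = fingerprint(x => x + x)
    val matches = library.collect { case (name, f) if fingerprint(f) == wanted => name }
    println(matches.mkString(", "))          // double
  }
}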

There are a few semantic and

There are a few semantic and test-based code search engines out there, but... it's not what the programmer usually wants to use in place of code completion to discover abstractions; these are completely different tasks (which I cite anyways).

Rather, they want to browse (perhaps semantically), follow various links from things they think are related, and eventually discover an abstraction that sounds like what they want (they can read documentation to back up their understanding, for sure). That is more natural to us than semantically specifying the functionality we want, which is often quite impossible since we aren't even sure of what we want.

Type-to-type code completers (which sound similar to Magic Haskeller) are much more interesting. I really haven't figured out how to do this yet from a PLD perspective; behavior inference only allows us to infer a method call to use some of its properties (or if another method call depends on it). But it's something I'm definitely thinking about right now.

Done and submitted. I had to

Done and submitted. I had to add a more edgy abstract though.

Accepted for publication in

Accepted for publication in Onward! 2012.

abstraction and library-management

For my part, the reason I tend to focus on powerful abstractive mechanisms rather than library-management is that I've tended to view the accumulation of libraries not as a consequence of abstraction, but as a symptom of underpowered abstraction. Weighing these alternative views against each other is an interesting (and challenging) topic for exploration. These are the sort of really primal factors I'd love to get a theoretical handle on.

My general model of PL use is that one extends outward from a base language by "abstraction" —which I mean in a very general sense— and typically each abstractive step introduces problems that accumulate and eventually impose practical limits on how far away from the base one can get. That practical limit, I call the "radius of abstraction" of the language. Tools to manage libraries, I've seen as a sort of crutch, a kludge to boost the radius of a language whose real trouble is a feeble underlying abstractive mechanism; I would therefore explore the underlying mechanism rather than the library-management tools, reckoning one might get, metaphorically, an improvement of asymptotic class from the underlying mechanism, versus merely a constant factor from library-management.

The question, from my point of view, is whether using a type system as a GPS for software libraries (as it were) is more in the nature of souping up the underlying abstractive mechanism (asymptotic class), or library management (constant factor).

Type-directed inference is definitely language design

I think there is a kind of "width vs. depth" situation here, but I will refrain from using this vocabulary which is already heavily overloaded (in particular, Sean's paper uses it in a way that may or may not be compatible with my present point).

Using types to help library use gives a more than constant factor gain; imagine you have a global codebase of all code written in your language, with libraries for N different problem domains, only a tiny subset of which are exercised in the application you are currently writing (let's say there are K of them). Type-directed inference can help you by reducing the area of concern of the automatic code assistant from the N potential libraries to the K' libraries that have types compatible with the values you are currently manipulating. For domain-specific values, K' will be quite close to K, which will be many orders of magnitude smaller than N. Of course, when manipulating very general, codebase-wide types (such as "lists"), there will be many more matches, so inference will be less helpful¹.

In a nutshell, type-based inference allows you to "scale" all kinds of helper tools by restricting them to your problem domain(s) rather than to any code ever written by the crowd.

¹: Sean has tried to tackle this problem as well, by including a notion of "Influence" in the language design to designate the simplest / most frequent choice on common types and help prioritize suggestions.
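
A toy illustration of that N to K' filtering, in Scala (the operation table and the types are invented; a real assistant would work from the compiler's type information rather than a hand-written table):

object TypeDirectedFilter {
  final case class Op(library: String, name: String, paramType: Class[_])

  // Stand-in for the N-library codebase; only a few operations shown.
  val allOps: Seq[Op] = Seq(
    Op("geometry", "area",   classOf[java.awt.Rectangle]),
    Op("text",     "length", classOf[String]),
    Op("audio",    "play",   classOf[Array[Byte]])
  )

  // Keep only operations whose parameter type is compatible with the value
  // currently under the cursor.
  def relevantFor(value: Any): Seq[Op] =
    allOps.filter(op => op.paramType.isInstance(value))

  def main(args: Array[String]): Unit = {
    val current = "hello"                    // a domain-specific value in scope
    println(relevantFor(current).map(op => s"${op.library}.${op.name}").mkString(", "))
    // text.length -- the geometry and audio libraries never even show up
  }
}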

I haven't gotten this far yet

Right now, inference is more of a way to flatten the space of symbols to choose from: you can choose a symbol S now because all its dependencies are brought in automatically. We could then "hide" the dependencies and rely on the programmer choosing S, so it would look more like what you are talking about, but in the paper the programmer is still exposed to K with only influence support to temper that; they can be oblivious to the dependencies, but the dependencies are not hidden from them. It's an interesting question when we could reasonably hide a trait from being selected directly.
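
Roughly, the flattening could be sketched like this (hypothetical names; the hand-written dependency table stands in for what the trait hierarchy would actually provide):

object FlattenedCompletion {
  // Hypothetical table: symbol -> traits that must be present for it to be well typed.
  val dependsOn: Map[String, Set[String]] = Map(
    "pressed"     -> Set("Widget", "Button"),
    "onMouseMove" -> Set("Widget"),
    "text"        -> Set("Widget", "Label")
  )

  // Selecting a symbol returns the trait extensions the tool would add for you;
  // they remain visible in the program, they just weren't chosen by hand.
  def select(symbol: String, alreadyExtends: Set[String]): Set[String] =
    dependsOn.getOrElse(symbol, Set.empty[String]) -- alreadyExtends

  def main(args: Array[String]): Unit =
    println(select("pressed", alreadyExtends = Set("Widget")))   // Set(Button)
}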

I'm looking at how we can go further. In particular, see my recent post on automatically implementing abstract methods to allow programmers to underspecify their programs.

I'm confused. The proposed

I'm confused. The proposed language includes library management as a language concern rather than as a tool concern. The enhanced GPS in this proposal is more a function of the language; there is no such thing as library management in this language. I would also argue that we don't want libraries; rather, we want just one library, like we have one web or one Wikipedia, that supports multiple content producers strung together by a single index (Wikipedia or Google).

Nice... ;-)

"Ask not what your code can do for your types, but what your types can do for your code!"

I think the idea of code completion depending on the types in the context already makes it worth publishing. That alone could trim the example's 290 options for member methods down to a few under the right circumstances. Nice idea, nice catchy phrase. Kudos.

Hm

I must say that, while I agree that the formulation is catchy and extremely nice, the idea did not seem so novel in itself, because I have already seen it discussed before; see for example Programming as Collaborative Reference from Oleg and Chung-chieh Shan last year, or Conor's Quora answer Why do programming languages use type systems? (Feb. 2011):

Crucially, also, types structure the search for programs in useful ways, provided your editing environment offers you type information and makes it easy to select type-appropriate choices. [...]

This position constitutes a change of viewpoint in the purpose of types. If programs worked just the same with the types rubbed out, then types would represent a form of piety often inadequate with respect to testing. It's when types contribute information to algorithm selection, design statements which program definitions need merely refine, that they constitute a significant win.

I was not surprised by this idea in Sean's article, but I appreciated the realization¹, in a setting (touch-driven programming) that is an excellent excuse for practical innovative UI exploration -- while I'm confident it will in fact be beneficial much more widely.

¹: there is an important difference between talking about something, even in a very precise way, and actually writing a paper about it and publishing an implementation. And it's very good that there is this difference, because otherwise, with all the thought experiments going on here on LtU and elsewhere, we wouldn't have much left to do...

You pointed this out before

You pointed this out before and I wonder how much I was influenced by Conor's writing. I didn't cite them though, since I couldn't find anything good to cite; I'll try harder in the revised version. I haven't really grokked the collaborative reference work very well yet, but it seems related and is worth at least a mention.

The work presented in this paper is fairly orthogonal to touch; I believe the work is necessary to support touch but is useful in a broader context. However, this is what I've implemented so far; I'm dreaming about going back to textual languages, but I'm not quite ready for that yet.

Convergence of ideas

Indeed, Conor has a very strong vision that he often expresses during talks, but the publications themselves are focused on the more technical questions that are the beginning of his research agenda.

I don't know what your influences were, but in any case I think that these ideas of "code inference" are currently in the air, and I am not surprised that several people are working on this more or less simultaneously with no obvious cross-influences. There was also related work at PLDI this year (I don't know much about it though, I'm not the one in China!): "Type-Directed Completion of Partial Expressions" by Perelman, Gulwani, Ball and Grossman.

There is also another brand of research on coercions (e.g., Mike Hicks' work on the Coco project) which is converging towards type-directed program inference. In fact, under the influence of coercion research, I'm considering making a small write-up about that myself. My idea is to test which types have a *unique* inhabitant modulo (an approximation of) observational equivalence, and let the user unambiguously determine a program term from that. Inference propagates type information, so this can be very lightweight, but you can allow the user to annotate with more fine-grained types (e.g., linear arrows instead of usual arrows, or better, non-computational / structure-preserving arrows as currently used in coercion research).
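
As a tiny illustration of the unique-inhabitant idea (in Scala rather than a dependently typed setting, and ignoring divergence and other non-parametric inhabitants):

object UniqueInhabitant {
  // (A, B) => A has exactly one total parametric implementation: return a.
  // A tool could therefore fill a hole of this type without asking.
  def first[A, B](a: A, b: B): A = a

  // (A, A) => A has two, so here the tool would still have to ask.
  def pickLeft[A](x: A, y: A): A  = x
  def pickRight[A](x: A, y: A): A = y

  def main(args: Array[String]): Unit =
    println(first(42, "ignored"))            // 42
}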

PLDI was fun

I'm a big fan of Daniel Perelman's work; I'm working on a project this year (hopefully) that looks more aggressively in a related direction: how we can mine and leverage API selection models for C#. However, I still think that in the future language design could better support these techniques, versus trying to do it after the fact with tooling.

PLDI in China was exhausting; I much prefer to go to other cities for conferences than to have them come to my backyard :)

The idea that part of types'

The idea that part of types' benefit is not in partial correctness but (perhaps even primarily) in building programs is not new. This idea, although it has perhaps only entered academia recently, has been mainstream for at least 10 years. When practical people rave about types, 90% of the time they mention intellisense. The trouble is not in recognizing this on an abstract level, but in building a concrete implementation that works well (which is what your paper is really about, I think; the system described in your paper is not merely an example to illustrate the abstract idea).

has been mainstream for at

has been mainstream for at least 10 years

I can agree largely with this (though I can't speak either way for academia re: the surrounding remark)

Just make it at least 18 years, actually: as I witnessed, both Delphi and VB have had it since around 1994 (VB released in '93, Delphi in '95). Delphi's was somewhat better, because it was in the context of a "better" language (just IMO).

Ah? According to my own

Ah? According to my own research, IntelliSense didn't make it into VB until 1996/1997; how was it supported in VB in 1993? When exactly did code insight come out for Delphi? I'm interested from a historical perspective; from what I can tell, the first instance of code completion actually came out in 1986 (in a little-used language called Alice Pascal).

Sorry. You're correct, my

Sorry. You're correct; my memory failed a bit, even though I have supported the Delphi and C++ compilers since Delphi 1 in 1995. Code completion more likely appeared only with Delphi 3, in 1997; I'm not sure about Delphi 2, a year before.

As for VB, there was limited code completion support in its early versions (VB4 or 5?), which Borland, as a challenger, couldn't ignore.

I confused (chronologically) the inception of Delphi's code completion with its RTTI, present ever since v1 to support the object inspector and the form designer, as well as custom property editors, custom designers, custom IDE "experts" (a.k.a. wizards), and so on (*)

AFAICT, conceptually speaking, what one can still find today in Visual Studio's VSX seems like nothing really new; it had been studied extensively by Borland R&D (for one that I know of) in the mid '90s. Even the idea of extensible modeling support, in Delphi 5 (1998/99), predates what Microsoft has finally gotten somewhat "mature" over the last 2 years (i.e., the UML support, which I personally don't care much about, and the visual DSL modeling tools and the ModelBus, which I find more attractive/interesting. YMMV)

Still, these ideas are 15+ years old anyway.

EDIT
(*) Now that reminds me: when Delphi 3 was released, I made a PoC of a design-time extension where I would intercept all the calls/queries from the object inspector to the RTTI of the components sitting on the form designer and dynamically craft "pseudo properties" (calculated or editable) held on a proxy object impersonating the components. Mostly a hack I never found a real application for (lack of time to dedicate more), but one I found pretty telling about how far ahead Delphi's static types and compiler-generated introspection data were. Another of these hacks was to enumerate ALL of the RTTI of the Delphi IDE itself (since Delphi version N was developed in version N-1 and was always backwards compatible at the binary/PE format level) ... during a programmer design/edit-time session (e.g., for reverse engineering).
Plus, with fairly little coding effort, I always managed to keep those free of General Protection Faults regarding instance lifetimes, even without a GC around (most of the design-time activities being just in the main UI thread/Windows message pump at the time, which did help, I suppose). "Fun" playground memories :)

Didn't know about Alice

Didn't know about Alice Pascal. Was it a visual IDE, with designers, extensibility APIs and such? IMO, Delphi was appealing right from its early days because of its good edit/design-time support for RTTI generated by the compiler, and an otherwise agnostic IDE, discovering static types as the programmer crafts them.

Contrast this with the VB approach to OO/RAD at the time in these respects, which was much more ad hoc and coupled to those so-called "OCX/ActiveX" controls that needed to be developed elsewhere before being integrated into VB ... namely, in C++, with ATL/MFC/COM headers, etc.

A "pleasure" to test and debug for what I can recall...

Alice Pascal was a

Alice Pascal was a syntax-directed language, so the editor was more graphical and didn't support free-form text. It didn't support code completion so much as selection of an identifier via a menu. If you think about it, it's obvious that this kind of idea would appear there first. It probably isn't really what we'd call an IDE today, but more of an integrated language editing/execution experience.

Final version

Posted on the skydrive link. I've been trying to figure out the type semantics for this work, and I have a fairly decent description in the last section of the paper now, even if I still lack a good enough understanding. How many type systems depend on Dijkstra's algorithm anyways?

IDEs used to be better at

IDEs used to be better at this, back before HTML. IDEs such as Think C had terse, but useful, help systems, where you could mouse over a call and get a small popup with the call parameters.

When HTML and browsers came in, almost everyone went to help systems which just throw the user into the middle of some manual page. Often the manual page isn't designed to be read out of sequence. This can be less than helpful.

What does "Influence mitigates breadth by leveraging types as completion selection models" mean?

A "completion selection

A "completion selection model" is a model of what completions the programmer is likely to select. I would have liked to have said "completion probability model" but I'm not computing real probabilities (that add up to one), rather we compute road-like distances. Given a model of what the programmer is likely to select, the menu can emphasize those choices (by burying less likely choices), and hence breadth becomes less of a problem (as long as the model is accurate with respect to what the programmer, which is definitely not garunteed).

Rosy tinted glasses

Replying to the first part of your comment, I think you might be wrong on causation. Libraries have just gotten very large since the days of Think C; HTML documentation was really the only way to keep up with library size. Also, API documentation isn't really sequenced; I'm not sure where you are getting that from.