An Accidental Simula User, Luca Cardelli

An Accidental Simula User (pdf slides) (abstract)

How o-o snuck behind a functional programmer and hit him in the head.


Cool

Wonderful stuff! Gives a history of the work leading up to "A Theory of Objects" and beyond, with a strong focus on the (elusiveness of the) relationship between objects and LC. Lots of interesting details. Here's a nice bit of out-of-context bait: "objects are useful [...] types are not useful".

I'd like to see a piece like this that also talks about category theory.

Very Good

It really is an excellent history and, being a personal history, reads like one, including the "too many/too much about types." In fairness, when anyone is exploring the boundary like that, they can be forgiven for feeling like giving up on half of the endeavor (and in that respect, I'm sure that I owe Erik Meijer an apology for getting on his case unduly for his "I'm a recovering typoholic" comments here on LtU).

But I think that head-pounding had to happen to arrive at "A Theory of Objects," which informs "A Nominal Theory of Objects with Dependent Types," aka "the vObj paper," which provides the formal/theoretical foundation underlying Scala. Likewise, I doubt we'd have OCaml without "A Theory of Objects."

Finally, putting subtyping on a formal basis suggests that Darwen and Date are on the right track with "Databases, Types, and the Relational Model," reinforcing that there is no impedance mismatch between the subtyping relation and the relational algebra, but only between "objects" (as generally implemented) and SQL. Unfortunately, I don't have my copy of "Databases, Types, and the Relational Model" in front of me, and so cannot confirm that they make reference to "A Theory of Objects."

In other words, the field's intellectual debt to Dr. Cardelli is enormous and perhaps mostly outstanding.

Types & head-pounding

But I think that head-pounding had to happen

Reminds me of the slide from Simon's recent "A Taste of Haskell":

"Type classes over time: Incomprehension -> Wild enthusiasm -> Despair -> Hack, hack, hack -> (10 years later) Hey, what's the big deal?"

scala type inference

It is interesting that you mention Scala in the context of type theory. Perhaps you can clear something up for me: Scala requires more type annotations than Haskell. Is that because subtyping itself is an obstacle to type inference, or is it just a design choice the Scala designers made?

Good point

It is interesting, isn't it? I'm reminded also of Tim Sweeney's point in his POPL presentation that type inference doesn't scale; in a clarifying post here, he indicated that this meant you might have to declare the argument and return types of functions. Felix also requires the declaration of argument types. I suspect, but haven't done any investigation to confirm, that it has to do with covariance and contravariance issues. In general, I think that Scala's type system is even more expressive than Haskell's (at least without extensions), so this neither surprises nor particularly bothers me.

Function type annotations

Function type annotations seem to be a pretty common price for adding more powerful features to type systems. E.g., GHC will happily use arbitrary-rank function types, but doesn't try to infer them. The type system MLF extends Hindley-Milner to type anything expressible in System F, with the guarantee that inference succeeds on a typeable program as long as you annotate all function arguments. I don't know of any work connecting required annotations to subtyping in general, but MLF does use something akin to subtyping to provide principal types. On the other hand, inference for System F without subtyping is already undecidable without annotations.
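For concreteness, here's a minimal sketch of the GHC point (the function name is mine, not from any library). The signature below is rank-2: GHC accepts it as written, but if you delete the annotation, inference falls back to Hindley-Milner and rejects the program.

    {-# LANGUAGE RankNTypes #-}

    -- A rank-2 type: the argument f must itself be polymorphic.
    -- GHC will check this signature, but never infer it.
    applyToBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
    applyToBoth f (n, b) = (f n, f b)

    main :: IO ()
    main = print (applyToBoth id (3, True))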

Oftentimes, you WANT to annotate function types

After all, the signature of a function specifies the interface to that function, and as such is part of the problem specification (and is best stated explicitly by the programmer) rather than an implementation detail which is best determined by the tool.

Say I have a function which computes the mean of an array of numbers; and I write code which passes it an array of purchase orders. 'Tis better if I'm told by the compiler that an array of purchase orders is invalid input, than if I am told unification failed because addition and division on purchase orders is undefined. :) Type inference (and/or dynamic typing) is often a good thing in my experience if it avoids numerous redundant type declarations of a function's local variables, or a record's private attributes--no more "Foo myFoo = new Foo" and such. But type annotations are often a good thing too, especially at boundaries between subsystems.
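To make that concrete, here's a small Haskell sketch (PurchaseOrder is a made-up type):

    -- A hypothetical record type with no numeric operations defined.
    data PurchaseOrder = PurchaseOrder { poAmount :: Double }

    -- With an explicit signature, the interface is part of the
    -- specification: passing [PurchaseOrder] fails at the call site
    -- with a direct type mismatch.
    mean :: [Double] -> Double
    mean xs = sum xs / fromIntegral (length xs)

    -- Without the signature, GHC infers the more general
    --   mean :: Fractional a => [a] -> a
    -- and the complaint surfaces instead as a missing Fractional
    -- instance for PurchaseOrder, one step removed from the intent.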

unification failed because

unification failed because addition and division on purchase orders is undefined


Annotations may be even more important if these operations are defined...

If we had IDEs, one of the

If we had IDEs, one of the first things I'd want is an option to show inferred function and/or parameter types - and means of putting them into documentation without adding them to the source. Annotations're additional constraints, and I don't necessarily intend that - they can be a real PITA if you're transforming a pile of code, forcing you to do more work to correct them.

Being told that unification failed because "that type isn't a number" seems reasonable to me? It tells you not only that the type in question is invalid input, but what kind of input is valid.

You don't get told

"the type is not a number". There are no annotations claiming that the type is a number. Type inference worries about matching types to operations (functions), and will complain if the body of a function contains operations which are not defined over that function's arguments. In this case, calculation of the mean requires addition and division. As Ehud points out, should some other type have a "+" and "/" operator defined, typechecking will succeed, but possibly with undesirable results. (Does it make sense to compute the mean of an arbitrary algebraic field which happens to contain some subset of the integers as members?)

In this case, if the typechecker tells you that the type in question doesn't support + and /, it's not hard to figure out what is going on. Many type unification errors aren't as nice to decipher, however.

(A similar but nastier problem occurs with dynamically-typed languages--the program will happily run until an invalid operation is actually encountered, at which point you get #doesNotUnderstand or some equivalent. This might happen after the system is deployed; even if it happens during development, it's miles away from where the actual bug occurred.)

Taken from GHCI just

Taken from GHCI just now:

*Main> "foo" + "bar"

<interactive>:1:0:
    No instance for (Num [Char])
      arising from use of `+' at <interactive>:1:0-12
    Possible fix: add an instance declaration for (Num [Char])
    In the expression: "foo" + "bar"
    In the definition of `it': it = "foo" + "bar"

That's Haskell-speak for "[Char] isn't a number". Other languages' mileage might vary, but it's certainly possible to do things this way. Having arbitrary fields using + and / would require rewriting the Prelude, and might carry similar risks - but then you can always supply an annotation.

Incidentally, there's a trick you can use in place of a complex annotation if you're going to need it a lot - you can apply a function (\x -> x :: SpecificType), which can be bound to a nicer name. Given that it's just id with a more specific type, most compilers will inline it or otherwise optimise it out.
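A tiny sketch of that trick (asInt is an invented name):

    -- id restricted to one type; applying it is a lightweight
    -- annotation, and GHC can optimise the call away.
    asInt :: Int -> Int
    asInt = id

    main :: IO ()
    main = print (asInt (read "42"))
    -- print (read "42") alone is ambiguous; asInt pins the type.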

The second thing

Annotations're additional constraints, and I don't necessarily intend that - they can be a real PITA if you're transforming a pile of code, forcing you to do more work to correct them.

Presumably then the second thing you'd want out of an IDE would be for it to automatically weaken overly strong type annotations, if it could do so in a way that makes incorrectly-typed code safe. That would seem to give you the best of both worlds, with type constraints as strong as you wish, and no need to ever manually enter or modify them. I wouldn't be surprised to see such technology available for Scala within the next 12 to 18 months.

I'd want control over the

I'd want control over where it does that, because annotations can also be there because of invariants that aren't structurally visible in the type system. But yeah, it's a nice thing to have.

Unfortunately, I don't work in Scala! This may be part of the issue - tools like Haddock often only generate docs for functions with annotations, for example.

Warning: Shameless plugs

Warning: Shameless plugs ahead.

If we had IDEs, one of the first things I'd want is an option to show inferred function and/or parameter types

The latest version of Cat ( http://code.google.com/p/cat-language/downloads/list ) has a code viewer command (#v) which is not an editor, but does give syntax coloring and displays inferred types (inferred types are a different color than explicit type annotations). You can see a screen shot of the code viewer at http://cdiggins.com/wp-content/uploads/2007/07/code_viewer.jpg but it unfortunately doesn't show an inferred type.

and means of putting them into documentation without adding them to the source.

This functionality is also available in the latest Cat interpreter. Just hover your pointer over any hyper-linked definition. Or use #h from the interpreter.

Annotations're additional constraints, and I don't necessarily intend that - they can be a real PITA if you're transforming a pile of code, forcing you to do more work to correct them.

Agreed. At the same time, they are a useful way of assuring that changes to other parts of a program don't change the overall intent. It depends on what you're doing, I suppose. In general, though, I don't think a language should force type annotations, but rather let the programmer make up their own mind whether to use them or not.

Me too

Purely a me too post. It is a nice set of slides.

Form Foils Function

Oh dear, 43 pages of Comic Sans!

Wow

I was somewhat fearful I was the only one desperately annoyed by the font. Of course, that was why I voted against "Virtual Light" for the Hugo...

Don't fight it.

I guess you guys weren't big fans of SPJ's 110-slide Tackling the Awkward Squad presentation. Comic Sans is apparently the official font of MSR PL researchers, which makes them well-positioned for the future of typography: according to bancomicsans.com, "Comic Sans usage has steadily increased since 1995 at a geometric rate and is well on its way to becoming the most universally used typeface surpassing even Helvetica and Times by approximately 2018. [...] the only typeface anyone will use by the year 2030 will indeed be Comic Sans."

To quote Luca...

..yucc gross. :)

Fontastic

Luca's a fine illustrator and a design junkie, so font weirdness lies close to his path. E.g., the slides from his Quest talk (reproduced on slide 20) are in Dijkstra's handwriting, a font Luca made at BTL in the '80s.

Great deck not only because....

informative, but because he could laugh at himself. A disappearing, but always ingratiating, trait in a presenter.

Link rot

The talk's moved here.