How OCaml type checker works -- or what polymorphism and garbage collection have in common

There is more to Hindley-Milner type inference than Algorithm W. In 1988, Didier Rémy was looking to speed up type inference in Caml and discovered an elegant method of type generalization. Not only is it fast, avoiding the scan of the type environment; it also smoothly extends to catching locally-declared types about to escape, to type-checking universals and existentials, and to implementing MLF.

Alas, both the algorithm and its implementation in the OCaml type checker are little known and little documented. This page is to explain and popularize Rémy's algorithm, and to decipher a part of the OCaml type checker. The page also aims to preserve the history of Rémy's algorithm.

The attraction of the algorithm is its insight into type generalization as dependency tracking -- the same sort of tracking used in automated memory management such as regions and generational garbage collection. Generalization can be viewed as finding dominators in the type-annotated abstract syntax tree with edges for shared types. Fluet and Morrisett's type system for regions and MetaOCaml environment classifiers use the generalization of a type variable as a criterion of region containment. Uncannily, Rémy's algorithm views the region containment as a test if a type variable is generalizable.

As usual with Oleg, there's a lot going on here. Personally, I see parallels with "lambda with letrec" and "call-by-push-value," although making the connection with the latter takes some squinting through Levy's work beyond his CBPV thesis. Study this to understand OCaml type inference and/or MLF, or for insights into region typing, or, as the title suggests, for suggestive analogies between polymorphism and garbage collection.
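To give a flavor of the levels idea, here is a minimal sketch in OCaml (illustrative type and function names only, not the actual OCaml type-checker code): every unbound type variable records the level of the let that created it, unification lowers levels as types become shared, and generalization quantifies exactly those variables whose level is deeper than the current one -- no scan of the type environment required.

(* A minimal sketch of level-based generalization, not the OCaml type checker itself. *)
type typ =
  | TVar of tv ref
  | TArrow of typ * typ
and tv =
  | Unbound of string * int     (* variable name and the level of the let that created it *)
  | Link of typ

let current_level = ref 1       (* incremented on entering the rhs of a let, decremented after *)
let generic_level = max_int

let counter = ref 0
let new_var () =                (* fresh variables are born at the current level *)
  incr counter;
  TVar (ref (Unbound ("t" ^ string_of_int !counter, !current_level)))

(* When unification binds a variable living at [level] to a type, every variable
   occurring in that type is pulled down to [level]: the dependency tracking the
   article likens to region/GC bookkeeping. *)
let rec update_level level = function
  | TVar ({ contents = Unbound (n, l) } as tv) ->
      if l > level then tv := Unbound (n, level)
  | TVar { contents = Link t } -> update_level level t
  | TArrow (t1, t2) -> update_level level t1; update_level level t2

(* Generalization quantifies exactly the variables whose level is deeper than the
   level of the let being generalized; nothing else needs to be examined. *)
let rec generalize = function
  | TVar ({ contents = Unbound (n, l) } as tv) when l > !current_level ->
      tv := Unbound (n, generic_level)
  | TVar { contents = Link t } -> generalize t
  | TArrow (t1, t2) -> generalize t1; generalize t2
  | TVar { contents = Unbound _ } -> ()   (* created by an enclosing let: stays monomorphic *)

The update_level step is where the GC analogy bites: binding a variable created by an outer let to a type containing younger variables demotes those variables to the outer level, roughly as a generational collector must track old-to-young pointers, and it is precisely the demoted variables that escape generalization.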

Socio-PLT: Principles for Programming Language Adoption

In their survey paper and on their website, Leo Meyerovich and Ari Rabkin take a Jared Diamond approach to explaining programming language adoption.

Why do some programming languages fail and others succeed? What does the answer tell us about programming language design, implementation, and principles? To help answer these and other questions, we argue for examining the sociological groundings of programming language theory: socio-PLT.
Researchers in the social sciences have studied adoption in many contexts. We show how their findings are applicable to programming language design.

There are also videos of talks available, from SPLASH 2012 and from Google Tech Talks.

See also previous discussions.

Simple Generators v. Lazy Evaluation

Oleg Kiselyov, Simon Peyton Jones and Amr Sabry: Simple Generators:

Incremental stream processing, pervasive in practice, makes the best case for lazy evaluation. Lazy evaluation promotes modularity, letting us glue together separately developed stream producers, consumers and transformers. Lazy list processing has become a cardinal feature of Haskell. It also brings out the worst in lazy evaluation: its incompatibility with effects and its unpredictable and often extraordinary use of memory. Much of the Haskell programming lore is about ways to get around lazy evaluation.

We propose a programming style for incremental stream processing based on typed simple generators. It promotes modularity and decoupling of producers and consumers just like lazy evaluation. Simple generators, however, expose the implicit suspension and resumption inherent in lazy evaluation as computational effects, and hence are robust in the presence of other effects. Simple generators let us accurately reason about memory consumption and latency. The remarkable implementation simplicity and efficiency of simple generators strongly motivates investigating and pushing the limits of their expressiveness.

To substantiate our claims we give a new solution to the notorious pretty-printing problem. Like earlier solutions, it is linear, backtracking-free and with bounded latency. It is also modular, structured as a cascade of separately developed stream transducers, which makes it simpler to write and test, and to precisely analyze latency, time and space consumption. It is compatible with effects, including IO, letting us read the source document from a file and format it as we read.

This is fascinating work that shows how to gain the benefits of lazy evaluation - decoupling of producers, transformers, and consumers of data, and producing only as much data as needed - in a strict, effectful setting that works well with resources that need to be disposed of once computation is done, e.g. file handles.

The basic idea is that of Common Lisp signal handling: use a hierarchical, dynamically-scoped chain of handler procedures, which get called - on the stack, without unwinding it - to parameterize code. In this case, the producer code (which e.g. reads a file character by character) is the parameterized code: every time data (a character) is produced, it calls the dynamically innermost handler procedure with the data (it yields the data to the handler). This handler is the data consumer (it could e.g. print the received character to the console). Through dynamic scoping, each handler may also have a super-handler, to which it may yield data. In this way, data flows containing multiple transformers can be composed.
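A rough OCaml sketch may make the dynamic-handler idea concrete (this is not the paper's code; yield, with_handler, read_chars and the file name below are illustrative):

(* The dynamically innermost consumer of produced items. *)
type handler = char -> unit

let current : handler ref = ref (fun _ -> failwith "no handler installed")

(* A producer calls yield for every item it produces. *)
let yield (c : char) : unit = !current c

(* Install [h] as the innermost handler for the extent of [producer],
   restoring the previous handler afterwards, even on exceptions. *)
let with_handler (h : handler) (producer : unit -> unit) : unit =
  let outer = !current in
  current := h;
  Fun.protect ~finally:(fun () -> current := outer) producer

(* A producer: read a file character by character, yielding each one;
   the handle is closed as soon as the producer is done. *)
let read_chars path () =
  let ic = open_in path in
  Fun.protect ~finally:(fun () -> close_in ic) (fun () ->
    try while true do yield (input_char ic) done
    with End_of_file -> ())

(* A transformer: upper-case each character and re-yield it to the handler
   that was innermost when the transformer was installed. *)
let upcase outer c = outer (Char.uppercase_ascii c)

(* Compose producer, transformer and consumer: read a file, upper-case its
   characters, print them as we go. *)
let example () =
  with_handler print_char (fun () ->
    with_handler (upcase !current) (read_chars "input.txt"))

Note that reading and printing are interleaved: each character flows through the whole chain before the next one is read, and the file handle is released as soon as the producer finishes, which is the point about resources made above.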

I especially like the OCaml version of the code, which is just a page of code implementing a dynamically-scoped chain of handlers. After that we can already write map and fold in this framework (fold using a loop and a state cell, notably). There's more sample code.
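Continuing the sketch above (same caveats: illustrative names, not the paper's code), map is just another transformer, and fold indeed needs no more than a state cell, with the producer's own loop driving the iteration:

(* A generic transformer: apply f to each item and re-yield the result,
   e.g. with_handler (map Char.uppercase_ascii !current) producer. *)
let map f outer = fun c -> outer (f c)

(* Fold over whatever a producer yields: the handler updates a state cell,
   while the producer's loop does the iteration. *)
let fold f init producer =
  let acc = ref init in
  with_handler (fun c -> acc := f !acc c) producer;
  !acc

(* For example, count the characters of a file without holding it in memory. *)
let length path = fold (fun n _ -> n + 1) 0 (read_chars path)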

This also ties in with mainstream yield.

Photoshop 1.0 Source Code

Some people are amazed that it's in Pascal... HN discussion is here.

Cognitive Architectures: A Way Forward for the Psychology of Programming

Michael Hansen presents the ACT-R cognitive architecture, a simulation framework for psychological models, showing how it could be used to measure the impact of various programming paradigms.

Video

This describes an interesting attempt to formalize the comparison of programming languages from the point of view of cognitive psychology. There are also some interesting references in the presentation.

Ada 2012 Language Standard Approved by ISO

December 18, 2012 - The Ada Resource Association (ARA) and Ada-Europe today announced the approval and publication of the latest version of the Ada programming language by the Geneva-based International Organization for Standardization (ISO). The language revision, known as Ada 2012, was conducted under the auspices of ISO/IEC JTC1/SC22/WG9 by the Ada Rapporteur Group (ARG) subunit of WG9, with sponsorship in part from the ARA and Ada-Europe. The formal approval of the standard was issued on November 20 by ISO/IEC JTC 1, and the standard was published on December 15.

I am glad to say that the major area of improvement is the ability to specify contracts, something I was urging the Ada community to do back in 2002. The new features include the ability to specify preconditions and postconditions for subprograms, and invariants for private types. Another important area that received attention in this iteration is multi-core programming.

So if you are serious about mission critical software, head on to the web site and see what you have been missing.

What will programming look like in 2020?

Things are too quiet lately, so maybe it's time to start a fun thread. I used this question as a lead-in to a presentation on live programming I recently gave at a working group, but I thought it would be fun to gather a bunch of predictions if anyone is willing.

What will programming look like in 2020? Keep in mind that programming in 2012 mostly resembles programming in 2004, so could we even expect any significant changes 8 years from now in the programmer experience? Consider the entire programming stack of language, environment, process, libraries, search technology, and so on.

Eighth draft of Scheme R7RS-small published

The eighth draft of the Scheme R7RS-small standard has just been published. This draft contains editorial corrections to the seventh draft, as well as changing the definition of eqv? on inexact numbers to the style of R6RS operational equivalence.

This draft will be used for the ratification vote by the Scheme community, though we will continue to correct mistakes before the final draft. Details on the ratification vote will be published at scheme-reports.org and on the scheme-reports@scheme-reports.org mailing list.

A module system for the C family

Doug Gregor of Apple presented a talk on "A module system for the C family" at the 2012 LLVM Developers' Meeting.

The C preprocessor has long been a source of problems for programmers and tools alike. Programmers must contend with widespread macro pollution and include-ordering problems due to ill-behaved headers. Developers habitually employ various preprocessor workarounds, such as LONG_MACRO_PREFIXES, include guards, and the occasional #undef of a library macro to mitigate these problems. Tools, on the other hand, must cope with the inherent scalability problems associated with parsing the same headers repeatedly, because each different preprocessing context could affect how a header is interpreted---even though the programmer rarely wants it. Modules seeks to solve this problem by isolating the interface of a particular library and compiling it (once) into an efficient, serialized representation that can be efficiently imported whenever that library is used, improving both the programmer's experience and the scalability of the compilation process.

Slides [PDF] and Video [MP4]

Slides and videos from other presentations from the meeting are also available.