A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation

The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014

If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.

We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.

We investigate performance by a case study: we implement the program transformation and a supporting plugin in Scala, and improve the performance of a nontrivial program by orders of magnitude.
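To give a flavour of what "derivative" means here, a toy hand-written example -- in Haskell rather than the paper's Scala or Agda, with all the names (Bag, applyChange, total, totalDeriv) mine:

    type Bag       = [Int]
    type BagChange = [Int]                      -- a change: elements to insert

    applyChange :: Bag -> BagChange -> Bag      -- the paper's "oplus": apply a change
    applyChange xs dxs = xs ++ dxs

    total :: Bag -> Int                         -- the original (expensive) program
    total = sum

    totalDeriv :: Bag -> BagChange -> Int       -- maps an input change to an output change
    totalDeriv _xs dxs = sum dxs                -- without re-traversing the old input

    -- correctness: total (applyChange xs dxs) == total xs + totalDeriv xs dxs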

I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value v do not live in some common type diff(T), but rather in a value-dependent type diff(v). Intuitively, the empty list and a non-empty list have fairly different types of possible changes. This makes change-merging and change-producing operations total, and allows giving them a nice operational theory. Good design, through types.
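Squeezed into non-dependent Haskell, the change-structure interface looks roughly like this -- my own approximation, which loses exactly the value-dependence praised above, since Delta is indexed by the type rather than by the value:

    {-# LANGUAGE TypeFamilies #-}

    class ChangeStruct a where
      type Delta a
      oplus  :: a -> Delta a -> a          -- apply a change
      ominus :: a -> a -> Delta a          -- compute the change from old to new
      -- law: old `oplus` (new `ominus` old) == new

    instance ChangeStruct Int where
      type Delta Int = Int                 -- an Int change is just a difference
      oplus          = (+)
      ominus new old = new - old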

(The program transformation seems related to the program-level parametricity transformation. Parametricity abstracts over equality justifications, differentiation over small differences.)

Toy language

There is a Scala DSL built upon these ideas and equipped with well-chosen implementations of primitive types, which is used for the performance evaluation. But the formal system handled for now is very small (simply-typed lambda-calculus without sums); inductive types (datatypes) are not supported yet -- the authors are working on this.

Nice paper. Maybe I didn't

Nice paper. Maybe I didn't read it carefully enough, but is this framework general enough to encompass self-adjusting computation and other similar approaches? The change update only has access to the original input and output and the delta input, whereas the key to incrementalization is usually having access to all the intermediate results of a computation. Perhaps a type like this would work:

IncrementalFunction A B = dA -> (dB, IncrementalFunction A B)

So you start with a function of type A -> B. Then some machinery turns it into a function of type A -> (B, IncrementalFunction A B). In addition to returning the original output, it returns a function that lets you give a delta input to obtain a delta output, together with a new incremental function whose 'origin' is the new input (old input + delta input).
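Concretely, a quick Haskell sketch of what I have in mind (the names and the rebase-via-closure trick are just illustrative, not from the paper):

    newtype IncFun da db = IncFun { step :: da -> (db, IncFun da db) }

    -- Hand-rolled "machinery" for one function: a running total, where an input
    -- change is a list of appended integers and an output change is an Int delta.
    runningTotal :: [Int] -> (Int, IncFun [Int] Int)
    runningTotal xs = (s0, go s0)
      where
        s0 = sum xs
        go s = IncFun $ \dxs ->
          let db = sum dxs               -- output change caused by this input change
          in (db, go (s + db))           -- new incremental function, based at the new input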

Self-maintainable derivatives vs not

Author here, thanks for the feedback. (Why can't one subscribe to posts?)

You're right; your point is addressed in Sec. 4.3, "Self-maintainability", but I can see why you wonder, since we keep talking about not needing inputs. However, if derivatives of intermediate computations ignore their base inputs, you don't need intermediate results. We should probably clarify the discussion.
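Concretely (a toy illustration of mine, not code from the paper), the derivative of a sum ignores its base input and is therefore self-maintainable, while the derivative of a maximum is not:

    -- Changes are, as a simplification, just lists of inserted elements.
    sumDeriv :: [Int] -> [Int] -> Int
    sumDeriv _base dxs = sum dxs         -- self-maintainable: the base input is unused

    maxDeriv :: [Int] -> [Int] -> Int    -- assumes non-empty inputs, for brevity
    maxDeriv base dxs = maximum (base ++ dxs) - maximum base
                                         -- not self-maintainable: needs the old input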

So, I agree this is quite annoying and restrictive, and our approach therefore currently seems closer to adaptive computation (the precursor to SAC); however, remembering intermediate results can be tackled as a separate problem.

The equation you describe doesn't appear in the paper, but I agree it fits (and I found an isomorphic one in my notes). Our plans on how to implement that type are in Sec. 5.2.2; there's been quite some progress, so probably I should stop procrastinating on them.

Bob Atkey on the connection to parametricity

gasche wrote:

(The program transformation seems related to the program-level parametricity transformation. Parametricity abstracts over equality justifications, differentiation over small differences.)

And Bob Atkey answered "yes!" on his blog.