The Next Stage of Staging, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:
This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study’s theoretical significance.
A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of 1ML, perhaps there's a clearer way forward.
The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.
A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014
If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.
We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.
We investigate performance by a case study: we implement the program transformation and a plugin in Scala, and improve the performance of a nontrivial program by orders of magnitude.
I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value v do not live in some common type diff(T), but rather in a value-dependent type diff(v). Intuitively, the empty list and a non-empty list admit quite different kinds of changes. This makes change-merging and change-producing operations total, and allows giving them a nice operational theory. Good design, through types.
(The program transformation seems related to the program-level parametricity transformation: parametricity abstracts over equality justifications, differentiation over small differences.)
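To make the correctness condition of differentiation concrete, here is a toy Python analog (not the paper's Scala/Agda artifact; `total`, `apply_change`, and `d_total` are illustrative names of my own): a bag-sum program, a change type of insertions and deletions, and a derivative satisfying f(x ⊕ dx) = f(x) ⊕ f'(x)(dx).

```python
# A toy incrementalization in the spirit of static differentiation.
# A "change" to a bag is a pair (insertions, deletions) -- illustrative only.

def total(bag):
    # The base program: an aggregate over the whole input.
    return sum(bag)

def apply_change(bag, change):
    # Apply a change to produce the updated input (x ⊕ dx).
    ins, dels = change
    out = list(bag) + list(ins)
    for d in dels:
        out.remove(d)
    return out

def d_total(bag, change):
    # The derivative of `total`: maps an input change directly to an
    # output change, without re-summing the whole bag.
    ins, dels = change
    return sum(ins) - sum(dels)

xs = [1, 2, 3]
dx = ([10, 4], [2])
# Correctness of the derivative:
#   total(apply_change(xs, dx)) == total(xs) + d_total(xs, dx)
assert total(apply_change(xs, dx)) == total(xs) + d_total(xs, dx)
```

Note that `d_total` runs in time proportional to the size of the change, not the size of the input, which is the whole point of incrementalization.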
Presented annually to the author of the outstanding doctoral dissertation in the area of Programming Languages. The award includes a prize of $1,000. The winner can choose to receive the award at ICFP, OOPSLA, POPL, or PLDI.
I guess it is fairly obvious why professors should nominate their students (the deadline is January 4th, 2015). Newly minted PhDs should, for similar reasons, make sure their professors are reminded of these reasons. I can tell you that the competition is going to be tough this year; but hey, you didn't go into programming language theory thinking it was going to be easy, did you?
Zélus: A Synchronous Language with ODEs
Timothy Bourke, Marc Pouzet
Zélus is a new programming
language for modeling systems that mix discrete logical time and
continuous time behaviors. From a user's perspective, its main
originality is to extend an existing Lustre-like
synchronous language with Ordinary Differential Equations (ODEs). The
extension is conservative: any synchronous program expressed as
data-flow equations and hierarchical automata can be composed
arbitrarily with ODEs in the same source code.
A dedicated type system and causality analysis ensure that all
discrete changes are aligned with zero-crossing events so that no side
effects or discontinuities occur during integration. Programs are
statically scheduled and translated into sequential code that, by
construction, runs in bounded time and space. Compilation is effected
by source-to-source translation into a small synchronous subset which
is processed by a standard synchronous compiler architecture. The
resultant code is paired with an off-the-shelf numeric solver.
We show that it is possible to build a modeler for explicit hybrid
systems à la Simulink/Stateflow on top of
an existing synchronous language, using it both as a semantic basis
and as a target for code generation.
Synchronous programming languages (à la Lucid
Synchrone) are language designs for reactive systems with discrete
time. Zélus extends them gracefully to hybrid discrete/continuous
systems, to interact with the physical world, or simulate it -- while
preserving their strong semantic qualities.
The paper is short (6 pages) and centered around examples rather than
the theory -- I enjoyed it. Not being familiar with the domain, I was
unsure what the "zero-crossings" mentioned in the introduction are,
but there is a good explanation further down in the paper:
The standard way to detect events in a numeric solver is
via zero-crossings where a solver monitors expressions for changes in
sign and then, if they are detected, searches for a more precise
instant of crossing.
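That search can be sketched in a few lines of Python (a minimal illustration of the general technique; the function name is mine, and real solvers use more refined root-finding than plain bisection):

```python
import math

def find_zero_crossing(f, t0, t1, tol=1e-9):
    """Locate a sign change of f on [t0, t1], as a numeric solver does
    when monitoring a zero-crossing expression over an integration step:
    first detect the change of sign, then bisect to a precise instant."""
    f0, f1 = f(t0), f(t1)
    if f0 * f1 > 0:
        return None  # no sign change detected on this step
    while t1 - t0 > tol:
        mid = 0.5 * (t0 + t1)
        if f0 * f(mid) <= 0:
            t1 = mid            # crossing lies in the left half
        else:
            t0, f0 = mid, f(mid)  # crossing lies in the right half
    return 0.5 * (t0 + t1)

# cos changes sign once on [1, 2], at pi/2
t = find_zero_crossing(math.cos, 1.0, 2.0)
```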
The Zélus website has a 'publications' page with more advanced material, and an 'examples' page.
At the same time, Flow is a gradual type system. Any parts of your program that are dynamic in nature can easily bypass the type checker, so you can mix statically typed code with dynamic code.
Flow also supports a highly expressive type language. Flow types can express much more fine-grained distinctions than traditional type systems. For example, Flow helps you catch errors involving null, unlike most type systems.
Read more here.
Here's the announcement from Facebook.
Post by Joe Armstrong of Erlang fame. Leader:
Why do we need modules at all? This is a brain-dump-stream-of-consciousness-thing. I've been thinking about this for a while. I'm proposing a slightly different way of programming here. The basic idea is:
- do away with modules
- all functions have unique distinct names
- all functions have (lots of) meta data
- all functions go into a global (searchable) Key-value database
- we need letrec
- contribution to open source can be as simple as contributing a single function
- there are no "open source projects" - only "the open source Key-Value database of all functions"
- Content is peer reviewed
Why does Erlang have modules? There's a good and a bad side to modules. Good: modules provide a unit of compilation, a unit of code distribution, a unit of code replacement. Bad: it's very difficult to decide which module to put an individual function in, and modules break encapsulation (see later).
In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether’s theorem, a consequence of a system’s invariance under time-shifting.
In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
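The physics half of the correspondence can be stated compactly. The following is the textbook instance (not the paper's Fω development): if a Lagrangian has no explicit time dependence, the Euler–Lagrange equation makes the energy conserved.

```latex
% Time-shift invariance: L = L(q, \dot q), i.e. \partial L / \partial t = 0.
% Euler--Lagrange: \frac{d}{dt}\frac{\partial L}{\partial \dot q} = \frac{\partial L}{\partial q}.
E \;=\; \dot q\,\frac{\partial L}{\partial \dot q} - L,
\qquad
\frac{dE}{dt}
  \;=\; \ddot q\,\frac{\partial L}{\partial \dot q}
      + \dot q\,\frac{d}{dt}\frac{\partial L}{\partial \dot q}
      - \frac{\partial L}{\partial q}\,\dot q
      - \frac{\partial L}{\partial \dot q}\,\ddot q
  \;=\; \dot q\!\left(\frac{d}{dt}\frac{\partial L}{\partial \dot q}
      - \frac{\partial L}{\partial q}\right)
  \;=\; 0.
```

Atkey's contribution is to obtain the invariance hypothesis of Noether's theorem from the relational parametricity of the Lagrangian's polymorphic type, so conserved quantities fall out of types alone.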
In case this one went under the radar, at POPL'12, Martín Escardó gave a tutorial on seemingly impossible functional programs:
Programming language semantics is typically applied to
prove compiler correctness and allow (manual or automatic) program
verification. Certain kinds of semantics can also be applied to
discover programs that one wouldn't have otherwise thought of. This is
the case, in particular, for semantics that incorporate topological
ingredients (limits, continuity, openness, compactness). For example,
it turns out that some function types (X -> Y) with X infinite (but
compact) do have decidable equality, contradicting perhaps popular
belief, but certainly not (higher-type) computability theory. More
generally, one can often check infinitely many cases in finite time.
I will show you such programs, run them fast in surprising instances,
and introduce the theory behind their derivation and working. In
particular, I will study a single (very high type) program that (i)
optimally plays sequential games of unbounded length, (ii) implements
the Tychonoff Theorem from topology (and builds finite-time search
functions for infinite sets), (iii) realizes the double-negation shift
from proof theory (and allows us to extract programs from classical
proofs that use the axiom of countable choice). There will be several
examples in the languages Haskell and Agda.
A shorter version (coded in Haskell) appears in Andrej Bauer's blog.
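The core of those programs is an exhaustive search over the Cantor space of infinite bit sequences. Here is a small Python transcription of the idea (Escardó's originals are in Haskell and rely on laziness; closures queried on demand stand in for it here, and the names follow his presentation, not any official API):

```python
# Search over Cantor space: an infinite bit sequence is a function
# from natural numbers to bits {0, 1}.

def prepend(x, a):
    # The sequence x, a(0), a(1), ...
    return lambda i: x if i == 0 else a(i - 1)

def find(p):
    # Return a sequence satisfying p, if any total predicate p holds of one.
    # The tail is only computed when indexed, mimicking Haskell's laziness.
    def ext(x):
        return prepend(x, lambda i: find(lambda a: p(prepend(x, a)))(i))
    return ext(0) if p(ext(0)) else ext(1)

def forsome(p):
    # p holds of some sequence iff it holds of the witness find(p) returns.
    return p(find(p))

def forevery(p):
    # Quantify over ALL infinite sequences -- in finite time, because any
    # total p can only inspect finitely many bits of its argument.
    return not forsome(lambda a: not p(a))

def equal(f, g):
    # Decidable equality at type (Cantor -> y), "contradicting perhaps
    # popular belief, but certainly not (higher-type) computability theory".
    return forevery(lambda a: f(a) == g(a))
```

For example, `equal(lambda a: a(0) + a(1), lambda a: a(1) + a(0))` checks infinitely many cases yet terminates with `True`, while `equal(lambda a: a(0), lambda a: a(1))` terminates with `False`.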
Gordon Plotkin is renowned for his groundbreaking contributions to programming language semantics, which have helped to shape the landscape of theoretical computer science, and which have impacted upon the design of programming languages and their verification technologies. The influence of his pioneering work on logical frameworks pervades modern proof technologies. In addition, he has made outstanding contributions in machine learning, automated theorem proving, and computer-assisted reasoning. He is still active in research at the topmost level, with his current activities placing him at the forefront of fields as diverse as programming semantics, applied logic, and systems biology.
Well deserved, of course. Congrats!
Announcing the 2015 edition of the OBT workshop, to be co-located with POPL 2015, in Mumbai, India. Two-page paper submissions are due November 7, 2014.
From the web page (http://www.cs.rice.edu/~sc40/obt15/):
Programming language researchers have the principles, tools, algorithms and abstractions to solve all kinds of problems, in all areas of computer science. However, identifying and evaluating new problems, particularly those that lie outside the typical core PL problems we all know and love, can be a significant challenge. This workshop's goal is to identify and discuss problems that do not often show up in our top conferences, but where programming language research can make a substantial impact. We hope fora like this will increase the diversity of problems that are studied by PL researchers and thus increase our community's impact on the world.
While many workshops associated with POPL have become more like mini-conferences themselves, this is an anti-goal for OBT. The workshop will be informal and structured to encourage discussion. We are at least as interested in problems as in solutions.