Semantics

Running Probabilistic Programs Backwards

I saw this work presented at ESOP 2015 by Neil Toronto, and the talk was excellent (slides).

Running Probabilistic Programs Backwards
Neil Toronto, Jay McCarthy, David Van Horn
2015

Many probabilistic programming languages allow programs to be run under constraints in order to carry out Bayesian inference. Running programs under constraints could enable other uses such as rare event simulation and probabilistic verification---except that all such probabilistic languages are necessarily limited because they are defined or implemented in terms of an impoverished theory of probability. Measure-theoretic probability provides a more general foundation, but its generality makes finding computational content difficult.

We develop a measure-theoretic semantics for a first-order probabilistic language with recursion, which interprets programs as functions that compute preimages. Preimage functions are generally uncomputable, so we derive an abstract semantics. We implement the abstract semantics and use the implementation to carry out Bayesian inference, stochastic ray tracing (a rare event simulation), and probabilistic verification of floating-point error bounds.

(also on SciRate)

The introduction sells the practical side of the work a bit better than the abstract.

Stochastic ray tracing [30] is one such rare-event simulation task. As illustrated in Fig. 1, to carry out stochastic ray tracing, a probabilistic program simulates a light source emitting a single photon in a random direction, which is reflected or absorbed when it hits a wall. The program outputs the photon's path, which is constrained to pass through an aperture. Millions of paths that meet the constraint are sampled, then projected onto a simulated sensor array.

The program’s main loop is a recursive function with two arguments: path, the photon’s path so far as a list of points, and dir, the photon’s current direction.

simulate-photon path dir :=
  case (find-hit (fst path) dir) of
    absorb pt → (pt, path)
    reflect pt norm → simulate-photon (pt, path) (random-half-dir norm)

Running simulate-photon (pt, ()) dir, where pt is the light source’s location and dir is a random emission direction, generates a photon path. The fst of the path (the last collision point) is constrained to be in the aperture. The remainder of the program is simple vector math that computes ray-plane intersections.
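
An aside from me rather than the paper: here is a rough Haskell transliteration for readers who want something to poke at in GHCi. Vec, Hit, findHit and randomHalfDir are hypothetical stand-ins for the vector math the paper elides, and randomness is threaded through IO rather than drawn from a measurable source of random values.

  import System.Random (randomRIO)

  type Vec = (Double, Double, Double)

  data Hit = Absorb Vec       -- photon absorbed at this point
           | Reflect Vec Vec  -- reflected at this point, with surface normal

  dot :: Vec -> Vec -> Double
  dot (x1, y1, z1) (x2, y2, z2) = x1*x2 + y1*y2 + z1*z2

  -- Ray-scene intersection: the "simple vector math that computes
  -- ray-plane intersections", elided here.
  findHit :: Vec -> Vec -> Hit
  findHit _pt _dir = undefined

  -- Rejection-sample a direction in the half-space of the normal.
  randomHalfDir :: Vec -> IO Vec
  randomHalfDir norm = do
    v <- (,,) <$> randomRIO (-1, 1) <*> randomRIO (-1, 1) <*> randomRIO (-1, 1)
    if dot v norm > 0 then return v else randomHalfDir norm

  simulatePhoton :: [Vec] -> Vec -> IO [Vec]
  simulatePhoton path dir =
    case findHit (head path) dir of
      Absorb pt       -> return (pt : path)
      Reflect pt norm -> simulatePhoton (pt : path) =<< randomHalfDir norm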

In contrast, hand-coded stochastic ray tracers, written in general-purpose languages, are much more complex and divorced from the physical processes they simulate, because they must interleave the physical simulation with the advanced Monte Carlo algorithms that ensure the aperture constraint is met.

Unfortunately, while many probabilistic programming languages support random real numbers, none are capable of running a probabilistic program like simulate-photon under constraints to carry out stochastic ray tracing. The reason is not lack of engineering or weak algorithms, but is theoretical at its core: they are all either defined or implemented using [density functions]. [...] Programs whose outputs are deterministic functions of random values and programs with recursion generally cannot denote density functions. The program simulate-photon exhibits both characteristics.
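
To see the first problem concretely (a standard counterexample, not one from the paper): if X ~ Uniform(0,1) and a program deterministically outputs the pair (X, X), the output distribution sits on the diagonal {(x, x) : x in [0,1]}, a set of two-dimensional Lebesgue measure zero, so it admits no density on the plane, even though it is a perfectly good measure-theoretic distribution.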

Measure-theoretic probability is a more powerful alternative to this naive probability theory based on probability mass and density functions. It not only subsumes naive probability theory, but is capable of defining any computable probability distribution, and many uncomputable distributions. But while even the earliest work [15] on probabilistic languages is measure-theoretic, the theory’s generality has historically made finding useful computational content difficult. We show that measure-theoretic probability can be made computational by

  1. Using measure-theoretic probability to define a compositional, denotational semantics that gives a valid denotation to every program.
  2. Deriving an abstract semantics, which allows computing answers to questions about probabilistic programs to arbitrary accuracy.
  3. Implementing the abstract semantics and efficiently solving problems.

In fact, our primary implementation, Dr. Bayes, produced Fig. 1b by running a probabilistic program like simulate-photon under an aperture constraint.
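
A toy illustration of what "computing preimages" buys you, under a big simplifying assumption of my own (a single monotone increasing function on an interval); the paper's abstract semantics covers whole programs with branches and recursion, which this sketch does not attempt:

  type Interval = (Double, Double)

  -- Preimage of the output interval [olo, ohi] under monotone f,
  -- restricted to the input interval [lo, hi], found by bisection.
  preimage :: (Double -> Double) -> Interval -> Interval -> Interval
  preimage f (lo, hi) (olo, ohi) = (solve olo, solve ohi)
    where
      solve y = go lo hi
        where
          go a b
            | b - a < 1e-9 = a
            | f m < y      = go m b
            | otherwise    = go a m
            where m = (a + b) / 2

  -- For X uniform on [0,1], the constraint X^2 in [0.25, 1] has
  -- preimage [0.5, 1], so the constrained event has probability 0.5.
  main :: IO ()
  main = print (preimage (^ 2) (0, 1) (0.25, 1))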

Cakes, Custard, and Category Theory

Eugenia Cheng's new popular science book is out, in the U.K. under the title Cakes, Custard and Category Theory: Easy recipes for understanding complex maths, and in the U.S. under the title How to Bake Pi: An Edible Exploration of the Mathematics of Mathematics:

Most people imagine maths is something like a slow cooker: very useful, but pretty limited in what it can do. Maths, though, isn't just a tool for solving a specific problem - and it's definitely not something to be afraid of. Whether you're a maths glutton or have forgotten how long division works (or never really knew in the first place), the chances are you've missed what really makes maths exciting. Calling on a baker's dozen of entertaining, puzzling examples and mathematically illuminating culinary analogies - including chocolate brownies, iterated Battenberg cakes, sandwich sandwiches, Yorkshire puddings and Möbius bagels - brilliant young academic and mathematical crusader Eugenia Cheng is here to tell us why we should all love maths.

From simple numeracy to category theory ('the mathematics of mathematics'), Cheng takes us through the joys of the mathematical world. Packed with recipes, puzzles to surprise and delight even the innumerate, Cakes, Custard & Category Theory will whet the appetite of maths whizzes and arithmophobes alike. (Not to mention aspiring cooks: did you know you can use that slow cooker to make clotted cream?) This is maths at its absolute tastiest.

Cheng, one of the Catsters, gives a guided tour through mathematical thinking and research activities, and through the core philosophy underlying category theory. This is the kind of book you can give to your grandma and grandpa so they can boast to their friends about what their grandchildren are doing (and bake you a nice dessert when you come and visit :) ). A pleasant weekend read.

Conservation laws for free!

In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:

Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds' theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields "free" theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether's theorem, a consequence of a system's invariance under time-shifting.

In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
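
For readers who have not seen the physics side, the energy example in the abstract is the standard one-line instance of Noether's theorem (textbook material, not specific to this paper): if the Lagrangian L(q, q̇) has no explicit time dependence, then along solutions of the Euler–Lagrange equations

  E \;=\; \dot q \, \frac{\partial L}{\partial \dot q} \;-\; L,
  \qquad
  \frac{dE}{dt} \;=\; \dot q \left( \frac{d}{dt} \frac{\partial L}{\partial \dot q}
    \;-\; \frac{\partial L}{\partial q} \right) \;=\; 0,

so the energy E is conserved; the computation uses nothing but the chain rule, \partial L / \partial t = 0, and the Euler–Lagrange equations. Atkey's point is that relational parametricity supplies the invariance hypothesis.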

Seemingly impossible programs

In case this one went under the radar, at POPL'12, Martín Escardó gave a tutorial on seemingly impossible functional programs:

Programming language semantics is typically applied to
prove compiler correctness and allow (manual or automatic) program
verification. Certain kinds of semantics can also be applied to
discover programs that one wouldn't have otherwise thought of. This is
the case, in particular, for semantics that incorporate topological
ingredients (limits, continuity, openness, compactness). For example,
it turns out that some function types (X -> Y) with X infinite (but
compact) do have decidable equality, contradicting perhaps popular
belief, but certainly not (higher-type) computability theory. More
generally, one can often check infinitely many cases in finite time.

I will show you such programs, run them fast in surprising instances,
and introduce the theory behind their derivation and working. In
particular, I will study a single (very high type) program that (i)
optimally plays sequential games of unbounded length, (ii) implements
the Tychonoff Theorem from topology (and builds finite-time search
functions for infinite sets), (iii) realizes the double-negation shift
from proof theory (and allows us to extract programs from classical
proofs that use the axiom of countable choice). There will be several
examples in the languages Haskell and Agda.

A shorter version (coded in Haskell) appears in Andrej Bauer's blog.
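
The heart of the construction is short enough to quote; this is essentially Escardó's simplest, unoptimised version, searching the Cantor space of infinite bit sequences:

  type Natural = Integer
  type Bit     = Int
  type Cantor  = Natural -> Bit

  -- Prepend a bit to an infinite sequence.
  (#) :: Bit -> Cantor -> Cantor
  x # a = \i -> if i == 0 then x else a (i - 1)

  -- find p returns a sequence satisfying p whenever one exists.
  find :: (Cantor -> Bool) -> Cantor
  find p = if forsome (\a -> p (0 # a))
           then 0 # find (\a -> p (0 # a))
           else 1 # find (\a -> p (1 # a))

  forsome, forevery :: (Cantor -> Bool) -> Bool
  forsome p = p (find p)
  forevery p = not (forsome (not . p))

  -- Decidable equality for total functions out of the Cantor space.
  equal :: Eq y => (Cantor -> y) -> (Cantor -> y) -> Bool
  equal f g = forevery (\a -> f a == g a)

For instance, equal (\a -> a 3 * a 5) (\a -> a 5 * a 3) returns True in finite time, despite quantifying over uncountably many sequences; continuity (each total predicate inspects only finitely many bits) is what makes the search terminate.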

sml-family.org

In his blog, Bob Harper, in a joint effort with Dave MacQueen and Lars Bergstrom, announces the launch of sml-family.org:

The Standard ML Family project provides a home for online versions of various formal definitions of Standard ML, including the "Definition of Standard ML, Revised" (Standard ML 97). The site also supports coordination between different implementations of the Standard ML (SML) programming language by maintaining common resources such as the documentation for the Standard ML Basis Library and standard test suites. The goal is to increase compatibility and resource sharing between Standard ML implementations.

The site includes a history section devoted to the history of ML, and of Standard ML in particular. This section will contain a collection of original source documents relating to the design of the language.

Inferring algebraic effects

Logical Methods in Computer Science has just published Matija Pretnar's latest take on algebraic effects and handlers:

We present a complete polymorphic effect inference algorithm for an ML-style language with handlers of not only exceptions, but of any other algebraic effect such as input & output, mutable references and many others. Our main aim is to offer the programmer a useful insight into the effectful behaviour of programs. Handlers help here by cutting down possible effects and the resulting lengthy output that often plagues precise effect systems. Additionally, we present a set of methods that further simplify the displayed types, some even by deliberately hiding inferred information from the programmer.

Pretnar and Bauer's Eff has made previous appearances here on LtU. Apart from the newfangled polymorphic effect system, this paper also contains an Eff tutorial.
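
For those who have not met handlers, here is a minimal free-monad sketch in Haskell. This is not Eff and says nothing about the paper's inference algorithm; it only illustrates the slogan that handling an effect removes it, leaving a pure result:

  {-# LANGUAGE DeriveFunctor #-}

  data Free f a = Pure a | Op (f (Free f a)) deriving Functor

  instance Functor f => Applicative (Free f) where
    pure         = Pure
    Pure g <*> x = fmap g x
    Op m   <*> x = Op (fmap (<*> x) m)

  instance Functor f => Monad (Free f) where
    Pure a >>= g = g a
    Op m   >>= g = Op (fmap (>>= g) m)

  -- One algebraic effect: nondeterministic binary choice.
  newtype Choice k = Choose (Bool -> k) deriving Functor

  choose :: Free Choice Bool
  choose = Op (Choose Pure)

  -- A handler: after it runs, no Choice operations remain.
  allResults :: Free Choice a -> [a]
  allResults (Pure a)        = [a]
  allResults (Op (Choose k)) = allResults (k True) ++ allResults (k False)

  -- allResults (do x <- choose; y <- choose; pure (x && y))
  --   == [True, False, False, False]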

Luca Cardelli Festschrift

Earlier this week Microsoft Research Cambridge organised a Festschrift for Luca Cardelli. The preface from the book:

Luca Cardelli has made exceptional contributions to the world of programming
languages and beyond. Throughout his career, he has re-invented himself every
decade or so, while continuing to make true innovations. His achievements span
many areas: software; language design, including experimental languages;
programming language foundations; and the interaction of programming languages
and biology. These achievements form the basis of his lasting scientific leadership
and his wide impact.
...
Luca is always asking "what is new", and is always looking to
the future. Therefore, we have asked authors to produce short pieces that would
indicate where they are today and where they are going. Some of the resulting
pieces are short scientific papers, or abridged versions of longer papers; others are
less technical, with thoughts on the past and ideas for the future. We hope that
they will all interest Luca.

Hopefully the videos will be posted soon.

Dependently-Typed Metaprogramming (in Agda)

Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory:

Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid dependently-typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types.

The lecture notes, code, and video captures are available online.

As with his previous course, the notes contain many(!) mind-expanding exploratory exercises, some of which are quite challenging.
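
To give a taste of the universe idea in more familiar clothing, here is a minimal sketch in Haskell with GADTs (the course does this properly in Agda, with honest dependent types): a datatype of codes for types, over which generic operations are written once and work at every coded type.

  {-# LANGUAGE GADTs #-}

  -- Codes for a tiny universe of types.
  data U a where
    BOOL :: U Bool
    NAT  :: U Int
    PAIR :: U a -> U b -> U (a, b)

  -- One generic equality for the whole universe.
  eqU :: U a -> a -> a -> Bool
  eqU BOOL       x      y      = x == y
  eqU NAT        x      y      = x == y
  eqU (PAIR u v) (a, b) (c, d) = eqU u a c && eqU v b d

  -- e.g. eqU (PAIR BOOL NAT) (True, 3) (True, 3) == True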

Milner Symposium 2012

The Milner Symposium 2012 was held in Edinburgh this April in memory of the late Robin Milner.

The Milner Symposium is a celebration of the life and work of one of the world's greatest computer scientists, Robin Milner. The symposium will feature leading researchers whose work is inspired by Robin Milner.

The programme consisted of academic talks by colleagues and past students. The talks and slides are available online.

I particularly liked the interleaving of the personal and human narrative underlying the scientific journey. A fine example is Joachim Parrow's talk on the origins of the pi calculus. Of particular interest to LtU members is the panel on the future of functional programming languages, consisting of Phil Wadler, Xavier Leroy, David MacQueen, Martin Odersky, Simon Peyton Jones, and Don Syme.

Mechanized λJS

Mechanized λJS
The Brown PLT Blog, 2012-06-04

In an earlier post, we introduced λJS, our operational semantics for JavaScript. Unlike many other operational semantics, λJS is no toy, but strives to correctly model JavaScript's messy details. To validate these claims, we test λJS with randomly generated tests and with portions of the Mozilla JavaScript test suite.

Testing is not enough. Despite our work, other researchers found a missing case in λJS. Today, we're introducing Mechanized λJS, which comes with a machine-checked proof of correctness, using the Coq proof assistant.

More work on mechanizing the actual, implemented semantics of a real language, rather than a toy.
