Lambda the Ultimate - Lambda Calculus
http://lambda-the-ultimate.org/taxonomy/term/20/0
LC and variations.

Implementing Algebraic Effects in C
http://lambda-the-ultimate.org/node/5457
<p ><a href="https://www.microsoft.com/en-us/research/publication/implementing-algebraic-effects-c/">Implementing Algebraic Effects in C</a> by Daan Leijen:</p>
<blockquote ><p >We describe a full implementation of algebraic effects and handlers as a library in standard and portable C99, where effect operations can be used just like regular C functions. We use a formal operational semantics to guide the C implementation at every step where an evaluation context corresponds directly to a particular C execution context. Finally we show a novel extension to the formal semantics to describe optimized tail resumptions and prove that the extension is sound. This gives two orders of magnitude improvement to the performance of tail resumptive operations (up to about 150 million operations per second on a Core i7@2.6GHz)</p></blockquote>
<p >Another great paper by Daan Leijen, this time on a C library with immediate practical applications at Microsoft. The applicability is much wider though, since it's an ordinary C library for defining and using arbitrary algebraic effects. It looks pretty usable and is faster and more general than most of the C coroutine libraries that already exist.</p>
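For flavor, here is a toy Python sketch (not the paper's C API; names like <code >perform</code> and <code >Handle</code> are made up) of the tail-resumptive special case the paper optimizes: a handler that always resumes exactly once, in tail position, with a value behaves like a dynamically scoped function, so no execution context needs to be captured at all.

```python
# Hypothetical sketch of tail-resumptive algebraic effect operations.
# When every handler simply returns the resumption value, `perform`
# degenerates into a dynamically scoped function call -- the fast path
# that gives the paper its two-orders-of-magnitude speedup.

_handlers = []  # stack of {operation_name: handler_function} frames

class Handle:
    """Install a frame of handlers for the dynamic extent of a block."""
    def __init__(self, **ops):
        self.ops = ops
    def __enter__(self):
        _handlers.append(self.ops)
    def __exit__(self, *exc):
        _handlers.pop()

def perform(op, arg):
    """Invoke the innermost handler for `op`; its return value is what
    the operation resumes with (tail-resumptive case only)."""
    for frame in reversed(_handlers):
        if op in frame:
            return frame[op](arg)
    raise RuntimeError("unhandled effect operation: " + op)

def program():
    # Effect operations are used "just like regular functions".
    x = perform("ask", None)
    perform("emit", x + 1)
    return x * 2

log = []
with Handle(ask=lambda _: 20, emit=log.append):
    result = program()
```

The general case (a handler that discards or re-invokes its resumption) is exactly what this sketch cannot express, and is where the paper's C implementation has to capture and restore execution contexts.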
<p >It's a nice addition to your toolbox for creating language runtimes in C, particularly since it provides a unified, structured way of creating and handling a variety of sophisticated language behaviours, like async/await, in ordinary C with good performance. There has been considerable discussion here of C and low-level languages with green threads, coroutines and so on, so hopefully others will find this useful!</p>
Effects, Implementation, Lambda Calculus, Semantics
Thu, 27 Jul 2017 13:50:17 +0000

Imperative Functional Programs that Explain their Work
http://lambda-the-ultimate.org/node/5436
<p >
<a href="https://arxiv.org/pdf/1705.07678">Imperative Functional Programs that Explain their Work</a>
<br >
Wilmer Ricciotti, Jan Stolarek, Roly Perera, James Cheney
<br >
submitted <a href="https://arxiv.org/abs/1705.07678">on arXiv</a> on 22 May 2017
<br >
<blockquote >
Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al., where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach and a proof-of-concept implementation and experimental evaluation.
</blockquote>
</p>
<p >
Dynamic slicing answers the following question: if I only care about this specific part of the trace of my program execution, which parts of the source program do I need to look at? For example, if the output of the program is a pair, can you show me the parts of the source that impacted the computation of the first component? If a part of the code is not involved in the trace, or not in the part of the trace that you care about, it is removed from the partial program returned by slicing.
</p>
<p >
What I like about this work is that there is a very nice algebraic characterization of what slicing is (the Galois connection), that guides you in how you implement your slicing algorithm, and also serves as a specification to convince yourself that it is correct -- and "optimal": it actually removes all the program parts that are irrelevant. This characterization already existed in previous work (<a href="https://www.cs.bham.ac.uk/~pbl/papers/functionalexplain.pdf">Functional Programs that Explain Their Work</a>, Roly Perera, Umut Acar, James Cheney, Paul Blain Levy, 2012), but it was done in a purely functional setting. It wasn't clear (to me) whether the nice formulation was restricted to this nice language, or whether the technique itself would scale to a less structured language. This paper extends it to effectful ML (mutable references and exceptions), and there it is much easier to see that it remains elegant and yet can scale to typical effectful programming languages.
</p>
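Here is a toy illustration (in Python, for one fixed first-order program, far from the paper's higher-order setting) of slicing as a Galois connection between lattices of partial inputs and partial outputs:

```python
# A miniature of the slicing-as-Galois-connection idea, for the single
# program  (a, b) |-> (a + 1, b * 2)  over pairs with holes. HOLE means
# "don't know / don't care"; x is below y when x has holes where y has
# information.

HOLE = "_"

def fwd(inp):
    """Forward direction: evaluate on a partial input, propagating holes."""
    a, b = inp
    return (HOLE if a is HOLE else a + 1,
            HOLE if b is HOLE else b * 2)

def bwd(inp, demand):
    """Backward direction (the slicer): the least partial version of `inp`
    whose forward image satisfies the demanded partial output."""
    a, b = inp
    da, db = demand
    return (HOLE if da is HOLE else a,
            HOLE if db is HOLE else b)

def leq(x, y):
    """x below y in the lattice: x is y with possibly more holes."""
    return all(xi is HOLE or xi == yi for xi, yi in zip(x, y))

inp = (3, 4)
demand = (4, HOLE)          # we only care about the first output component
sliced = bwd(inp, demand)   # -> (3, HOLE): the second input is irrelevant
```

The Galois-connection property is that <code >fwd(bwd(inp, demand))</code> satisfies the demand, and that <code >bwd</code> returns the least such partial input; that adjunction is the specification against which a real slicer is proved correct and optimal.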
<p >
The key to the algebraic characterization is to recognize two order structures, one on source program fragments, and the other on traces. Program fragments are programs with holes, and a fragment is smaller than another if it has more holes. You can think of a hole as "I don't know -- or I don't care -- what the program does in this part", so the order is "being more or less defined". Traces are likewise partial traces with holes, where a hole means "I don't know -- or I don't care -- what happens in this part of the trace". The double "don't know" and "don't care" nature of the ordering is essential: the Galois connection specifies a slicer (that goes from the part of a trace you care about to the parts of a program you should care about) by relating it to an evaluator (that goes from the part of the program you know about to the parts of the trace you can know about). This specification is simple because we are all familiar with what evaluators are.
</p>
Lambda Calculus
Thu, 25 May 2017 17:58:53 +0000

Type Systems as Macros
http://lambda-the-ultimate.org/node/5426
<p ><a href="http://www.ccs.neu.edu/home/stchang/pubs/ckg-popl2017.pdf">Type Systems as Macros</a>, by Stephen Chang, Alex Knauth, Ben Greenman:</p>
<blockquote ><p >We present TURNSTILE, a metalanguage for creating typed embedded languages. To implement the type system, programmers write type checking rules resembling traditional judgment syntax. To implement the semantics, they incorporate elaborations into these rules. TURNSTILE critically depends on the idea of linguistic reuse. It exploits a <em >macro system</em> in a novel way to simultaneously type check and rewrite a surface program into a target language. Reusing a macro system also yields modular implementations whose rules may be mixed and matched to create other languages. Combined with typical compiler and runtime reuse, TURNSTILE produces performant typed embedded languages with little effort.</p></blockquote>
<p >This looks pretty awesome considering it's not limited to simple typed languages, but extends all the way to System F and F-omega! Even better, they can reuse previous type systems to define new ones, thereby reducing the effort to implement more expressive type systems. All code and further details <a href="http://www.ccs.neu.edu/home/stchang/popl2017/">available here</a>, and <a href="http://blog.racket-lang.org/2017/04/type-tailoring.html">here's a blog post</a> where Ben Greenman further discusses the related "type tailoring", and of course, these are both directly related to <a href="http://lambda-the-ultimate.org/node/1339">Active Libraries</a>.</p>
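To give a rough feel for the idea outside Racket, here is a hypothetical mini-expander in Python where each "macro" type checks and elaborates in a single pass (TURNSTILE itself reuses Racket's macro expander; none of these names come from the paper):

```python
# Each surface form is handled by a "macro" that simultaneously type
# checks it and rewrites it into target code -- here, Python source text.
# Adding a language feature means adding one entry to MACROS.

def expand(expr, env):
    """Return (type, target_code) for a surface expression, or raise."""
    if isinstance(expr, int):
        return "Int", str(expr)
    if isinstance(expr, str):                      # variable reference
        return env[expr], expr
    head, *args = expr                             # macro invocation
    return MACROS[head](args, env)

def plus_macro(args, env):
    (t1, c1), (t2, c2) = (expand(a, env) for a in args)
    if t1 != "Int" or t2 != "Int":
        raise TypeError("+ expects Ints")
    return "Int", "({} + {})".format(c1, c2)

def lam_macro(args, env):
    (x, t), body = args
    tb, cb = expand(body, {**env, x: t})           # check body under x : t
    return ("->", t, tb), "(lambda {}: {})".format(x, cb)

MACROS = {"+": plus_macro, "lam": lam_macro}

ty, code = expand(("lam", ("x", "Int"), ("+", "x", 1)), {})
```

Mixing and matching rules is then literal dictionary composition over <code >MACROS</code>, which is the modularity claim in miniature.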
<p >Taken to its extreme: why not have an assembler with a powerful macro system of this sort as your host language, with every high-level language built on top of it? I'm not sure this approach would extend that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F.</p>
<p >This work may also conceptually dovetail with <a href="http://lambda-the-ultimate.org/node/5424">another thread discussing fexprs and compilation</a>.</p>
DSL, Functional, Lambda Calculus, Meta-Programming, Type Theory
Wed, 19 Apr 2017 23:38:56 +0000

The complexity of abstract machines
http://lambda-the-ultimate.org/node/5406
<p >I previously wrote about a line of research by Guy Blelloch on the <a href="http://lambda-the-ultimate.org/node/5021">Cost semantics for functional languages</a>, which lets us make precise claims about the complexity of functional programs without leaving their usual and comfortable programming models (beta-reduction).</p>
<p >While the complexity behavior of weak reduction strategies, such as call-by-value and call-by-name, is by now relatively well-understood, the lambda-calculus has a much richer range of reduction strategies, in particular those that can reduce under lambda-abstractions, whose complexity behavior is noticeably more subtle and was, until recently, not very well understood. (This has become a practical concern since the rise in usage of proof assistants that must implement reduction under binders and are very concerned about the complexity of their reduction strategy, which consumes a lot of time during type/proof-checking.)</p>
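For the weak-head call-by-name case that serves as the paper's case study, the canonical implementation is the Krivine machine; here is a minimal Python sketch, instrumented to count transitions, since the ratio of machine steps to beta steps is precisely the "overhead" these papers analyze:

```python
# A minimal Krivine abstract machine for weak-head call-by-name
# reduction over de Bruijn-indexed terms:
#   ("var", n) | ("lam", body) | ("app", f, a)
# Environments are lists of closures (term, env); the stack holds
# pending argument closures.

def krivine(term):
    """Reduce to weak-head normal form; return ((term, env), steps)."""
    env, stack, steps = [], [], 0
    while True:
        head = term[0]
        if head == "app":                 # push the argument as a closure
            stack.append((term[2], env))
            term = term[1]
        elif head == "lam" and stack:     # beta: pop argument into env
            env = [stack.pop()] + env
            term = term[1]
        elif head == "var":               # variable: jump to its closure
            term, env = env[term[1]]
        else:                             # lam with empty stack: done
            return (term, env), steps
        steps += 1

# (\x. x) (\y. y)  reduces to  \y. y  in three machine transitions
identity = ("lam", ("var", 0))
(whnf, _env), n = krivine(("app", identity, identity))
```

Only the beta transition corresponds to a step of the implemented strategy; the push and lookup transitions are pure overhead, and bounding their number (and the cost of each, given environment sharing) is exactly the game the survey describes.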
<p >Beniamino Accatoli, who has been co-authoring a lot of work in that area, recently published on arXiv a new paper that has survey quality, and is a good introduction to this area of work and other pointers from the literature. </p>
<blockquote ><p ><a href="https://arxiv.org/abs/1701.00649">The Complexity of Abstract Machines</a></p>
<p >Beniamino Accatoli, 2017</p>
<p >The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet -- quite mysteriously -- the efficiency of these machines with respect to the strategy that they implement has almost never been studied.</p>
<p >
This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here never appeared in print before. </p></blockquote>
Lambda Calculus
Thu, 12 Jan 2017 01:09:41 +0000

Philip Wadler: Category Theory for the Working Hacker
http://lambda-the-ultimate.org/node/5366
<p ><a href="https://www.infoq.com/presentations/category-theory-propositions-principle">Nothing you don't already know</a>, if you are into this sort of thing (and many if not most LtU-ers are), but a quick way to get the basic idea if you are not. Wadler has papers that explain Curry-Howard better, and the category theory content here is very basic -- but it's an easy listen that will give you the fundamental points if you still wonder what this category thing is all about. </p>
<p >To make this a bit more fun for those already in the know: what is totally missing from the talk (understandable given time constraints) is why this should interest the "working hacker". So how about pointing out a few cool uses/ideas that discerning hackers will appreciate? Go for it!</p>
Category Theory, Lambda Calculus, Semantics
Sun, 07 Aug 2016 17:26:26 +0000

Fully Abstract Compilation via Universal Embedding
http://lambda-the-ultimate.org/node/5364
<p ><a href="https://www.williamjbowman.com/resources/fabcc-paper.pdf">Fully Abstract Compilation via Universal Embedding</a> by Max S. New, William J. Bowman, and Amal Ahmed:</p>
<blockquote ><p >A <em >fully abstract</em> compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context.</p>
<p >We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation—specifically, closure conversion of simply typed λ-calculus with recursive types—uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a deep embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.</p></blockquote>
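A rough Python caricature of the boundary idea (hypothetical names; the paper works with a typed closure-converted target and a dynamic type inside the source, not this first-order toy): values cross between the typed side and the dynamic embedding only through checked injections and projections, so an attacker on the dynamic side cannot manufacture observations the typed side would not permit.

```python
# A single "dynamic type" Dyn of tagged values, plus boundaries that
# mediate between typed and dynamic code. Higher-order values are
# wrapped so checks happen at every call across the boundary.

class Dyn:
    """Universal dynamic type: a tag plus a payload."""
    def __init__(self, tag, payload):
        self.tag, self.payload = tag, payload

def inject(tag, v):
    """typed -> dynamic embedding."""
    return Dyn(tag, v)

def project(tag, d):
    """dynamic -> typed boundary: check the tag or fail."""
    if not isinstance(d, Dyn) or d.tag != tag:
        raise TypeError("boundary violation: expected " + tag)
    return d.payload

def inject_fun(dom, cod, f):
    """Embed a typed function: its wrapper projects arguments and
    injects results, so dynamic callers stay well-behaved."""
    return Dyn("fun", lambda d: inject(cod, f(project(dom, d))))

def project_fun(dom, cod, d):
    """Recover a typed function from a dynamic one, symmetrically."""
    g = project("fun", d)
    return lambda x: project(cod, g(inject(dom, x)))

inc_dyn = inject_fun("int", "int", lambda n: n + 1)   # crosses out...
inc = project_fun("int", "int", inc_dyn)              # ...and back
```

The full-abstraction proof is then about showing that any dynamic-side context, interacting only through such boundaries, can be back-translated to a typed source context making the same observations.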
<p >Potentially a promising step forward to secure multilanguage runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures <a href="http://lambda-the-ultimate.org/node/3830">here</a> and <a href="http://lambda-the-ultimate.org/node/1588">here</a>. The paper also provides a comprehensive review of associated literature, like various means of protection, back translations, embeddings, etc.</p>
Lambda Calculus, Semantics, Theory, Type Theory
Wed, 27 Jul 2016 15:57:02 +0000

Progress on Gradual Typing
http://lambda-the-ultimate.org/node/5292
<p >Among many interesting works,
the <a href="https://github.com/gasche/popl2016-papers">POPL 2016
papers</a> have a bunch of nice articles
on <a href="https://en.wikipedia.org/wiki/Gradual_typing">Gradual
Typing</a>.</p>
<p ><strong >The Gradualizer: a methodology and algorithm for generating gradual type systems</strong></p>
<p >
<a href="https://dl.dropboxusercontent.com/u/10275252/gradualizer-popl16.pdf">
The Gradualizer: a methodology and algorithm for generating gradual type systems
</a><br >
by Matteo Cimini, Jeremy Siek<br >
2016
</p>
<blockquote >
<p >Many languages are beginning to integrate dynamic and static
typing. Siek and Taha offered gradual typing as an approach to this
integration that provides the benefits of a coherent and full-span
migration between the two disciplines. However, the literature lacks
a general methodology for designing gradually typed languages. Our
first contribution is to provide such a methodology insofar as the
static aspects of gradual typing are concerned: deriving the gradual
type system and the compilation to the cast calculus.</p>
<p >Based on this methodology, we present the Gradualizer, an algorithm
that generates a gradual type system from a normal type system
(expressed as a logic program) and generates a compiler to the cast
calculus. Our algorithm handles a large class of type systems and
generates systems that are correct with respect to the formal criteria
of gradual typing. We also report on an implementation of the
Gradualizer that takes type systems expressed in lambda-prolog and
outputs their gradually typed version (and compiler to the
cast calculus) in lambda-prolog.</p>
</blockquote>
<p >One can think of the Gradualizer as a kind of meta-programming
algorithm that takes a type system as input and returns a gradual
version of this type system as output. I find it interesting that
these type systems are encoded
as <a href="https://en.wikipedia.org/wiki/%CE%9BProlog">lambda-prolog
programs</a> (a notable use-case for functional
logic programming). This is a very nice way to bridge the gap between
describing a transformation that is "in principle" mechanizable and
a <a href="https://github.com/mcimini/Gradualizer">running
implementation</a>.</p>
<p >An interesting phenomenon that appears once you want to implement
these ideas in practice: doing so forced the authors to define
precisely many intuitions everyone has when reading the description of
a type system as a system of inference rules. These intuitions are,
broadly, about the relation between the static and the dynamic
semantics of a system, the flow of typing information, and the flow of
values; two occurrences of the same type in a typing rule may play very
different roles, some of which are discussed in this article.</p>
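As a hand-worked miniature of what the Gradualizer mechanizes, here is the application rule gradualized in Python: the static rule's equality test between the argument type and the function's domain becomes a <em >consistency</em> test, with the unknown type ? consistent with everything (toy types only; the paper's input is a lambda-prolog encoding of the whole system, not one rule):

```python
# Toy types: "int", "bool", ("->", t1, t2), and the unknown type "?".

DYN = "?"

def consistent(t1, t2):
    """The consistency relation ~ : like equality, except ? matches
    anything, and arrows are compared componentwise."""
    if t1 == DYN or t2 == DYN:
        return True
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        return consistent(t1[1], t2[1]) and consistent(t1[2], t2[2])
    return t1 == t2

def app_type(fun_t, arg_t):
    """Gradualized typing rule for application. The static rule demands
    arg_t == dom exactly; the gradual rule demands arg_t ~ dom, and
    treats ? in function position as ? -> ?."""
    if fun_t == DYN:
        dom, cod = DYN, DYN
    elif isinstance(fun_t, tuple) and fun_t[0] == "->":
        dom, cod = fun_t[1], fun_t[2]
    else:
        raise TypeError("not a function type")
    if not consistent(arg_t, dom):
        raise TypeError("argument inconsistent with domain")
    return cod
```

The compilation to the cast calculus then inserts a run-time cast precisely at each place where this rule used consistency instead of equality.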
<p ><br ></p>
<p ><strong >Is Sound Gradual Typing Dead?</strong></p>
<p >
<a href="http://www.ccs.neu.edu/racket/pubs/popl16-tfgnvf.pdf">
Is Sound Gradual Typing Dead?
</a><br >
by Asumu Takikawa, Daniel Feltey, Ben Greenman, Max New, Jan Vitek, Matthias Felleisen<br >
2016
</p>
<blockquote >
<p >
Programmers have come to embrace dynamically typed languages for
prototyping and delivering large and complex systems. When it comes to
maintaining and evolving these systems, the lack of explicit static
typing becomes a bottleneck. In response, researchers have explored
the idea of gradually typed programming languages which allow the
post-hoc addition of type annotations to software written in one of
these “untyped” languages. Some of these new hybrid languages insert
run-time checks at the boundary between typed and untyped code to
establish type soundness for the overall system. With sound gradual
typing programmers can rely on the language implementation to provide
meaningful error messages when “untyped” code misbehaves.
</p>
<p >
While most research on sound gradual typing has remained theoretical,
the few emerging implementations incur performance overheads due to
these checks. Indeed, none of the publications on this topic come with
a comprehensive performance evaluation; a few report disastrous
numbers on toy benchmarks. In response, this paper proposes
a methodology for evaluating the performance of gradually typed
programming languages. The key is to explore the performance impact of
adding type annotations to different parts of a software system. The
methodology takes the idea of a gradual conversion from untyped
to typed seriously and calls for measuring the performance of all
possible conversions of a given untyped benchmark. Finally the paper
validates the proposed methodology using Typed Racket, a mature
implementation of sound gradual typing, and a suite of real-world
programs of various sizes and complexities. Based on the results
obtained in this study, the paper concludes that, given the state of
current implementation technologies, sound gradual typing is
dead. Conversely, it raises the question of how implementations could
reduce the overheads associated with ensuring soundness and how tools
could be used to steer programmers clear from pathological cases.
</p>
</blockquote>
<p >
In a fully dynamic system, typing checks are often superficial
(only the existence of a particular field is tested) and done lazily
(the check is made when the field is accessed). Gradual typing changes
this, as typing assumptions can be checked before the value is
used, and range over parts of the program that are not exercised in
all execution branches. This has the potentially counter-intuitive
consequence that the overhead of runtime checks may be noticeably larger
than for fully-dynamic systems. This paper presents a methodology to
evaluate the "annotation space" of a Typed Racket program, studying
how the possible choices of which parts to annotate affect overall
performance.
</p>
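A toy Python rendition of where the cost comes from (illustrative only, not Typed Racket's actual contract machinery): each crossing of a typed/untyped boundary wraps a higher-order value, and every accumulated layer re-checks on every call.

```python
# A boundary wrapper for the type Int -> Int around an untyped function.
# A fully dynamic language would only fail lazily at the offending use
# site; a sound gradual boundary checks eagerly, on both argument and
# result, at every call -- and wrappers pile up as a value re-crosses.

calls = {"checks": 0}

def wrap_int_to_int(f):
    def wrapped(x):
        calls["checks"] += 1
        if not isinstance(x, int):
            raise TypeError("blame: argument is not an Int")
        r = f(x)
        if not isinstance(r, int):
            raise TypeError("blame: result is not an Int")
        return r
    return wrapped

untyped_inc = lambda x: x + 1

f = untyped_inc
for _ in range(3):        # the function crosses a boundary three times
    f = wrap_int_to_int(f)

result = f(0)             # one call pays one check per wrapper layer
```

The paper's methodology measures exactly this kind of effect over all 2^n typed/untyped configurations of each benchmark's n modules, which is why partially annotated configurations can be far slower than either extreme.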
<p >
Many would find this article surprisingly grounded in reality for
a POPL paper. It puts the spotlight on a question that is too rarely
discussed, and could be presented as a strong illustration of why it
matters to be serious about implementing our research.
</p>
<p ><br ></p>
<p ><strong >Abstracting Gradual Typing</strong></p>
<p >
<a href="http://pleiad.dcc.uchile.cl/papers/2016/garciaAl-popl2016.pdf">
Abstracting Gradual Typing
</a><br >
by Ronald Garcia, Alison M. Clark, Éric Tanter<br >
2016
</p>
<blockquote >
<p >Language researchers and designers have extended a wide variety of
type systems to support gradual typing, which enables languages to
seamlessly combine dynamic and static checking. These efforts
consistently demonstrate that designing a satisfactory gradual
counterpart to a static type system is challenging, and this challenge
only increases with the sophistication of the type system. Gradual
type system designers need more formal tools to help them
conceptualize, structure, and evaluate their designs.</p>
<p >In this paper, we propose a new formal foundation for gradual
typing, drawing on principles from abstract interpretation to give
gradual types a semantics in terms of pre-existing static
types. Abstracting Gradual Typing (AGT for short) yields a formal
account of consistency—one of the cornerstones of the gradual typing
approach—that subsumes existing notions of consistency, which were
developed through intuition and ad hoc reasoning.</p>
<p >Given a syntax-directed static typing judgment, the AGT approach
induces a corresponding gradual typing judgment. Then the
subject-reduction proof for the underlying static discipline induces
a dynamic semantics for gradual programs defined over source-language
typing derivations. The AGT approach does not recourse to an
externally justified cast calculus: instead, run-time checks naturally
arise by deducing evidence for consistent judgments during
proof-reduction.</p>
<p >To illustrate our approach, we develop novel gradually-typed
counterparts for two languages: one with record subtyping and one with
information-flow security types. Gradual languages designed with the
AGT approach satisfy, by construction, the refined criteria for
gradual typing set forth by Siek and colleagues.
</p>
</blockquote>
<p >At first sight this description seems to overlap with the
Gradualizer work cited above, but in fact the two approaches are
highly complementary. The Abstract Gradual Typing effort seems
mechanizable, but it is far from being implementable in practice as
done in the Gradualizer work. It remains a translation to be done on
paper by a skilled expert, although, as is standard in abstract
interpretation work, many aspects are deeply computational --
computing the best abstractions. On the other hand, it is extremely
powerful to guide system design, as it provides not only a static
semantics for a gradual system, but also a model dynamic
semantics.</p>
<p >The central idea of the paper is to think of a missing type
annotation not as "a special Dyn type that can contain anything" but
"a specific static type, but I don't know which one it is". A problem
is then to be understood as a family of potential programs, one for
each possible static choice that could have been put there. Not all
choices are consistent (type soundness imposes constraints on
different missing annotations), so we can study the space of possible
interpretations -- using only the original, non-gradually-typed system
to make those deductions.</p>
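That reading can be made concrete in a few lines of Python: give each gradual type a concretization γ to a set of static types it could stand for (depth-bounded here, so the sets stay finite -- a toy restriction, not part of AGT), and consistency falls out as overlap of concretizations rather than being postulated.

```python
# AGT's move in miniature: gradual types mean *sets* of static types.

BASE = ["int", "bool"]

def gamma(t, depth=2):
    """Concretization: the static types the gradual type t could be.
    The unknown type "?" means "some static type, I don't know which"."""
    if t == "?":
        out = set(BASE)
        if depth > 0:
            sub = gamma("?", depth - 1)
            out |= {("->", a, b) for a in sub for b in sub}
        return out
    if isinstance(t, tuple):                       # ("->", t1, t2)
        return {("->", a, b)
                for a in gamma(t[1], depth)
                for b in gamma(t[2], depth)}
    return {t}

def consistent(t1, t2):
    """Derived, not invented: t1 ~ t2 iff some single static type is
    a possible meaning of both."""
    return bool(gamma(t1) & gamma(t2))
```

A static type error is then exactly an empty set of consistent interpretations, and a nonempty one is the "evidence" that the dynamic semantics transports along values at run time.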
<p >An obvious consequence is that a static type error occurs exactly
when we can prove that there is no possible consistent typing. A much
less obvious contribution is that, when there is a consistent set of
types, we can consider this set as "evidence" that the program may be
correct, and transport evidence along values while running the
program. This gives a runtime semantics for the gradual system that
automatically does what it should -- but it, of course, would fare
terribly in the performance harness described above.</p>
<p ><br ></p>
<p ><strong >Some context</strong></p>
<p >The Abstract Gradual Typing work feels like a real breakthrough,
and it is interesting to idly wonder about which previous works in
particular enabled this advance. I would make two guesses.</p>
<p >First, there was a very nice conceptualization work in 2015,
drawing general principles from existing gradual typing systems, and
highlighting in particular a specific difficulty in designing dynamic
semantics for gradual systems (removing annotations must not make
programs fail more).</p>
<p >
<a href="http://homes.soic.indiana.edu/mvitouse/papers/snapl15.pdf">
Refined Criteria for Gradual Typing
</a><br >
by Jeremy Siek, Michael Vitousek, Matteo Cimini, and John Tang Boyland<br >
2015
</p>
<blockquote >
Siek and Taha [2006] coined the term gradual typing to describe
a theory for integrating static and dynamic typing within a single
language that 1) puts the programmer in control of which regions of
code are statically or dynamically typed and 2) enables the gradual
evolution of code between the two typing disciplines. Since 2006, the
term gradual typing has become quite popular but its meaning has
become diluted to encompass anything related to the integration of
static and dynamic typing. This dilution is partly the fault of the
original paper, which provided an incomplete formal characterization
of what it means to be gradually typed. In this paper we draw a crisp
line in the sand that includes a new formal property, named the
gradual guarantee, that relates the behavior of programs that differ
only with respect to their type annotations. We argue that the gradual
guarantee provides important guidance for designers of gradually typed
languages. We survey the gradual typing literature, critiquing designs
in light of the gradual guarantee. We also report on a mechanized
proof that the gradual guarantee holds for the Gradually Typed Lambda
Calculus.
</blockquote>
<p ></p>
<p >Second, the marriage of gradual typing and abstract interpretation
was already consummated in previous work (2014), studying the gradual
classification of <em >effects</em> rather than <em >types</em>.</p>
<p >
<a href="http://pleiad.dcc.uchile.cl/papers/2014/banadosAl-icfp2014.pdf">
A Theory of Gradual Effect Systems
</a><br >
by Felipe Bañados Schwerter, Ronald Garcia, Éric Tanter<br >
2014
</p>
<blockquote >
Effect systems have the potential to help software developers, but
their practical adoption has been very limited. We conjecture that
this limited adoption is due in part to the difficulty of
transitioning from a system where effects are implicit and
unrestricted to a system with a static effect discipline, which must
settle for conservative checking in order to be decidable. To address
this hindrance, we develop a theory of gradual effect checking, which
makes it possible to incrementally annotate and statically check
effects, while still rejecting statically inconsistent programs. We
extend the generic type-and-effect framework of Marino and Millstein
with a notion of unknown effects, which turns out to be significantly
more subtle than unknown types in traditional gradual typing. We
appeal to abstract interpretation to develop and validate the concepts
of gradual effect checking. We also demonstrate how an effect system
formulated in Marino and Millstein’s framework can be automatically
extended to support gradual checking.
</blockquote>
<p >Difficulty has its rewards: gradual effects are
more difficult than gradual simply-typed systems, so you get strong
and powerful ideas when you study them. The choice of working on
effect systems is also useful in practice, as nicely said by Philip
Wadler in the conclusion of his 2015
article <a href="http://homepages.inf.ed.ac.uk/wadler/papers/complement/complement.pdf">A
Complement to Blame</a>:</p>
<blockquote >
I [Philip Wadler] always assumed gradual types were to help those poor
schmucks using untyped languages to migrate to typed languages. I now
realize that I am one of the poor schmucks. My recent research
involves session types, a linear type system that declares protocols
for sending messages along channels. Sending messages along channels
is an example of an effect. Haskell uses monads to track effects
(Wadler, 1992), and a few experimental languages such as Links
(Cooper et al., 2007), Eff (Bauer and Pretnar, 2014), and Koka
(Leijen, 2014) support effect typing. But, by and large, every
programming language is untyped when it comes to effects. To
encourage migration from legacy code to code with effect types, such
as session types, some form of gradual typing may be essential.
</blockquote>
Lambda Calculus
Sun, 20 Dec 2015 18:45:23 +0000

Self-Representation in Girard’s System U
http://lambda-the-ultimate.org/node/5176
<p ><a href="http://compilers.cs.ucla.edu/popl15/popl15-full.pdf">Self-Representation in Girard’s System U</a>, by Matt Brown and Jens Palsberg:</p>
<blockquote ><p >In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?</p>
<p >We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.</p></blockquote>
<p >Typed self-representation has <a href="http://lambda-the-ultimate.org/node/2438#comment-47966">come up</a> here on LtU <a href="http://lambda-the-ultimate.org/node/4785#comment-76876">in the past</a>. I believe the best self-interpreter available prior to this work was a variant of <a href="http://lambda-the-ultimate.org/node/3993">Barry Jay's SF-calculus</a>, covered in the paper <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.225.1765">Typed Self-Interpretation by Pattern Matching</a> (and more fully developed in <a href="http://www0.cs.ucl.ac.uk/staff/R.Rowe/docs/arb10.pdf">Structural Types for the Factorisation Calculus</a>). These covered statically typed self-interpreters without resorting to undecidable type:type rules.</p>
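For contrast with the typed constructions, the untyped version of the trick is short enough to sketch in Python, using Mogensen's higher-order-abstract-syntax representation of lambda terms and its self-interpreter E, where E⌜t⌝ behaves like t (Python lambdas stand in for the untyped lambda calculus; making this typecheck is exactly the hard part the paper solves for System U):

```python
# Mogensen-style representation: a term is a function of three cases.
#   var(x)    -- a variable occurrence (x will be bound to a value)
#   app(m, n) -- an application of representations
#   lam(f)    -- an abstraction; f maps a representation to the body's
#                representation (higher-order abstract syntax)

def var(x):
    return lambda a, b, c: a(x)

def app(m, n):
    return lambda a, b, c: b(m, n)

def lam(f):
    return lambda a, b, c: c(f)

def E(q):
    """Self-interpreter: recover the behavior of a term from its
    representation."""
    return q(lambda x: x,                        # variable: unwrap value
             lambda m, n: E(m)(E(n)),            # application: interpret both
             lambda f: lambda v: E(f(var(v))))   # abstraction: bind and go on

I_rep = lam(lambda x: x)                   # ⌜λx. x⌝
K_rep = lam(lambda x: lam(lambda y: x))    # ⌜λx. λy. x⌝
```

Since representations are themselves lambda terms, E can be represented and applied to ⌜E⌝; the paper's achievement is doing this, plus intensional operations like a CPS transform, while keeping type checking decidable.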
<p >However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.</p>
Functional, Lambda Calculus, Theory, Type Theory
Thu, 11 Jun 2015 18:45:11 +0000

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
http://lambda-the-ultimate.org/node/5115
<p >The project <a href="http://www.informatik.uni-marburg.de/~pgiarrusso/ILC/">Incremental λ-Calculus</a> is just starting (compared to more mature approaches like <a href="http://www.umut-acar.org/self-adjusting-computation">self-adjusting computation</a>), with a first publication last year.</p>
<p ><a href="http://www.informatik.uni-marburg.de/~pgiarrusso/papers/pldi14-ilc-author-final.pdf">A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation</a><br >
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014</p>
<blockquote ><p >If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of reexecuting the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without reexecuting the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.</p>
<p >We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.</p>
<p >We investigate performance by a case study: We implement in Scala the program transformation, a plugin and improve performance of a nontrivial program by orders of magnitude.</p></blockquote>
<p >I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value <code >v</code> do not live in some common type <code >diff(T)</code>, but rather in a value-dependent type <code >diff(v)</code>. Intuitively, the empty list and a non-empty list have fairly different types of possible changes. This makes change-merging and change-producing operations total, and allows giving them a nice operational theory. Good design, through types.</p>
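<p >To make the derivative idea concrete, here is a hedged toy in Python for a single primitive (the names <code >total</code> and <code >d_total</code> are mine, not the paper's): the derivative maps a change to the input bag directly to a change in the output, so the old result is updated without re-summing the whole collection.</p>

```python
# Incremental update via a derivative: d_total consumes only the change.

def total(bag):
    return sum(bag)

def d_total(bag, d_bag):
    # a "bag change" represented as (added elements, removed elements)
    added, removed = d_bag
    return sum(added) - sum(removed)

xs = [3, 1, 4, 1, 5]
old = total(xs)                   # 14, computed once
change = ([9, 2], [1])            # add 9 and 2, remove one occurrence of 1
new = old + d_total(xs, change)   # incremental: touches only the change
assert new == total([3, 4, 1, 5, 9, 2])  # same as recomputing from scratch
```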
<p >(The program transformation seems related to the program-level <a href="https://who.rocq.inria.fr/Marc.Lasson/articles/RealParam/RealParam.pdf">parametricity</a> transformation. Parametricity abstracts over equality justifications, differentiation over small differences.)</p>Lambda CalculusWed, 04 Feb 2015 10:00:18 +0000Conservation laws for free!
http://lambda-the-ultimate.org/node/5078
<p >In this year's <A href="http://popl.mpi-sws.org/2014/">POPL</A>, <A href="http://bentnib.org">Bob Atkey</A> made a splash by showing how to get <A href="http://bentnib.org/conservation-laws.pdf">from parametricity to conservation laws, via Noether's theorem</A>:</p>
<blockquote ><p >
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy, which by Noether’s theorem is a consequence of a system’s invariance under time-shifting.</p>
<p > In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
</p></blockquote>Category TheoryFunFunctionalLambda CalculusScientific ProgrammingSemanticsTheoryType TheoryTue, 28 Oct 2014 07:52:46 +0000An operational and axiomatic semantics for non-determinism and sequence points in C
http://lambda-the-ultimate.org/node/5046
<p >In a <a href="http://lambda-the-ultimate.org/node/5038#comment-82300">recent LtU discussion</a>, naasking comments that "I always thought languages that don't specify evaluation order should classify possibly effectful expressions that assume an evaluation order to be errors". Recent work on the C language has provided reasonable formal tools to reason about evaluation order for C, which has very complex evaluation-order rules.</p>
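<p >To see the problem naasking points at, consider a toy interpreter (in Python, purely illustrative) that evaluates two effectful operands under different orders: the final state differs, which is exactly why C expressions such as <code >(i = 1) + (i = 2)</code> are undefined rather than merely unspecified.</p>

```python
# The same effectful operands, evaluated in two orders, leave the
# mutable cell in two different final states.

def run(order):
    state = {"i": 0}
    def assign(v):          # models the effectful subexpression `i = v`
        state["i"] = v
        return v
    operands = [lambda: assign(1), lambda: assign(2)]
    if order == "right-to-left":
        operands = operands[::-1]
    for op in operands:
        op()
    return state["i"]       # final value of i depends on the order

print(run("left-to-right"), run("right-to-left"))  # prints: 2 1
```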
<p ><a href="http://robbertkrebbers.nl/research/articles/expressions.pdf">An operational and axiomatic semantics for non-determinism and sequence points in C</a><br >
Robbert Krebbers<br >
2014</p>
<blockquote >
<p >
The C11 standard of the C programming language does not specify the execution order of expressions. Besides, to make more effective optimizations possible (e.g. delaying of side-effects and interleaving), it gives compilers in certain cases the freedom to use even more behaviors than just those of all execution orders.
</p>
<p >
Widely used C compilers actually exploit this freedom given by the C standard for optimizations, so it should be taken seriously in formal verification. This paper presents an operational and axiomatic semantics (based on separation logic) for non-determinism and sequence points in C. We prove soundness of our axiomatic semantics with respect to our operational semantics. This proof has been fully formalized using the Coq proof assistant.
</p>
</blockquote>
<p >One aspect of this work that I find particularly interesting is that it provides a program (separation) logic: there is a set of inference rules for a judgment of the form \(\Delta; J; R \vdash \{P\} s \{Q\}\), where \(s\) is a C statement and \(P, Q\) are logical pre- and post-conditions, such that if it holds, then the statement \(s\) has no undefined behavior related to expression evaluation order. This opens the door to practical verification that existing C programs are safe in a very strong way (this is all validated in the Coq theorem prover).</p>Lambda CalculusSun, 14 Sep 2014 09:36:34 +0000Luca Cardelli Festschrift
http://lambda-the-ultimate.org/node/5044
<p >Earlier this week Microsoft Research Cambridge organised a <A href="http://research.microsoft.com/en-us/events/lucacardellifest/">Festschrift</A> for Luca Cardelli. The preface from the <A href="http://research.microsoft.com/pubs/226237/Luca-Cardelli-Fest-MSR-TR-2014-104.pdf">book</A>:</p>
<blockquote ><p >
Luca Cardelli has made exceptional contributions to the world of programming languages and beyond. Throughout his career, he has re-invented himself every decade or so, while continuing to make true innovations. His achievements span many areas: software; language design, including experimental languages; programming language foundations; and the interaction of programming languages and biology. These achievements form the basis of his lasting scientific leadership and his wide impact.<br >
...<br >
Luca is always asking "what is new", and is always looking to the future. Therefore, we have asked authors to produce short pieces that would indicate where they are today and where they are going. Some of the resulting pieces are short scientific papers, or abridged versions of longer papers; others are less technical, with thoughts on the past and ideas for the future. We hope that they will all interest Luca.
</p></blockquote>
<p >Hopefully the videos will be posted soon.</p>Category TheoryLambda CalculusMisc BooksSemanticsTheoryType TheoryFri, 12 Sep 2014 10:10:08 +0000Cost semantics for functional languages
http://lambda-the-ultimate.org/node/5021
<p >There is an ongoing discussion in LtU (<a href="http://lambda-the-ultimate.org/node/4971#comment-81955">there</a>, and <a href="http://lambda-the-ultimate.org/node/5020">there</a>) on whether RAM and other machine models are inherently a better basis to reason about (time and) memory usage than lambda-calculus and functional languages. <a href="http://www.cs.cmu.edu/~./blelloch/">Guy Blelloch</a> and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far.</p>
<p >A portion of the functional programming community has long been of the opinion that we <em >do not</em> need to refer to machines of the Turing tradition to reason about execution of functional programs. Dynamic semantics (which are often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of (functional) machine model -- just like "abstract machines" can be seen as just machines.</p>
<p >This opinion has been made scientifically precise by various brands of work, including for example implicit (computational) complexity, resource analysis and cost semantics for functional languages. <a href="http://www.cs.cmu.edu/~./blelloch/">Guy Blelloch</a> developed a family of <em >cost semantics</em>, which correspond to annotations of operational semantics of functional languages with new information that captures more intentional behavior of the computation: not only the result, but also running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think of the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead -- or, from another point of view, provided provably cost-effective implementations of functional languages in imperative languages and conversely.</p>
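<p >As a flavor of what such a cost semantics looks like, here is a toy profiling evaluator in the spirit of this line of work (the encoding is mine, not Blelloch's): evaluation returns not just a value but also the total <em >work</em> and the <em >depth</em> (span), where an addition could evaluate its two subtrees in parallel.</p>

```python
# Profiling semantics for a tiny expression language.
# Nodes: ("lit", n) | ("add", left, right)

def profile(e):
    if e[0] == "lit":
        return e[1], 1, 1                    # value, work, depth
    _, left, right = e
    lv, lw, ld = profile(left)
    rv, rw, rd = profile(right)
    # work adds up; depth takes the max, since subtrees may run in parallel
    return lv + rv, lw + rw + 1, max(ld, rd) + 1

tree = ("add", ("add", ("lit", 1), ("lit", 2)),
               ("add", ("lit", 3), ("lit", 4)))
value, work, depth = profile(tree)
print(value, work, depth)  # prints: 10 7 3
```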
<p >This topic has been discussed by Robert Harper in two blog posts, <a href="http://existentialtype.wordpress.com/2011/03/16/languages-and-machines/">Languages and Machines</a> which develops the general argument, and <a href="http://existentialtype.wordpress.com/2012/08/26/yet-another-reason-not-to-be-lazy-or-imperative/">a second post</a> on recent joint work by Guy and him on integrating cache-efficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book "Practical Foundations for Programming Languages".</p>
<p >In chronological order, three papers that are representative of the evolution of this work are the following.</p>
<p ><a href="http://www.cs.cmu.edu/~blelloch/papers/BG95.pdf">Parallelism In Sequential Functional Languages</a><br >
Guy E. Blelloch and John Greiner, 1995.<br >
This paper is focused on parallelism, but is also one of the earliest works carefully relating a lambda-calculus cost semantics to several machine models.</p>
<blockquote ><p >This paper formally studies the question of how much parallelism is available in call-by-value functional languages with no parallel extensions (i.e., the functional subsets of ML or Scheme). In particular we are interested in placing bounds on how much parallelism is available for various problems. To do this we introduce a complexity model, the PAL, based on the call-by-value lambda-calculus. The model is defined in terms of a profiling semantics and measures complexity in terms of the total work and the parallel depth of a computation. We describe a simulation of the A-PAL (the PAL extended with arithmetic operations) on various parallel machine models, including the butterfly, hypercube, and PRAM models and prove simulation bounds. In particular the simulations are work-efficient (the processor-time product on the machines is within a constant factor of the work on the A-PAL), and for P processors the slowdown (time on the machines divided by depth on the A-PAL) is proportional to at most O(log P). We also prove bounds for simulating the PRAM on the A-PAL.</p></blockquote>
<p ><a href="http://www.cs.cmu.edu/~blelloch/papers/SBHG11.pdf">Space Profiling for Functional Programs</a><br >
Daniel Spoonhower, Guy E. Blelloch, Robert Harper, and Phillip B. Gibbons, 2011 (conference version 2008)</p>
<p >This paper clearly defines a notion of ideal memory usage (the set of store locations that are referenced by a value or an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.</p>
<blockquote ><p >We present a semantic space profiler for parallel functional programs. Building on previous work in sequential profiling, our tools help programmers to relate runtime resource use back to program source code. Unlike many profiling tools, our profiler is based on a cost semantics. This provides a means to reason about performance without requiring a detailed understanding of the compiler or runtime system. It also provides a specification for language implementers. This is critical in that it enables us to separate cleanly the performance of the application from that of the language implementation. Some aspects of the implementation can have significant effects on performance. Our cost semantics enables programmers to understand the impact of different scheduling policies while hiding many of the details of their implementations. We show applications where the choice of scheduling policy has asymptotic effects on space use. We explain these use patterns through a demonstration of our tools. We also validate our methodology by observing similar performance in our implementation of a parallel extension of Standard ML</p></blockquote>
<p ><a href="http://www.cs.cmu.edu/~rwh/papers/iolambda/short.pdf">Cache and I/O efficient functional algorithms</a><br >
Guy E. Blelloch, Robert Harper, 2013 (see also the shorter <a href="http://www.cs.cmu.edu/~rwh/papers.htm#iolambda-cacm">CACM version</a>)</p>
<p >The cost semantics in this last work incorporates more notions from garbage collection, to reason about cache-efficient allocation of values -- in that it relies on work on formalizing garbage collection that has been <a href="http://lambda-the-ultimate.org/node/3274">mentioned on LtU</a> before.</p>
<blockquote ><p >
The widely studied I/O and ideal-cache models were developed to account for the large difference in costs to access memory at different levels of the memory hierarchy. Both models are based on a two level memory hierarchy with a fixed size primary memory (cache) of size \(M\), an unbounded secondary memory, and assume unit cost for transferring blocks of size \(B\) between the two. Many algorithms have been analyzed in these models and indeed these models predict the relative performance of algorithms much more accurately than the standard RAM model. The models, however, require specifying algorithms at a very low level requiring the user to carefully lay out their data in arrays in memory and manage their own memory allocation.</p>
<p >In this paper we present a cost model for analyzing the memory efficiency of algorithms expressed in a simple functional language. We show how many algorithms written in standard forms using just lists and trees (no arrays) and requiring no explicit memory layout or memory management are efficient in the model. We then describe an implementation of the language and show provable bounds for mapping the cost in our model to the cost in the ideal-cache model. These bounds imply that purely functional programs based on lists and trees with no special attention to any details of memory layout can be asymptotically as efficient as the carefully designed imperative I/O efficient algorithms. For example we describe an \(O(\frac{n}{B} \log_{M/B} \frac{n}{B})\) cost sorting algorithm, which is optimal in the ideal cache and I/O models.
</p></blockquote>ImplementationLambda CalculusThu, 14 Aug 2014 11:53:25 +0000Pure Subtype Systems
http://lambda-the-ultimate.org/node/4835
<p ><a href="http://redwood.mza.com/~dhutchins/papers/popl10-hutchins.pdf">Pure Subtype Systems</a>, by DeLesley S. Hutchins:</p>
<blockquote ><p >This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality.</p>
<p >Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several problems of practical interest, including mixin modules, virtual classes, and feature-oriented programming.</p>
<p >The cost of using pure subtype systems is the complexity of the meta-theory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is “close enough” to rule out obvious counter-examples. We present it as an open problem in type theory.</p></blockquote>
<p >A thought-provoking take on type theory using subtyping as the foundation for all relations. He collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the side-effect of combining type checking and partial evaluation. Functions can accept "types" and can also return "types".</p>
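<p >As a very loose intuition (nothing like the paper's actual calculus, and with names invented here), one can put singleton values and base types into a single subtype relation, so that <code >3 &lt;: Int &lt;: Any</code> holds and "typing" becomes a special case of subtyping:</p>

```python
# Values, base types and a top type in one subtype relation (toy only:
# the paper additionally handles functions, dependency and reduction).

class Base:
    def __init__(self, name):
        self.name = name

ANY, INT, STR = Base("Any"), Base("Int"), Base("Str")

def subtype(a, b):
    if b is ANY or a is b:
        return True
    if isinstance(a, int) and b is INT:   # singleton: 3 <: Int
        return True
    if isinstance(a, str) and b is STR:
        return True
    return False

assert subtype(3, INT) and subtype(INT, ANY) and subtype(3, ANY)
assert not subtype(INT, 3)   # a type is not below one of its singletons
```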
<p >Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising considering type checking <a href="http://www.cs.nott.ac.uk/~txa/publ/msfp08.pdf">Type:Type is undecidable</a>.</p>
<p ><a href="http://redwood.mza.com/~dhutchins/papers/thesis_final.pdf">Hutchins' thesis</a> is also available for a more thorough treatment. This work is all in pursuit of Hutchins' goal of <a href="http://www.infosun.fim.uni-passau.de/cl/publications/docs/TOPLAS10.pdf">feature-oriented programming</a>.</p>Lambda CalculusObject-FunctionalOOPType TheoryFri, 08 Nov 2013 23:53:24 +0000Dependently-Typed Metaprogramming (in Agda)
http://lambda-the-ultimate.org/node/4804
<p ><A href="http://www.strictlypositive.org">Conor McBride</A> gave an 8-lecture summer course on <A href="http://www.cl.cam.ac.uk/~ok259/agda-course-13/">Dependently typed metaprogramming (in Agda)</A> at the <A href="http://www.cl.cam.ac.uk/">Cambridge University Computer Laboratory</A>:</p>
<blockquote ><p >
Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid dependently-typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double-life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types.
</p></blockquote>
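<p >The universe-construction idea can be caricatured even in an untyped language: describe each datatype's node structure by a code, and write generic operations once, by recursion over codes. Here is a hedged Python sketch (the encoding is made up for illustration; Agda gets to check all of this statically):</p>

```python
# Codes: ("unit",) | ("elem",) | ("rec",) | ("sum", l, r) | ("prod", l, r)
# Data: unit -> (), elem -> payload, rec -> substructure at the same code,
#       sum  -> ("L", x) or ("R", x), prod -> (x, y)

def count(code, data, top):
    """Generic element count, derived from the code alone."""
    tag = code[0]
    if tag == "unit":
        return 0
    if tag == "elem":
        return 1
    if tag == "rec":
        return count(top, data, top)
    if tag == "sum":
        side, payload = data
        return count(code[1] if side == "L" else code[2], payload, top)
    if tag == "prod":
        x, y = data
        return count(code[1], x, top) + count(code[2], y, top)

# lists as a code: nil | cons(elem, rest)
LIST = ("sum", ("unit",), ("prod", ("elem",), ("rec",)))
xs = ("R", ("a", ("R", ("b", ("L", ())))))   # the list ["a", "b"]
print(count(LIST, xs, LIST))  # prints: 2
```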
<p >The <A href="https://github.com/pigworker/MetaprogAgda/blob/master/notes.pdf?raw=true">lecture notes</A>, <A href="https://github.com/pigworker/MetaprogAgda">code</A>, and <A href="http://www.youtube.com/playlist?list=PL_shDsyy0xhKhsBUaVXTJ2uJ78EGBpvQa">video captures</A> are available online. </p>
<p >As with his <A href="http://www.cl.cam.ac.uk/~ok259/agda-course/">previous course</A>, the notes contain many(!) mind-expanding exploratory exercises, some of which are quite challenging.</p>Category TheoryFunctionalLambda CalculusMeta-ProgrammingParadigmsSemanticsTeaching & LearningTheoryType TheoryFri, 30 Aug 2013 07:34:49 +0000