The complexity of abstract machines

I previously wrote about a line of research by Guy Blelloch on cost semantics for functional languages, which lets us make precise claims about the complexity of functional programs without leaving their usual and comfortable programming model (beta-reduction). While the complexity behavior of weak reduction strategies, such as call-by-value and call-by-name, is by now relatively well understood, the lambda-calculus has a much richer range of reduction strategies, in particular those that can reduce under lambda-abstractions, whose complexity behavior is noticeably more subtle and was, until recently, not very well understood. (This has become a practical concern since the rise in usage of proof assistants, which must implement reduction under binders and care a great deal about the complexity of their reduction strategy, as it consumes a lot of time during type- and proof-checking.) Beniamino Accatoli, who has coauthored a lot of work in this area, recently published on arXiv a new paper of survey quality, which is a good introduction to this line of work, with pointers to the rest of the literature.
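To make the cost question concrete, here is a tiny illustrative interpreter (my own sketch, not from the paper) that counts beta-steps under the two classic weak strategies. Reduction under lambdas, the subject of the surveyed work, is precisely what these weak evaluators avoid; substitution here is naive and assumes no variable capture.

```python
# Cost-instrumented weak reduction for the pure lambda-calculus.
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a).
# Substitution is naive: it assumes the terms involved cause no capture.

def subst(x, s, t):
    tag = t[0]
    if tag == "var":
        return s if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(x, s, t[2]))
    return ("app", subst(x, s, t[1]), subst(x, s, t[2]))

def cbn(t, steps=0):
    """Call-by-name: leftmost-outermost, never under lambdas."""
    if t[0] == "app":
        f, steps = cbn(t[1], steps)
        if f[0] == "lam":
            return cbn(subst(f[1], t[2], f[2]), steps + 1)
        return ("app", f, t[2]), steps
    return t, steps

def cbv(t, steps=0):
    """Call-by-value: evaluate the argument before substituting it."""
    if t[0] == "app":
        f, steps = cbv(t[1], steps)
        a, steps = cbv(t[2], steps)
        if f[0] == "lam":
            return cbv(subst(f[1], a, f[2]), steps + 1)
        return ("app", f, a), steps
    return t, steps

I = ("lam", "z", ("var", "z"))
K = ("lam", "x", ("lam", "y", ("var", "x")))
expensive = ("app", I, ("app", I, I))       # costs 2 beta-steps on its own
t = ("app", ("app", K, I), expensive)       # K I expensive: argument unused
print(cbn(t)[1])  # 2: CBN discards the unused argument without evaluating it
print(cbv(t)[1])  # 4: CBV pays for the argument anyway
```

Already at this weak level the step counts of the two strategies diverge; the surveyed work asks how to account for such costs once reduction is also allowed under binders.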
By gasche at 2017-01-12 01:09 | Lambda Calculus | 27255 reads
Philip Wadler: Category Theory for the Working Hacker

Nothing you don't already know, if you are into this sort of thing (and many if not most LtUers are), but a quick way to get the basic idea if you are not. Wadler has papers that explain Curry-Howard better, and the category theory content here is very basic, but it's an easy listen that will give you the fundamental points if you still wonder what this category thing is all about.

To make this a bit more fun for those already in the know: what is totally missing from the talk (understandably, given time constraints) is why this should interest the "working hacker". So how about pointing out a few cool uses/ideas that discerning hackers will appreciate? Go for it!

By Ehud Lamm at 2016-08-07 17:26 | Category Theory | Lambda Calculus | Semantics | 106 comments | 39210 reads
Fully Abstract Compilation via Universal Embedding

Fully Abstract Compilation via Universal Embedding, by Max S. New, William J. Bowman, and Amal Ahmed:
Potentially a promising step forward toward secure multi-language runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures here and here. The paper also provides a comprehensive review of the associated literature: various means of protection, back-translations, embeddings, etc.

By naasking at 2016-07-27 15:57 | Lambda Calculus | Semantics | Theory | Type Theory | 5 comments | 36767 reads
Progress on Gradual Typing

Among many interesting works, POPL 2016 features a number of nice articles on gradual typing.
The Gradualizer: a methodology and algorithm for generating gradual type systems
One can think of the Gradualizer as a kind of meta-programming algorithm that takes a type system as input and returns a gradual version of this type system as output. I find it interesting that these type systems are encoded as lambda-prolog programs (a notable use case for functional logic programming). This is a very nice way to bridge the gap between describing a transformation that is mechanizable "in principle" and a running implementation.

An interesting phenomenon that happens once you want to implement these ideas in practice is that it forces the authors to define precisely many intuitions everyone has when reading the description of a type system as a system of inference rules. These intuitions are, broadly, about the relation between the static and the dynamic semantics of a system, the flow of typing information, and the flow of values; two occurrences of the same type in a typing rule may play very different roles, some of which are discussed in this article.
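The key ingredient a gradualized system adds is well known from the gradual-typing literature: typing rules that compared types for equality instead compare them for consistency, which treats the unknown type as compatible with anything. A minimal sketch of that relation (my own encoding, not the Gradualizer's output):

```python
# The consistency relation that gradual type systems substitute for type
# equality. Base types are strings, function types are ("->", dom, cod),
# and DYN stands for the unknown type. Illustrative sketch only.

DYN = "?"

def consistent(s, t):
    """s ~ t: the two types agree wherever both are statically known."""
    if s == DYN or t == DYN:
        return True
    if isinstance(s, tuple) and isinstance(t, tuple):
        return consistent(s[1], t[1]) and consistent(s[2], t[2])
    return s == t

# Consistency descends structurally into function types:
print(consistent(("->", DYN, "int"), ("->", "bool", "int")))  # True
# Unlike equality, it is not transitive:
print(consistent("int", DYN), consistent(DYN, "bool"))        # True True
print(consistent("int", "bool"))                              # False
```

The non-transitivity is exactly why naive "gradualizations" of typing rules go wrong, and why a systematic method like the Gradualizer is valuable.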
Is Sound Gradual Typing Dead?
In a fully dynamic system, type checks are often superficial (only the existence of a particular field is tested) and performed lazily (the check happens when the field is accessed). Gradual typing changes this, as typing assumptions can be made before the value is used, and can range over parts of the program that are not exercised in all execution branches. This has the potentially counter-intuitive consequence that the overhead of runtime checks may be noticeably larger than in fully dynamic systems. This paper presents a methodology to evaluate the "annotation space" of a Typed Racket program, studying how the possible choices of which parts to annotate affect overall performance.

Many would find this article surprisingly grounded in reality for a POPL paper. It puts the spotlight on a question that is too rarely discussed, and could be presented as a strong illustration of why it matters to be serious about implementing our research.
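A sound gradual system enforces those early typing assumptions with casts, and a cast on a higher-order value cannot be checked immediately: it must wrap the function and re-check every call. A hypothetical sketch of this mechanism (the names `cast` and `Fun` are mine, not Typed Racket's):

```python
# Why sound gradual typing pays at runtime: a value crossing a
# typed/untyped boundary gets a cast, and for function types the cast
# wraps the value so that every single call is checked. Illustrative only.

class Fun:
    """A function type `dom -> cod`, used as a cast target."""
    def __init__(self, dom, cod):
        self.dom, self.cod = dom, cod

def cast(v, ty, blame):
    if isinstance(ty, Fun):
        # Higher-order cast: undecidable now, so defer it to each call site.
        def wrapped(x):
            return cast(v(cast(x, ty.dom, blame)), ty.cod, blame)
        return wrapped
    if not isinstance(v, ty):
        raise TypeError(f"blame {blame}: expected {ty.__name__}, got {v!r}")
    return v

# Untyped code hands a function to a typed context expecting int -> int:
untyped_f = lambda x: x + 1
typed_f = cast(untyped_f, Fun(int, int), "boundary 1")
print(typed_f(3))  # 4 -- but every call pays an argument and a result check
```

Multiply such wrappers across every typed/untyped boundary and the overheads measured in the paper become unsurprising.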
Abstracting Gradual Typing
At first sight this description seems to overlap with the Gradualizer work cited above, but in fact the two approaches are highly complementary. The Abstracting Gradual Typing effort seems mechanizable, but it is far from being implementable in practice as is done in the Gradualizer work: it remains a translation to be carried out on paper by a skilled expert, although, as is standard in abstract interpretation work, many aspects are deeply computational, such as computing the best abstractions. On the other hand, it is extremely powerful for guiding system design, as it provides not only a static semantics for a gradual system but also a model dynamic semantics.

The central idea of the paper is to think of a missing type annotation not as "a special Dyn type that can contain anything" but as "a specific static type, except I don't know which one it is". A program is then to be understood as a family of potential programs, one for each possible static choice that could have been put there. Not all choices are consistent (type soundness imposes constraints on different missing annotations), so we can study the space of possible interpretations, using only the original, non-gradually-typed system to make those deductions. An obvious consequence is that a static type error occurs exactly when we can prove that there is no possible consistent typing. A much less obvious contribution is that, when there is a consistent set of types, we can consider this set as "evidence" that the program may be correct, and transport that evidence along values while running the program. This gives a runtime semantics for the gradual system that automatically does what it should, though it would, of course, fare terribly in the performance harness described above.

Some context

The Abstracting Gradual Typing work feels like a real breakthrough, and it is interesting to idly wonder about which previous works in particular enabled this advance. I would make two guesses.
First, there was a very nice conceptualization work in 2015, drawing general principles from existing gradual typing systems, and highlighting in particular a specific difficulty in designing dynamic semantics for gradual systems (removing annotations must not make a program fail more).
Refined Criteria for Gradual Typing
Siek and Taha [2006] coined the term gradual typing to describe a theory for integrating static and dynamic typing within a single language that 1) puts the programmer in control of which regions of code are statically or dynamically typed, and 2) enables the gradual evolution of code between the two typing disciplines. Since 2006, the term gradual typing has become quite popular, but its meaning has become diluted to encompass anything related to the integration of static and dynamic typing. This dilution is partly the fault of the original paper, which provided an incomplete formal characterization of what it means to be gradually typed. In this paper we draw a crisp line in the sand that includes a new formal property, named the gradual guarantee, that relates the behavior of programs that differ only with respect to their type annotations. We argue that the gradual guarantee provides important guidance for designers of gradually typed languages. We survey the gradual typing literature, critiquing designs in light of the gradual guarantee. We also report on a mechanized proof that the gradual guarantee holds for the Gradually Typed Lambda Calculus.

Second, the marriage of gradual typing and abstract interpretation had already been consummated in previous work (2014), studying the gradual classification of effects rather than types.
A Theory of Gradual Effect Systems
Effect systems have the potential to help software developers, but their practical adoption has been very limited. We conjecture that this limited adoption is due in part to the difficulty of transitioning from a system where effects are implicit and unrestricted to a system with a static effect discipline, which must settle for conservative checking in order to be decidable. To address this hindrance, we develop a theory of gradual effect checking, which makes it possible to incrementally annotate and statically check effects, while still rejecting statically inconsistent programs. We extend the generic type-and-effect framework of Marino and Millstein with a notion of unknown effects, which turns out to be significantly more subtle than unknown types in traditional gradual typing. We appeal to abstract interpretation to develop and validate the concepts of gradual effect checking. We also demonstrate how an effect system formulated in Marino and Millstein's framework can be automatically extended to support gradual checking.

Difficulty rewards: gradual effects are more difficult than gradual simply-typed systems, so you get strong and powerful ideas when you study them. The choice of working on effect systems is also useful in practice, as nicely put by Philip Wadler in the conclusion of his 2015 article A Complement to Blame:

I [Philip Wadler] always assumed gradual types were to help those poor schmucks using untyped languages to migrate to typed languages. I now realize that I am one of the poor schmucks. My recent research involves session types, a linear type system that declares protocols for sending messages along channels. Sending messages along channels is an example of an effect. Haskell uses monads to track effects (Wadler, 1992), and a few experimental languages such as Links (Cooper et al., 2007), Eff (Bauer and Pretnar, 2014), and Koka (Leijen, 2014) support effect typing.
But, by and large, every programming language is untyped when it comes to effects. To encourage migration from legacy code to code with effect types, such as session types, some form of gradual typing may be essential.

Self-Representation in Girard's System U

Self-Representation in Girard's System U, by Matt Brown and Jens Palsberg:
Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). These covered statically typed self-interpreters without resorting to undecidable Type:Type rules. However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.

By naasking at 2015-06-11 18:45 | Functional | Lambda Calculus | Theory | Type Theory | 28 comments | 10930 reads
A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation

The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.

A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
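The promise of static differentiation fits in one equation: a derivative f′ of f satisfies f (a ⊕ da) = (f a) ⊕ (f′ a da), so a cached result can be patched with a small change instead of recomputed from scratch. A hand-written instance for summing a bag of numbers (illustrative only; the paper derives such derivatives mechanically from the program text):

```python
# Static differentiation in miniature: `dtotal` is a derivative of `total`
# with respect to bag insertions, so patching the old output agrees with
# recomputing on the changed input. Hand-written sketch, not derived code.

def total(xs):
    return sum(xs)

def dtotal(xs, dxs):
    # A change to a bag is modeled here as a bag of insertions; note the
    # derivative never touches the (possibly huge) original input `xs`.
    return sum(dxs)

xs = list(range(1_000_000))
dxs = [5, 7]
old = total(xs)
incremental = old + dtotal(xs, dxs)       # patch the cached result: O(|dxs|)
assert incremental == total(xs + dxs)     # agrees with full recomputation
print(incremental - old)  # 12
```

The interesting (and hard) part of the paper is doing this transformation for arbitrary higher-order programs, where changes to functions must themselves be first-class values.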
I like the nice dependent types: a key idea of this work is that the type of "diffs" possible from a value depends on that value itself. (The program transformation seems related to the program-level parametricity transformation: parametricity abstracts over equality justifications, differentiation over small differences.)

Conservation laws for free!

In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:
By Ohad Kammar at 2014-10-28 07:52 | Category Theory | Fun | Functional | Lambda Calculus | Scientific Programming | Semantics | Theory | Type Theory | 5 comments | 15491 reads
An operational and axiomatic semantics for non-determinism and sequence points in C

In a recent LtU discussion, naasking comments that "I always thought languages that don't specify evaluation order should classify possibly effectful expressions that assume an evaluation order to be errors". Recent work on the C language has provided reasonable formal tools to reason about evaluation order in C, which has very complex evaluation-order rules.

An operational and axiomatic semantics for non-determinism and sequence points in C
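Python fixes left-to-right evaluation, but it can simulate the hazard C leaves open: a pair of effectful operands whose sum depends on which is evaluated first. This is exactly the class of expression (think `f() + g()` with interfering side effects) that such a semantics must flag. A toy demonstration, entirely my own:

```python
# An order-dependent effectful expression, evaluated in both possible
# orders. In C, where the order of `inc() + double()` is unspecified,
# such an expression has no single well-defined result.

def make_counter():
    state = {"i": 0}
    def inc():            # effect: i += 1, returns new i
        state["i"] += 1
        return state["i"]
    def double():         # effect: i *= 2, returns new i
        state["i"] *= 2
        return state["i"]
    return inc, double

def eval_both_orders():
    results = set()
    for order in ("left-first", "right-first"):
        inc, double = make_counter()      # fresh state for each run
        if order == "left-first":
            a = inc(); b = double()
        else:
            b = double(); a = inc()
        results.add(a + b)
    return results

print(eval_both_orders())  # {1, 3}: two distinct outcomes, order-dependent
```

A semantics that makes evaluation order explicitly non-deterministic lets one prove that a given program produces the same result under every order, and reject (or flag as undefined) programs like this one.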
One aspect of this work that I find particularly interesting is that it provides a program (separation) logic: there is a set of inference rules for a judgment of the form \(\Delta; J; R \vdash \{P\} s \{Q\}\), where \(s\) is a C statement and \(P, Q\) are logical pre- and postconditions, such that if the judgment holds, then the statement \(s\) has no undefined behavior related to expression evaluation order. This opens the door to practical verification that existing C programs are safe in a very strong way (this is all validated in the Coq theorem prover).

Luca Cardelli Festschrift

Earlier this week Microsoft Research Cambridge organised a Festschrift for Luca Cardelli. The preface from the book:
Hopefully the videos will be posted soon.

By Ohad Kammar at 2014-09-12 10:10 | Category Theory | Lambda Calculus | Misc Books | Semantics | Theory | Type Theory | 4 comments | 6746 reads
Cost semantics for functional languages

There is an ongoing discussion on LtU (there, and there) on whether RAM and other machine models are inherently a better basis for reasoning about (time and) memory usage than the lambda-calculus and functional languages. Guy Blelloch and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far.

A portion of the functional programming community has long been of the opinion that we do not need to refer to machines of the Turing tradition to reason about the execution of functional programs. Dynamic semantics (which are often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of a (functional) machine model, just as "abstract machines" can be seen as just machines. This opinion has been made scientifically precise by various lines of work, including, for example, implicit (computational) complexity, resource analysis, and cost semantics for functional languages.

Guy Blelloch developed a family of cost semantics, which correspond to annotations of the operational semantics of functional languages with new information that captures more intensional behavior of the computation: not only the result, but also running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think about the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead; or, from another point of view, they have provided provably cost-effective implementations of functional languages in imperative languages, and conversely.
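To give a flavor of what a cost semantics tracks, here is a toy profiler in the spirit of Blelloch's work/span cost model (my own sketch, not code from the papers): evaluation returns not just a value but also `work` (total operations) and `span` (the critical path, assuming the two recursive calls run in parallel).

```python
# A toy work/span cost semantics: a divide-and-conquer sum annotated with
# its sequential cost (work) and its parallel critical path (span).
# Illustrative sketch in the spirit of Blelloch's cost models.

def psum(xs):
    """Returns (value, work, span) for a parallel-style sum of xs."""
    if len(xs) <= 1:
        v = xs[0] if xs else 0
        return v, 1, 1
    mid = len(xs) // 2
    v1, w1, s1 = psum(xs[:mid])
    v2, w2, s2 = psum(xs[mid:])
    # The '+1' accounts for the combining addition; with both halves run
    # in parallel, work adds up but span only takes the deeper branch.
    return v1 + v2, w1 + w2 + 1, max(s1, s2) + 1

v, work, span = psum(list(range(8)))
print(v, work, span)  # 28 15 4: work grows linearly, span logarithmically
```

Theorems in this line of work then relate such source-level work/span figures to running time on concrete machine models, e.g. via Brent-style scheduling bounds, without the programmer ever leaving the functional semantics.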
This topic has been discussed by Robert Harper in two blog posts: Language and Machines, which develops the general argument, and a second post on recent joint work by Guy and him on integrating cache efficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book Practical Foundations for Programming Languages.

In chronological order, three papers that are representative of the evolution of this work are the following.

Parallelism In Sequential Functional Languages
Space Profiling for Functional Programs

This paper clearly defines a notion of ideal memory usage (the set of store locations that are referenced by a value or an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.
Cache and I/O efficient functional algorithms

The cost semantics in this last work incorporates more notions from garbage collection, to reason about cache-efficient allocation of values; it relies on work on formalizing garbage collection that has been mentioned on LtU before.
By gasche at 2014-08-14 11:53 | Implementation | Lambda Calculus | 33 comments | 30542 reads
