Lambda Calculus

LAMBDA: The ultimate Excel worksheet function

Post by Andy Gordon and Simon Peyton Jones on LAMBDA, giving Excel users the ability to define their own functions.
Sequent Calculus as a Compiler Intermediate Language
By Andris Birkmanis at 2018-04-02 17:06 | Functional | General | Lambda Calculus | Semantics | Type Theory | 2 comments
Implementing Algebraic Effects in C

Implementing Algebraic Effects in C, by Daan Leijen:
Another great paper by Daan Leijen, this time on a C library with immediate practical applications at Microsoft. The applicability is much wider though, since it's an ordinary C library for defining and using arbitrary algebraic effects. It looks pretty usable and is faster and more general than most of the C coroutine libraries that already exist. It's a nice addition to your toolbox for creating language runtimes in C, particularly since it provides a unified, structured way of creating and handling a variety of sophisticated language behaviours, like async/await, in ordinary C with good performance. There has been considerable discussion here of C and low-level languages with green threads, coroutines and so on, so hopefully others will find this useful!

By naasking at 2017-07-27 13:50 | Effects | Implementation | Lambda Calculus | Semantics
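The post above is about a plain C library, and I won't reproduce its API here. As a rough analogy only, here is what "defining and using an algebraic effect" looks like with OCaml 5's built-in effect handlers (OCaml ≥ 5.0 assumed); the effect name `Ask` and the constant it resolves to are invented for the example, and this is not the C library's interface.

```ocaml
open Effect
open Effect.Deep

(* Declare an effect: ask the environment for an int. *)
type _ Effect.t += Ask : int Effect.t

(* Client code performs the effect without knowing how it will be handled. *)
let client () = perform Ask + perform Ask

(* The handler gives Ask its meaning: resume the captured continuation
   with the constant 21, so [run ()] evaluates to 42. *)
let run () =
  match_with client ()
    { retc = (fun v -> v);
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k 21)
        | _ -> None) }
```

The point of the analogy is the separation the post describes: the code that performs the effect is ordinary sequential code, and the handler (here, the record passed to match_with) is where async/await-style behaviour would be plugged in.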
Imperative Functional Programs that Explain their Work
Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al., where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach and a proof-of-concept implementation and experimental evaluation.

Dynamic slicing answers the following question: if I only care about this specific part of the trace of my program execution, what are the only parts of the source program that I need to look at? For example, if the output of the program is a pair, can you show me the parts of the source that impacted the computation of the first component? If a part of the code is not involved in the trace, or not in the part of the trace that you care about, it is removed from the partial code returned by slicing.

What I like about this work is that there is a very nice algebraic characterization of what slicing is (the Galois connection), which guides you in how you implement your slicing algorithm, and also serves as a specification to convince yourself that it is correct -- and "optimal": it actually removes all the program parts that are irrelevant. This characterization already existed in previous work (Functional Programs that Explain Their Work, Roly Perera, Umut Acar, James Cheney, Paul Blain Levy, 2012), but it was done in a purely functional setting. It wasn't clear (to me) whether the nice formulation was restricted to this nice language, or whether the technique itself would scale to a less structured language. This paper extends it to effectful ML (mutable references and exceptions), and there it is much easier to see that it remains elegant and yet can scale to typical effectful programming languages.

The key to the algebraic characterization is to recognize two order structures, one on source program fragments and the other on traces. Program fragments are programs with holes, and a fragment is smaller than another if it has more holes. You can think of a hole as "I don't know -- or I don't care -- what the program does in this part", so the order is "being more or less defined". Traces are also partial traces with holes, where the holes mean "I don't know -- or I don't care -- what happens in this part of the trace". The double "don't know" and "don't care" nature of the ordering is essential: the Galois connection specifies a slicer (that goes from the part of a trace you care about to the parts of a program you should care about) by relating it to an evaluator (that goes from the part of the program you know about to the parts of the trace you can know about). This specification is simple because we are all familiar with what evaluators are.
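To make the "programs with holes" ordering concrete, here is a toy OCaml sketch of my own (not the paper's calculus): a fragment is smaller the less it commits to, and a slice is a smaller fragment that still explains the part of the output you care about. The constructors and the example program are invented for illustration.

```ocaml
(* Program fragments with holes, ordered by "definedness". *)
type expr =
  | Hole                      (* "I don't know / don't care what goes here" *)
  | Var of string
  | Lam of string * expr
  | App of expr * expr
  | Pair of expr * expr

(* [leq p q] holds when p is obtained from q by punching more holes,
   i.e. p is less defined than q. *)
let rec leq p q =
  match p, q with
  | Hole, _ -> true
  | Var x, Var y -> x = y
  | Lam (x, p'), Lam (y, q') -> x = y && leq p' q'
  | App (p1, p2), App (q1, q2)
  | Pair (p1, p2), Pair (q1, q2) -> leq p1 q1 && leq p2 q2
  | _ -> false

(* If we only care about the first component of the output of
   (fun x -> (x, x)) y, the slice can hole out the second component: *)
let program = App (Lam ("x", Pair (Var "x", Var "x")), Var "y")
let slice   = App (Lam ("x", Pair (Var "x", Hole)), Var "y")
let () = assert (leq slice program)
```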
Type Systems as Macros

Type Systems as Macros, by Stephen Chang, Alex Knauth, Ben Greenman:

This looks pretty awesome considering it's not limited to simply typed languages, but extends all the way to System F and F-omega! Even better, they can reuse previous type systems to define new ones, thereby reducing the effort to implement more expressive type systems. All code and further details available here, and here's a blog post where Ben Greenman further discusses the related "type tailoring", and of course, these are both directly related to Active Libraries.

Taken to its extreme: why not have an assembler with a powerful macro system of this sort as your host language, and build every high-level language on top of it? I'm not sure this approach would extend that far, but it's an interesting idea. You'd need a cpp-like, highly portable macro tool, and porting to a new platform would consist of writing architecture-specific macros for some core language, like System F. This work may also conceptually dovetail with another thread discussing fexprs and compilation.

By naasking at 2017-04-19 23:38 | DSL | Functional | Lambda Calculus | Meta-Programming | Type Theory | 15 comments
The complexity of abstract machines

I previously wrote about a line of research by Guy Blelloch on cost semantics for functional languages, which lets us make precise claims about the complexity of functional programs without leaving their usual and comfortable programming model (beta-reduction). While the complexity behavior of weak reduction strategies, such as call-by-value and call-by-name, is by now relatively well understood, the lambda-calculus has a much richer range of reduction strategies, in particular those that can reduce under lambda-abstractions, whose complexity behavior is noticeably more subtle and was, until recently, not very well understood. (This has become a practical concern with the rise of proof assistants, which must implement reduction under binders and care a great deal about the complexity of their reduction strategy, since it consumes a lot of time during type/proof-checking.) Beniamino Accattoli, who has co-authored much of the work in this area, recently published on arXiv a new paper of survey quality that is a good introduction to this line of work, with further pointers to the literature.
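As a very small illustration of the "cost semantics" idea mentioned above, charging one unit per beta step in the programmer-visible model, here is a toy, cost-instrumented call-by-value evaluator in OCaml. It is only a sketch of mine under that assumption; the cited work is precisely about the harder cases, such as strategies that reduce under binders.

```ocaml
(* Toy call-by-value evaluator whose "cost" is the number of beta steps. *)
type term = Var of string | Lam of string * term | App of term * term

type value = Closure of string * term * env
and env = (string * value) list

let steps = ref 0          (* beta steps taken in the current run *)

let rec eval env = function
  | Var x -> List.assoc x env
  | Lam (x, body) -> Closure (x, body, env)
  | App (f, arg) ->
      let (Closure (x, body, cenv)) = eval env f in
      let v = eval env arg in
      incr steps;                                  (* one beta step *)
      eval ((x, v) :: cenv) body

(* (fun x -> x) ((fun y -> y) (fun z -> z)) takes exactly two beta steps. *)
let () =
  steps := 0;
  let id = Lam ("z", Var "z") in
  ignore (eval [] (App (Lam ("x", Var "x"), App (Lam ("y", Var "y"), id))));
  assert (!steps = 2)
```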
By gasche at 2017-01-12 01:09 | Lambda Calculus
Philip Wadler: Category Theory for the Working Hacker

Nothing you don't already know, if you are into this sort of thing (and many if not most LtU-ers are), but a quick way to get the basic idea if you are not. Wadler has papers that explain Curry-Howard better, and the category theory content here is very basic -- but it's an easy listen that will give you the fundamental points if you still wonder what this category thing is all about.

To make this a bit more fun for those already in the know: what is totally missing from the talk (understandably, given time constraints) is why this should interest the "working hacker". So how about pointing out a few cool uses/ideas that discerning hackers will appreciate? Go for it!

By Ehud Lamm at 2016-08-07 17:26 | Category Theory | Lambda Calculus | Semantics | 106 comments
Fully Abstract Compilation via Universal Embedding

Fully Abstract Compilation via Universal Embedding, by Max S. New, William J. Bowman, and Amal Ahmed:
Potentially a promising step forward to secure multilanguage runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures here and here. The paper also provides a comprehensive review of the associated literature, such as various means of protection, back-translations, embeddings, etc.

By naasking at 2016-07-27 15:57 | Lambda Calculus | Semantics | Theory | Type Theory | 5 comments
Progress on Gradual Typing

Among many interesting works, the POPL 2016 papers have a bunch of nice articles on gradual typing.
The Gradualizer: a methodology and algorithm for generating gradual type systems
One can think of the Gradualizer as a kind of meta-programming algorithm that takes a type system as input and returns a gradual version of this type system as output. I find it interesting that these type systems are encoded as lambda-prolog programs (a notable use case for higher-order logic programming). This is a very nice way to bridge the gap between describing a transformation that is "in principle" mechanizable and a running implementation.

An interesting phenomenon that arises once you want to implement these ideas in practice is that the authors were forced to define precisely many intuitions everyone has when reading the description of a type system as a system of inference rules. These intuitions are, broadly, about the relation between the static and the dynamic semantics of a system, the flow of typing information, and the flow of values; two occurrences of the same type in a typing rule may play very different roles, some of which are discussed in this article.
Is Sound Gradual Typing Dead?
In a fully dynamic system, type checks are often superficial (only the existence of a particular field is tested) and done lazily (the check is made when the field is accessed). Gradual typing changes this, as typing assumptions can be made earlier than when the value is used, and can range over parts of the program that are not exercised in all execution branches. This has the potentially counter-intuitive consequence that the overhead of runtime checks may be noticeably larger than for fully dynamic systems. This paper presents a methodology to evaluate the "annotation space" of a Typed Racket program, studying how the possible choices of which parts to annotate affect overall performance.

Many would find this article surprisingly grounded in reality for a POPL paper. It puts the spotlight on a question that is too rarely discussed, and could be presented as a strong illustration of why it matters to be serious about implementing our research.
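Here is a toy OCaml model of mine (not Typed Racket's actual implementation) of why sound gradual boundaries can cost more than fully dynamic checking: a first-order cast is a single check at the boundary, but a higher-order cast must wrap the function and pay a check at every later call. All names below are invented for the example.

```ocaml
(* Values crossing from untyped to typed code, tagged at runtime. *)
type dyn =
  | DInt of int
  | DBool of bool
  | DFun of (dyn -> dyn)

exception Blame of string

let to_int = function DInt n -> n | _ -> raise (Blame "expected int")

(* First-order cast: one check, performed once at the boundary. *)
let cast_int (v : dyn) : int = to_int v

(* Higher-order cast to (int -> int): the check cannot be done eagerly,
   so the function is wrapped and a check is paid at every application. *)
let cast_int_to_int (v : dyn) : int -> int =
  match v with
  | DFun f -> fun n -> to_int (f (DInt n))
  | _ -> raise (Blame "expected function")

(* An untyped function crossing into typed code: *)
let untyped_succ = DFun (function DInt n -> DInt (n + 1) | _ -> DBool false)
let typed_succ : int -> int = cast_int_to_int untyped_succ
let () = assert (typed_succ 41 = 42)
```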
Abstracting Gradual Typing
At first sight this description seems to overlap with the Gradualizer work cited above, but in fact the two approaches are highly complementary. The Abstracting Gradual Typing approach seems mechanizable in principle, but it is far from being implementable in practice as done in the Gradualizer work; it remains a translation to be done on paper by a skilled expert, although, as is standard in abstract interpretation work, many aspects are deeply computational -- computing the best abstractions. On the other hand, it is extremely powerful for guiding system design, as it provides not only a static semantics for a gradual system, but also a model dynamic semantics.

The central idea of the paper is to think of a missing type annotation not as "a special Dyn type that can contain anything" but as "a specific static type, but I don't know which one it is". A program is then to be understood as a family of potential programs, one for each possible static choice that could have been put there. Not all choices are consistent (type soundness imposes constraints on different missing annotations), so we can study the space of possible interpretations -- using only the original, non-gradually-typed system to make those deductions. An obvious consequence is that a static type error occurs exactly when we can prove that there is no possible consistent typing (a minimal sketch of this consistency idea appears at the end of this post). A much less obvious contribution is that, when there is a consistent set of types, we can consider this set as "evidence" that the program may be correct, and transport evidence along values while running the program. This gives a runtime semantics for the gradual system that automatically does what it should -- but one that, of course, would fare terribly in the performance harness described above.

Some context

The Abstracting Gradual Typing work feels like a real breakthrough, and it is interesting to idly wonder about which previous works in particular enabled this advance. I would make two guesses. First, there was a very nice conceptualization work in 2015, drawing general principles from existing gradual typing systems, and highlighting in particular a specific difficulty in designing dynamic semantics for gradual systems (removing annotations must not make a program fail more).
Refined Criteria for Gradual Typing
Siek and Taha [2006] coined the term gradual typing to describe a theory for integrating static and dynamic typing within a single language that 1) puts the programmer in control of which regions of code are statically or dynamically typed and 2) enables the gradual evolution of code between the two typing disciplines. Since 2006, the term gradual typing has become quite popular but its meaning has become diluted to encompass anything related to the integration of static and dynamic typing. This dilution is partly the fault of the original paper, which provided an incomplete formal characterization of what it means to be gradually typed. In this paper we draw a crisp line in the sand that includes a new formal property, named the gradual guarantee, that relates the behavior of programs that differ only with respect to their type annotations. We argue that the gradual guarantee provides important guidance for designers of gradually typed languages. We survey the gradual typing literature, critiquing designs in light of the gradual guarantee. We also report on a mechanized proof that the gradual guarantee holds for the Gradually Typed Lambda Calculus.

Second, the marriage of gradual typing and abstract interpretation had already been consummated in previous work (2014), studying the gradual classification of effects rather than types.
A Theory of Gradual Effect Systems
Effect systems have the potential to help software developers, but their practical adoption has been very limited. We conjecture that this limited adoption is due in part to the difficulty of transitioning from a system where effects are implicit and unrestricted to a system with a static effect discipline, which must settle for conservative checking in order to be decidable. To address this hindrance, we develop a theory of gradual effect checking, which makes it possible to incrementally annotate and statically check effects, while still rejecting statically inconsistent programs. We extend the generic type-and-effect framework of Marino and Millstein with a notion of unknown effects, which turns out to be significantly more subtle than unknown types in traditional gradual typing. We appeal to abstract interpretation to develop and validate the concepts of gradual effect checking. We also demonstrate how an effect system formulated in Marino and Millstein’s framework can be automatically extended to support gradual checking.

Difficulty has its rewards: gradual effects are harder than gradual simply-typed systems, so you get strong and powerful ideas when you study them. The choice of working on effect systems is also useful in practice, as nicely put by Philip Wadler in the conclusion of his 2015 article A Complement to Blame:

I [Philip Wadler] always assumed gradual types were to help those poor schmucks using untyped languages to migrate to typed languages. I now realize that I am one of the poor schmucks. My recent research involves session types, a linear type system that declares protocols for sending messages along channels. Sending messages along channels is an example of an effect. Haskell uses monads to track effects (Wadler, 1992), and a few experimental languages such as Links (Cooper et al., 2007), Eff (Bauer and Pretnar, 2014), and Koka (Leijen, 2014) support effect typing. But, by and large, every programming language is untyped when it comes to effects. To encourage migration from legacy code to code with effect types, such as session types, some form of gradual typing may be essential.
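As promised in the Abstracting Gradual Typing discussion above, here is a minimal OCaml sketch of the usual consistency relation on gradual types. It is my own toy version, not that paper's formal development: the unknown type stands for "some static type I do not know yet", so it is consistent with every type, and a static error is reported only when no choice of static types could make the program well typed.

```ocaml
(* Gradual types: Unknown is the missing annotation, often written ?. *)
type gtype =
  | Unknown
  | Int
  | Bool
  | Fun of gtype * gtype

(* Consistency: like equality, except Unknown is compatible with anything.
   Note it is deliberately not transitive. *)
let rec consistent a b =
  match a, b with
  | Unknown, _ | _, Unknown -> true
  | Int, Int | Bool, Bool -> true
  | Fun (a1, a2), Fun (b1, b2) -> consistent a1 b1 && consistent a2 b2
  | _ -> false

(* Applying (fun (x : ?) -> ...) to a Bool is accepted, because ? might be
   Bool; applying (fun (x : Int) -> ...) to a Bool is a static error. *)
let () = assert (consistent Unknown Bool)
let () = assert (not (consistent Int Bool))
```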
Self-Representation in Girard’s System U

Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:

Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). These covered statically typed self-interpreters without resorting to undecidable type:type rules. However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem.

Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.

By naasking at 2015-06-11 18:45 | Functional | Lambda Calculus | Theory | Type Theory | 28 comments