Lambda Calculus
Fully Abstract Compilation via Universal Embedding by Max S. New, William J. Bowman, and Amal Ahmed:
A fully abstract compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context.
We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation—specifically, closure conversion of simply typed λ-calculus with recursive types—uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a deep embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.
Potentially a promising step toward secure multi-language runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures here and here. The paper also provides a comprehensive review of the associated literature, such as various means of protection, back-translations, embeddings, etc.
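The boundary idea in the abstract, mediating between a strongly typed side and a universal dynamic type of tagged values, can be illustrated with a small sketch. The Python code below is only a toy illustration, not the paper's construction; the to_dyn/from_dyn names, the tag encoding, and the type representation are invented for this example.

```python
# A toy "boundary" between typed code and an untyped (dynamic) representation.
# Untyped values live at a single universal type of tagged pairs; the two
# boundary functions below mediate between that embedding and typed values.
# All names and the encoding are hypothetical, for illustration only.

def to_dyn(ty, v):
    """Typed -> dynamic boundary: tag a typed value so untyped code can use it."""
    if ty == "int":
        return ("int", v)
    if isinstance(ty, tuple) and ty[0] == "->":
        _, dom, cod = ty
        # Wrap the typed function so it accepts and returns dynamic values.
        return ("fun", lambda d: to_dyn(cod, v(from_dyn(dom, d))))
    raise ValueError(f"unsupported type: {ty!r}")

def from_dyn(ty, d):
    """Dynamic -> typed boundary: check the tag, wrapping functions recursively."""
    tag, payload = d
    if ty == "int":
        if tag != "int":
            raise TypeError(f"boundary error: expected int, got {tag}")
        return payload
    if isinstance(ty, tuple) and ty[0] == "->":
        _, dom, cod = ty
        if tag != "fun":
            raise TypeError(f"boundary error: expected a function, got {tag}")
        return lambda x: from_dyn(cod, payload(to_dyn(dom, x)))
    raise ValueError(f"unsupported type: {ty!r}")

# An "untyped" component, written entirely against tagged values,
# linked with typed code that expects an int -> int function.
untyped_inc = ("fun", lambda d: ("int", from_dyn("int", d) + 1))
typed_inc = from_dyn(("->", "int", "int"), untyped_inc)
print(typed_inc(41))   # 42
```

Higher-order values are the interesting case: a function crossing the boundary is wrapped so that every call re-applies the boundaries to its argument and result, which is what lets the typed side constrain what untyped attackers can observe.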
Among many interesting works, the POPL 2016 papers have a bunch of nice articles on Gradual Typing.
The Gradualizer: a methodology and algorithm for generating gradual type systems
by Matteo Cimini, Jeremy Siek
2016
Many languages are beginning to integrate dynamic and static
typing. Siek and Taha offered gradual typing as an approach to this
integration that provides the benefits of a coherent and full-span
migration between the two disciplines. However, the literature lacks
a general methodology for designing gradually typed languages. Our
first contribution is to provide such a methodology insofar as the
static aspects of gradual typing are concerned: deriving the gradual
type system and the compilation to the cast calculus.
Based on this methodology, we present the Gradualizer, an algorithm
that generates a gradual type system from a normal type system
(expressed as a logic program) and generates a compiler to the cast
calculus. Our algorithm handles a large class of type systems and
generates systems that are correct with respect to the formal criteria
of gradual typing. We also report on an implementation of the
Gradualizer that takes type systems expressed in lambda-prolog and
outputs their gradually typed version (and compiler to the
cast calculus) in lambda-prolog.
One can think of the Gradualizer as a kind of metaprogramming
algorithm that takes a type system in input, and returns a gradual
version of this type system as output. I find it interesting that
these type systems are encoded as lambda-prolog programs (a notable
use case for functional logic programming). This is a very nice way
to bridge the gap between
describing a transformation that is "in principle" mechanizable and
a running
implementation.
An interesting phenomenon that arises once you want to implement
these ideas in practice is that it forces the authors to define
precisely many intuitions everyone has when reading the description of
a type system as a system of inference rules. These intuitions are,
broadly, about the relation between the static and the dynamic
semantics of a system, the flow of typing information, and the flow of
values; two occurrences of the same type in a typing rule may play very
different roles, some of which are discussed in this article.
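As a concrete, very much simplified illustration of that information flow, here is a hypothetical Python sketch of the application rule written bidirectionally: the argument type occurs twice in the rule, once as an output (read off the inferred function type) and once as an input (the type the argument is checked against). The mini-language and names are invented for the example; the Gradualizer itself works on type systems written as lambda-prolog clauses.

```python
# A hypothetical, minimal illustration of "flow of typing information":
# in the application rule the argument type A occurs twice, once as an
# output (read off the function's inferred type) and once as an input
# (the expected type the argument is checked against).

from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: param: str; param_ty: object; body: object   # annotated lambda
@dataclass
class App: fun: object; arg: object

def infer(env, e):
    """Synthesise a type: typing information flows out of the term."""
    if isinstance(e, Var):
        return env[e.name]
    if isinstance(e, Lam):
        return ("->", e.param_ty, infer({**env, e.param: e.param_ty}, e.body))
    if isinstance(e, App):
        fun_ty = infer(env, e.fun)     # the domain and codomain are outputs here
        _, dom, cod = fun_ty
        check(env, e.arg, dom)         # the same domain type is now an input
        return cod
    raise TypeError(f"unknown term: {e!r}")

def check(env, e, expected):
    """Check against an expected type: typing information flows into the term."""
    actual = infer(env, e)
    if actual != expected:
        raise TypeError(f"expected {expected}, got {actual}")

# (λx:int. x) y   with y known to be an int
print(infer({"y": "int"}, App(Lam("x", "int", Var("x")), Var("y"))))   # 'int'
```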
Is Sound Gradual Typing Dead?
by Asumu Takikawa, Daniel Feltey, Ben Greenman, Max New, Jan Vitek, Matthias Felleisen
2016
Programmers have come to embrace dynamically typed languages for
prototyping and delivering large and complex systems. When it comes to
maintaining and evolving these systems, the lack of explicit static
typing becomes a bottleneck. In response, researchers have explored
the idea of gradually typed programming languages which allow the
post-hoc addition of type annotations to software written in one of
these “untyped” languages. Some of these new hybrid languages insert
runtime checks at the boundary between typed and untyped code to
establish type soundness for the overall system. With sound gradual
typing programmers can rely on the language implementation to provide
meaningful error messages when “untyped” code misbehaves.
While most research on sound gradual typing has remained theoretical,
the few emerging implementations incur performance overheads due to
these checks. Indeed, none of the publications on this topic come with
a comprehensive performance evaluation; a few report disastrous
numbers on toy benchmarks. In response, this paper proposes
a methodology for evaluating the performance of gradually typed
programming languages. The key is to explore the performance impact of
adding type annotations to different parts of a software system. The
methodology takes the idea of a gradual conversion from untyped
to typed seriously and calls for measuring the performance of all
possible conversions of a given untyped benchmark. Finally the paper
validates the proposed methodology using Typed Racket, a mature
implementation of sound gradual typing, and a suite of realworld
programs of various sizes and complexities. Based on the results
obtained in this study, the paper concludes that, given the state of
current implementation technologies, sound gradual typing is
dead. Conversely, it raises the question of how implementations could
reduce the overheads associated with ensuring soundness and how tools
could be used to steer programmers clear of pathological cases.
In a fully dynamic system, type checks are often superficial
(only the existence of a particular field is tested) and done lazily
(the check is made when the field is accessed). Gradual typing changes
this, as typing assumptions can be checked well before the value is
used, and range over parts of the program that are not exercised in
all execution branches. This has the potentially counterintuitive
consequence that the overhead of runtime checks may be noticeably larger
than in a fully dynamic system. This paper presents a methodology to
evaluate the "annotation space" of a Typed Racket program, studying
how the possible choices of which parts to annotate affect overall
performance.
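A tiny sketch can make the difference concrete. The Python code below is a hypothetical illustration of a sound typed/untyped boundary (in the spirit of Typed Racket contracts, but not its implementation): flat values are checked eagerly when they cross, and higher-order values are wrapped in proxies that re-check on every call, so a hot loop pays for checks that a fully dynamic execution would never perform.

```python
# A hypothetical sketch of a sound typed/untyped boundary. Flat values are
# checked eagerly as they cross; functions are wrapped in proxies that check
# argument and result on every call. None of this is Typed Racket's actual
# implementation; it only illustrates where the overhead comes from.

def boundary(value, ty):
    """Enforce gradual type `ty` on `value` as it flows into typed code."""
    if ty in (int, str, bool):
        if not isinstance(value, ty):
            raise TypeError(f"boundary: expected {ty.__name__}, got {value!r}")
        return value
    if isinstance(ty, tuple) and ty[0] == "->":        # ("->", dom, cod)
        _, dom, cod = ty
        def proxy(x):                                  # re-checks on every call
            return boundary(value(boundary(x, dom)), cod)
        return proxy
    raise ValueError(f"unsupported type: {ty!r}")

# Untyped code hands a function to typed code that expects int -> int.
untyped_f = lambda x: x + 1
typed_f = boundary(untyped_f, ("->", int, int))

# Every call now pays two checks, even in hot loops where a fully dynamic
# execution would simply have run the untyped function and never failed.
print(sum(typed_f(i) for i in range(5)))   # 15
try:
    typed_f("oops")                        # fails eagerly, at the boundary
except TypeError as err:
    print(err)
```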
Many would find this article surprisingly grounded in reality for
a POPL paper. It puts the spotlight on a question that is too rarely
discussed, and could be presented as a strong illustration of why it
matters to be serious about implementing our research.
Abstracting Gradual Typing
by Ronald Garcia, Alison M. Clark, Éric Tanter
2016
Language researchers and designers have extended a wide variety of
type systems to support gradual typing, which enables languages to
seamlessly combine dynamic and static checking. These efforts
consistently demonstrate that designing a satisfactory gradual
counterpart to a static type system is challenging, and this challenge
only increases with the sophistication of the type system. Gradual
type system designers need more formal tools to help them
conceptualize, structure, and evaluate their designs.
In this paper, we propose a new formal foundation for gradual
typing, drawing on principles from abstract interpretation to give
gradual types a semantics in terms of preexisting static
types. Abstracting Gradual Typing (AGT for short) yields a formal
account of consistency—one of the cornerstones of the gradual typing
approach—that subsumes existing notions of consistency, which were
developed through intuition and ad hoc reasoning.
Given a syntax-directed static typing judgment, the AGT approach
induces a corresponding gradual typing judgment. Then the
subject-reduction proof for the underlying static discipline induces
a dynamic semantics for gradual programs defined over source-language
typing derivations. The AGT approach does not resort to an
externally justified cast calculus: instead, runtime checks naturally
arise by deducing evidence for consistent judgments during
proof reduction.
To illustrate our approach, we develop novel gradually-typed
counterparts for two languages: one with record subtyping and one with
information-flow security types. Gradual languages designed with the
AGT approach satisfy, by construction, the refined criteria for
gradual typing set forth by Siek and colleagues.
At first sight this description seems to overlap with the
Gradualizer work cited above, but in fact the two approaches are
highly complementary. The Abstract Gradual Typing effort seems
mechanizable, but it is far from being implementable in practice as
done in the Gradualizer work. It remains a translation to be done on
paper by a skilled expert, although, as is standard in abstract
interpretation work, many aspects are deeply computational, such as
computing the best abstractions. On the other hand, it is extremely
powerful for guiding system design, as it provides not only a static
semantics for a gradual system, but also a model dynamic
semantics.
The central idea of the paper is to think of a missing type
annotation not as "a special Dyn type that can contain anything" but
as "a specific static type, but I don't know which one it is". A program
is then to be understood as a family of potential programs, one for
each possible static choice that could have been put there. Not all
choices are consistent (type soundness imposes constraints on
different missing annotations), so we can study the space of possible
interpretations, using only the original, non-gradually-typed system
to make those deductions.
An obvious consequence is that a static type error occurs exactly
when we can prove that there is no possible consistent typing. A much
less obvious contribution is that, when there is a consistent set of
types, we can consider this set as "evidence" that the program may be
correct, and transport evidence along values while running the
program. This gives a runtime semantics for the gradual system that
automatically does what it should, but it, of course, would fare
terribly in the performance harness described above.
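Here is a minimal, hypothetical sketch of those two ideas in Python: gradual types with an unknown type Dyn, consistency defined as "the sets of static types they denote overlap", and run-time evidence that is refined at each check and fails when it becomes empty. The representation and the function names are invented for the example; the real AGT development is considerably richer and derives all of this from the static system.

```python
# A hypothetical sketch of the AGT idea, not the paper's formal system.
# A gradual type is either the unknown type Dyn or a precise type, and it
# denotes the set of static types it could stand for. Two gradual types are
# consistent when those sets overlap; the overlap ("evidence") is carried at
# run time and refined at each check, failing when it becomes empty.

from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Base:                      # e.g. Base("Int"), Base("Bool")
    name: str

@dataclass(frozen=True)
class Fun:
    dom: "GType"
    cod: "GType"

@dataclass(frozen=True)
class Dyn:                       # the unknown type "?"
    pass

GType = Union[Base, Fun, Dyn]

def meet(t1: GType, t2: GType) -> Optional[GType]:
    """Most precise gradual type refining both t1 and t2, or None if none exists."""
    if isinstance(t1, Dyn):
        return t2
    if isinstance(t2, Dyn):
        return t1
    if isinstance(t1, Base) and isinstance(t2, Base):
        return t1 if t1 == t2 else None
    if isinstance(t1, Fun) and isinstance(t2, Fun):
        d, c = meet(t1.dom, t2.dom), meet(t1.cod, t2.cod)
        return Fun(d, c) if d is not None and c is not None else None
    return None

def consistent(t1: GType, t2: GType) -> bool:
    """Static consistency: some static type is denoted by both t1 and t2."""
    return meet(t1, t2) is not None

def refine(evidence: GType, target: GType) -> GType:
    """Run-time step: refine the evidence carried by a value against a new check."""
    e = meet(evidence, target)
    if e is None:
        raise TypeError(f"cast error: {evidence} is not consistent with {target}")
    return e

# Dyn -> Int and Int -> Dyn are consistent; their common refinement is Int -> Int.
print(consistent(Fun(Dyn(), Base("Int")), Fun(Base("Int"), Dyn())))   # True

ev = refine(Dyn(), Base("Int"))   # a value of unknown type flows into an Int position
try:
    refine(ev, Base("Bool"))      # the accumulated evidence becomes empty
except TypeError as err:
    print(err)
```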
Some context
The Abstract Gradual Typing work feels like a real breakthrough,
and it is interesting to idly wonder about which previous works in
particular enabled this advance. I would make two guesses.
First, there was a very nice conceptualization work in 2015,
drawing general principles from existing gradual typing systems, and
highlighting in particular a specific difficulty in designing dynamic
semantics for gradual systems (removing annotations must not make
a program fail more).
Refined Criteria for Gradual Typing
by Jeremy Siek, Michael Vitousek, Matteo Cimini, and John Tang Boyland
2015
Siek and Taha [2006] coined the term gradual typing to describe
a theory for integrating static and dynamic typing within a single
language that 1) puts the programmer in control of which regions of
code are statically or dynamically typed and 2) enables the gradual
evolution of code between the two typing disciplines. Since 2006, the
term gradual typing has become quite popular but its meaning has
become diluted to encompass anything related to the integration of
static and dynamic typing. This dilution is partly the fault of the
original paper, which provided an incomplete formal characterization
of what it means to be gradually typed. In this paper we draw a crisp
line in the sand that includes a new formal property, named the
gradual guarantee, that relates the behavior of programs that differ
only with respect to their type annotations. We argue that the gradual
guarantee provides important guidance for designers of gradually typed
languages. We survey the gradual typing literature, critiquing designs
in light of the gradual guarantee. We also report on a mechanized
proof that the gradual guarantee holds for the Gradually Typed Lambda
Calculus.
Second, the marriage of gradual typing and abstract interpretation
was already consummated in previous work (2014), studying the gradual
classification of effects rather than types.
A Theory of Gradual Effect Systems
by Felipe Bañados Schwerter, Ronald Garcia, Éric Tanter
2014
Effect systems have the potential to help software developers, but
their practical adoption has been very limited. We conjecture that
this limited adoption is due in part to the difficulty of
transitioning from a system where effects are implicit and
unrestricted to a system with a static effect discipline, which must
settle for conservative checking in order to be decidable. To address
this hindrance, we develop a theory of gradual effect checking, which
makes it possible to incrementally annotate and statically check
effects, while still rejecting statically inconsistent programs. We
extend the generic type-and-effect framework of Marino and Millstein
with a notion of unknown effects, which turns out to be significantly
more subtle than unknown types in traditional gradual typing. We
appeal to abstract interpretation to develop and validate the concepts
of gradual effect checking. We also demonstrate how an effect system
formulated in Marino and Millstein’s framework can be automatically
extended to support gradual checking.
Difficulty has its rewards: gradual effects are
more difficult than gradual simply-typed systems, so you get strong
and powerful ideas when you study them. The choice of working on
effect systems is also useful in practice, as nicely said by Philip
Wadler in the conclusion of his 2015
article A
Complement to Blame:
I [Philip Wadler] always assumed gradual types were to help those poor
schmucks using untyped languages to migrate to typed languages. I now
realize that I am one of the poor schmucks. My recent research
involves session types, a linear type system that declares protocols
for sending messages along channels. Sending messages along channels
is an example of an effect. Haskell uses monads to track effects
(Wadler, 1992), and a few experimental languages such as Links
(Cooper et al., 2007), Eff (Bauer and Pretnar, 2014), and Koka
(Leijen, 2014) support effect typing. But, by and large, every
programming language is untyped when it comes to effects. To
encourage migration from legacy code to code with effect types, such
as session types, some form of gradual typing may be essential.
Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg:
In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?
We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Typed self-representation has come up here on LtU in the past. I believe the best self-interpreter available prior to this work was a variant of Barry Jay's SF-calculus, covered in the paper Typed Self-Interpretation by Pattern Matching (and more fully developed in Structural Types for the Factorisation Calculus). These covered statically typed self-interpreters without resorting to undecidable type:type rules.
However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.
The project Incremental λ-Calculus is just starting (compared to more mature approaches like self-adjusting computation), with a first publication last year.
A theory of changes for higher-order languages — incrementalizing λ-calculi by static differentiation
Paolo Giarrusso, Yufei Cai, Tillmann Rendel, and Klaus Ostermann. 2014
If the result of an expensive computation is invalidated by a small change to the input, the old result should be updated incrementally instead of re-executing the whole computation. We incrementalize programs through their derivative. A derivative maps changes in the program’s input directly to changes in the program’s output, without re-executing the original program. We present a program transformation taking programs to their derivatives, which is fully static and automatic, supports first-class functions, and produces derivatives amenable to standard optimization.
We prove the program transformation correct in Agda for a family of simply-typed λ-calculi, parameterized by base types and primitives. A precise interface specifies what is required to incrementalize the chosen primitives.
We investigate performance by a case study: we implement the program transformation and a plugin in Scala, and improve the performance of a nontrivial program by orders of magnitude.
I like the nice dependent types: a key idea of this work is that the "diffs" possible from a value v do not live in some common type diff(T), but rather in a value-dependent type diff(v). Intuitively, the empty list and a non-empty list have fairly different types of possible changes. This makes change-merging and change-producing operations total, and allows giving them a nice operational theory. Good design, through types.
(The program transformation seems related to the program-level parametricity transformation. Parametricity abstracts over equality justifications, differentiation over small differences.)
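A small sketch may help make the derivative idea concrete, along with the way changes are validated against the value they apply to. The Python code below is only an illustration of the general scheme (invented names, lists of insert/remove operations as changes), not the paper's Agda development or its Scala implementation.

```python
# A hypothetical sketch of the "derivative of a program" idea: changes are
# validated relative to the value they apply to (the paper's value-dependent
# diff(v)), and a derivative maps an input change to an output change without
# re-running the original computation.

def valid_change(xs, dxs):
    """A change for xs is a list of ('insert', x) or ('remove', x) operations;
    removals must actually occur in xs, so validity depends on the value."""
    pool = list(xs)
    for op, x in dxs:
        if op == "remove":
            if x not in pool:
                return False
            pool.remove(x)
    return True

def apply_change(xs, dxs):
    assert valid_change(xs, dxs), "change is not valid for this value"
    ys = list(xs)
    for op, x in dxs:
        if op == "insert":
            ys.append(x)
        else:
            ys.remove(x)
    return ys

def total(xs):
    return sum(xs)

def d_total(xs, dxs):
    """Derivative of `total`: maps a change to xs to a change to total(xs)."""
    return sum(x if op == "insert" else -x for op, x in dxs)

xs  = [1, 2, 3]
dxs = [("insert", 10), ("remove", 2)]
old = total(xs)
# Incremental update: old result + output change == recomputation from scratch.
assert old + d_total(xs, dxs) == total(apply_change(xs, dxs))
print(old + d_total(xs, dxs))    # 14
```

The point of the derivative d_total is that the new result is obtained from the old result and the change alone, without re-summing the whole list.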
In this year's POPL, Bob Atkey made a splash by showing how to get from parametricity to conservation laws, via Noether's theorem:
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy is, by Noether’s theorem, a consequence of a system’s invariance under time-shifting.
In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
In a recent LtU discussion, naasking comments that "I always thought languages that don't specify evaluation order should classify possibly effectful expressions that assume an evaluation order to be errors". Recent work on the C language has provided reasonable formal tools to reason about evaluation order for C, which has very complex evaluation-order rules.
An operational and axiomatic semantics for nondeterminism and sequence points in C
Robbert Krebbers
2014
The C11 standard of the C programming language does not specify
the execution order of expressions. Besides, to make more effective
optimizations possible (e.g. delaying of side-effects and
interleaving), it gives compilers in certain cases the freedom to use
even more behaviors than just those of all execution orders.
Widely used C compilers actually exploit this freedom given by
the C standard for optimizations, so it should be taken seriously
in formal verification. This paper presents an operational and
axiomatic semantics (based on separation logic) for nondeterminism
and sequence points in C. We prove soundness of our axiomatic
semantics with respect to our operational semantics. This proof has
been fully formalized using the Coq proof assistant.
One aspect of this work that I find particularly interesting is that it provides a program (separation) logic: there is a set of inference rules for a judgment of the form \(\Delta; J; R \vdash \{P\} s \{Q\}\), where \(s\) is a C statement and \(P, Q\) are logical pre- and postconditions such that if it holds, then the statement \(s\) has no undefined behavior related to expression evaluation order. This opens the door to practical verification that existing C programs are safe in a very strong way (this is all validated in the Coq theorem prover).
Earlier this week Microsoft Research Cambridge organised a Festschrift for Luca Cardelli. The preface from the book:
Luca Cardelli has made exceptional contributions to the world of programming
languages and beyond. Throughout his career, he has reinvented himself every
decade or so, while continuing to make true innovations. His achievements span
many areas: software; language design, including experimental languages;
programming language foundations; and the interaction of programming languages
and biology. These achievements form the basis of his lasting scientific leadership
and his wide impact.
...
Luca is always asking "what is new", and is always looking to
the future. Therefore, we have asked authors to produce short pieces that would
indicate where they are today and where they are going. Some of the resulting
pieces are short scientific papers, or abridged versions of longer papers; others are
less technical, with thoughts on the past and ideas for the future. We hope that
they will all interest Luca.
Hopefully the videos will be posted soon.
There is an ongoing discussion on LtU (there, and there) on whether RAM and other machine models are inherently a better basis to reason about (time and) memory usage than lambda-calculus and functional languages. Guy Blelloch and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far.
A portion of the functional programming community has long been of the opinion that we do not need to refer to machines of the Turing tradition to reason about execution of functional programs. Dynamic semantics (which are often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of a (functional) machine model, just like "abstract machines" can be seen as just machines.
This opinion has been made scientifically precise by various strands of work, including for example implicit (computational) complexity, resource analysis and cost semantics for functional languages. Guy Blelloch developed a family of cost semantics, which correspond to annotations of operational semantics of functional languages with new information that captures more intensional behavior of the computation: not only the result, but also running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think about the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead, or, from another point of view, provided provably cost-effective implementations of functional languages in imperative languages and conversely.
This topic has been discussed by Robert Harper in two blog posts, Language and Machines which develops the general argument, and a second post on recent joint work by Guy and him on integrating cacheefficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book "Practical Foundations for Programming Languages".
In chronological order, three papers that are representative of the evolution of this work are the following.
Parallelism In Sequential Functional Languages
Guy E. Blelloch and John Greiner, 1995.
This paper is focused on parallelism, but is also one of the earliest works carefully relating a lambda-calculus cost semantics to several machine models.
This paper formally studies the question of how much parallelism is available in call-by-value functional languages with no parallel extensions (i.e., the functional subsets of ML or Scheme). In particular we are interested in placing bounds on how much parallelism is available for various problems. To do this we introduce a complexity model, the PAL, based on the call-by-value lambda-calculus. The model is defined in terms of a profiling semantics and measures complexity in terms of the total work and the parallel depth of a computation. We describe a simulation of the APAL (the PAL extended with arithmetic operations) on various parallel machine models, including the butterfly, hypercube, and PRAM models and prove simulation bounds. In particular the simulations are work-efficient (the processor-time product on the machines is within a constant factor of the work on the APAL), and for P processors the slowdown (time on the machines divided by depth on the APAL) is proportional to at most O(log P). We also prove bounds for simulating the PRAM on the APAL.
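To make the work/depth idea concrete, here is a minimal, hypothetical cost-semantics interpreter in Python for a tiny call-by-value language: evaluation returns the value together with its total work and its depth (the longest chain of dependent steps, i.e. the parallel running time), treating the two sides of an addition and the function/argument of an application as independent. This only illustrates the style of profiling semantics; it is not the PAL of the paper.

```python
# A hypothetical sketch of a cost semantics: evaluation returns
# (value, work, depth). Work adds up everywhere; depth takes the max over
# independent sub-evaluations and adds up along dependencies.

from dataclasses import dataclass

@dataclass
class Lit: val: int
@dataclass
class Var: name: str
@dataclass
class Add: left: object; right: object     # the two sides are independent
@dataclass
class Lam: param: str; body: object
@dataclass
class App: fun: object; arg: object

@dataclass
class Closure:
    param: str; body: object; env: dict

def eval_cost(e, env):
    """Return (value, work, depth) for expression e under environment env."""
    if isinstance(e, Lit):
        return e.val, 1, 1
    if isinstance(e, Var):
        return env[e.name], 1, 1
    if isinstance(e, Lam):
        return Closure(e.param, e.body, env), 1, 1
    if isinstance(e, Add):
        v1, w1, d1 = eval_cost(e.left, env)
        v2, w2, d2 = eval_cost(e.right, env)
        # work adds up; depth takes the max because the two sides are independent
        return v1 + v2, w1 + w2 + 1, max(d1, d2) + 1
    if isinstance(e, App):
        f, w1, d1 = eval_cost(e.fun, env)
        a, w2, d2 = eval_cost(e.arg, env)
        v, w3, d3 = eval_cost(f.body, {**f.env, f.param: a})
        # function and argument are independent; the body depends on both
        return v, w1 + w2 + w3 + 1, max(d1, d2) + d3 + 1
    raise TypeError(f"unknown expression: {e!r}")

# ((λx. x + x) 21) evaluated with cost accounting
prog = App(Lam("x", Add(Var("x"), Var("x"))), Lit(21))
print(eval_cost(prog, {}))   # (42, total work, parallel depth)
```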
Space Profiling for Functional Programs
Daniel Spoonhower, Guy E. Blelloch, Robert Harper, and Phillip B. Gibbons, 2011 (conference version 2008)
This paper clearly defines a notion of ideal memory usage (the set of store locations that are referenced by a value or an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.
We present a semantic space profiler for parallel functional programs. Building on previous work in sequential profiling, our tools help programmers to relate runtime resource use back to program source code. Unlike many profiling tools, our profiler is based on a cost semantics. This provides a means to reason about performance without requiring a detailed understanding of the compiler or runtime system. It also provides a specification for language implementers. This is critical in that it enables us to separate cleanly the performance of the application from that of the language implementation. Some aspects of the implementation can have significant effects on performance. Our cost semantics enables programmers to understand the impact of different scheduling policies while hiding many of the details of their implementations. We show applications where the choice of scheduling policy has asymptotic effects on space use. We explain these use patterns through a demonstration of our tools. We also validate our methodology by observing similar performance in our implementation of a parallel extension of Standard ML.
Cache and I/O efficient functional algorithms
Guy E. Blelloch, Robert Harper, 2013 (see also the shorter CACM version)
The cost semantics in this last work incorporates more notions from garbage collection, to reason about cache-efficient allocation of values, in that it relies on work on formalizing garbage collection that has been mentioned on LtU before.
The widely studied I/O and ideal-cache models were developed to account for the large difference in costs to access memory at different levels of the memory hierarchy. Both models are based on a two-level memory hierarchy with a fixed-size primary memory (cache) of size \(M\), an unbounded secondary memory, and assume unit cost for transferring blocks of size \(B\) between the two. Many algorithms have been analyzed in these models and indeed these models predict the relative performance of algorithms much more accurately than the standard RAM model. The models, however, require specifying algorithms at a very low level, requiring the user to carefully lay out their data in arrays in memory and manage their own memory allocation.
In this paper we present a cost model for analyzing the memory efficiency of algorithms expressed in a simple functional language. We show how many algorithms written in standard forms using just lists and trees (no arrays) and requiring no explicit memory layout or memory management are efficient in the model. We then describe an implementation of the language and show provable bounds for mapping the cost in our model to the cost in the ideal-cache model. These bounds imply that purely functional programs based on lists and trees with no special attention to any details of memory layout can be asymptotically as efficient as the carefully designed imperative I/O efficient algorithms. For example we describe an \(O(\frac{n}{B} \log_{M/B} \frac{n}{B})\) cost sorting algorithm, which is optimal in the ideal-cache and I/O models.
Pure Subtype Systems, by DeLesley S. Hutchins:
This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality.
Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several problems of practical interest, including mixin modules, virtual classes, and featureoriented programming.
The cost of using pure subtype systems is the complexity of the metatheory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is “close enough” to rule out obvious counterexamples. We present it as an open problem in type theory.
A thought-provoking take on type theory using subtyping as the foundation for all relations. He collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the side-effect of combining type checking and partial evaluation. Functions can accept "types" and can also return "types".
Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising considering type checking Type:Type is undecidable.
Hutchins' thesis is also available for a more thorough treatment. This work is all in pursuit of Hutchins' goal of feature-oriented programming.
Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory:
Dependently typed functional programming languages such as Agda are capable of expressing very precise types for data. When those data themselves encode types, we gain a powerful mechanism for abstracting generic operations over carefully circumscribed universes. This course will begin with a rapid dependently-typed programming primer in Agda, then explore techniques for and consequences of universe constructions. Of central importance are the “pattern functors” which determine the node structure of inductive and coinductive datatypes. We shall consider syntactic presentations of these functors (allowing operations as useful as symbolic differentiation), and relate them to the more uniform abstract notion of “container”. We shall expose the double life containers lead as “interaction structures” describing systems of effects. Later, we step up to functors over universes, acquiring the power of inductive-recursive definitions, and we use that power to build universes of dependent types.
The lecture notes, code, and video captures are available online.
As with his previous course, the notes contain many(!) mind-expanding exploratory exercises, some of which are quite challenging.
