Lambda the Ultimate - Type Theory
http://lambda-the-ultimate.org/taxonomy/term/21/0
Contextual isomorphisms
http://lambda-the-ultimate.org/node/5398
<p >
<a href="http://www.cs.bham.ac.uk/~pbl/papers/contextiso.pdf">Contextual Isomorphisms</a><br >
Paul Blain Levy<br >
2017<br >
</p>
<blockquote >
<p >
What is the right notion of "isomorphism" between types, in a simple
type theory? The traditional answer is: a pair of terms that are
inverse, up to a specified congruence. We firstly argue that, in the
presence of effects, this answer is too liberal and needs to be
restricted, using Führmann’s notion of thunkability in the case of
value types (as in call-by-value), or using Munch-Maccagnoni’s notion
of linearity in the case of computation types (as in
call-by-name). Yet that leaves us with different notions of
isomorphism for different kinds of type.
</p>
<p >
This situation is resolved by means of a new notion of “contextual”
isomorphism (or morphism), analogous at the level of types to
contextual equivalence of terms. A contextual morphism is a way of
replacing one type with the other wherever it may occur in
a judgement, in a way that is preserved by the action of any term with
holes. For types of pure λ-calculus, we show that a contextual
morphism corresponds to a traditional isomorphism. For value types,
a contextual morphism corresponds to a thunkable isomorphism, and for
computation types, to a linear isomorphism.
</p>
</blockquote>
<p >This paper is based on a very simple idea that everyone familiar
with type systems can enjoy. It then applies that idea in a technical setting
where it makes a useful contribution. I suspect that many readers
will find that second part too technical, but they may still enjoy the
idea itself. To facilitate this, I will rephrase the abstract above in
a way that I hope makes it accessible to a larger audience.</p>
<p >The problem that the paper solves is: how do we know what it means
for two types to be equivalent? For many languages there are reasonable
definitions of equivalence (such as: there exists a bijection
between these two types, definable in the language), but for more advanced
languages these definitions break down. For example, in the presence of
hidden mutable state, one can build a pair of functions from the
one-element type <code >unit</code> to the two-element
type <code >bool</code> and back that are the identity when composed
together -- the usual definition of bijection -- while these two types
should probably not be considered "the same". Those two functions
share some hidden state, so they "cheat". Other, more complex notions
of type equivalence have been given in the literature, but how do we
know whether they are the "right" notions, or whether they may
disappoint us in the same way?</p>
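<p >To make the "cheating" concrete, here is a minimal sketch in Python (my own illustration, not from the paper): the two conversion functions share a hidden mutable cell, so the round trip on the two-element type is the identity even though the one-element and two-element types clearly differ.</p>

```python
# Hypothetical illustration: functions between the one-element type
# (here None) and bool that are "inverse" only thanks to hidden state.
_cell = [False]  # hidden mutable state shared by both functions

def to_unit(b):
    """bool -> unit: secretly stash the bool, return the unit value."""
    _cell[0] = b
    return None

def to_bool(_u):
    """unit -> bool: recover the stashed bool."""
    return _cell[0]

# The round trip bool -> unit -> bool is the identity...
assert to_bool(to_unit(True)) is True
assert to_bool(to_unit(False)) is False
# ...yet unit has one inhabitant and bool has two: the "bijection" cheats.
```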
<p >To define what it means for two program fragments to be equivalent,
we have a "gold standard", which is contextual equivalence: two
program fragments are contextually equivalent if we can substitute one
for the other in any complete program without changing its
behavior. This is simple to state, it is usually clear how to
instantiate this definition for a new system, and it gives you
a satisfying notion of equivalence. It may not be the most convenient
one to work with, so people define other, more specific notions of
equivalence (typically beta-eta-equivalence or logical relations); it
is fine if these are more sophisticated, and their definition harder to
justify or understand, because they can always be compared to this
simple definition to gain confidence.</p>
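<p >As a toy illustration of contextual equivalence of terms (again my own sketch, not from the paper), consider two fragments that no calling context can tell apart:</p>

```python
# Two implementations of "doubling"; contextual equivalence says no
# complete program can distinguish them (here, restricted to integers).
def double_a(x):
    return x + x

def double_b(x):
    return 2 * x

# A few sample "contexts" (programs with a hole where the fragment goes):
contexts = [
    lambda f: f(3),
    lambda f: f(0) - f(-7),
    lambda f: [f(i) for i in range(5)],
]
assert all(ctx(double_a) == ctx(double_b) for ctx in contexts)
```

<p >(A real proof must of course quantify over <em >all</em> contexts; the samples only illustrate the shape of the definition.)</p>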
<p >The simple idea in the paper above is to use this exact same trick
to define what it means for two <em >types</em> to be
equivalent. Naively, one could say that two types are equivalent if,
in any well-typed program, one can replace some occurrences of the
first type by occurrences of the second type, all other things being
unchanged. This does not quite work, as changing the types that appear
in a program without changing its terms would create ill-typed
terms. So instead, the paper proposes that two types are equivalent
when we are told how to transform any program using the first type
into a program using the second type, in a way that is bijective
(invertible) and compositional -- see the paper for details.</p>
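<p >For contrast, here is what a genuine isomorphism between <code >bool</code> and another two-element type looks like in the same toy setting (my own sketch): both round trips are the identity, with no shared state to cheat with, and the translation can be applied wherever the type occurs.</p>

```python
# A pure, stateless isomorphism between bool and the two-element set {0, 1}.
def encode(b):
    """bool -> {0, 1}"""
    return 1 if b else 0

def decode(n):
    """{0, 1} -> bool"""
    return n == 1

for b in (True, False):
    assert decode(encode(b)) is b  # round trip on bool is the identity
for n in (0, 1):
    assert encode(decode(n)) == n  # round trip on {0, 1} is the identity
```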
<p >Then, the author can validate this definition by showing that, when
instantiated to languages (simple or complex) where existing notions
of equivalence have been proposed, this new notion of equivalence
corresponds to the previous notions.</p>
<p >(Readers may find that even the warmup part of the paper, namely
sections 1 to 4, pages 1 to 6, is rather dense, with a compactly
exposed general idea and arguably a lack of concrete examples that would help
understanding. Surely this terseness is in large part a consequence of
strict page limits -- conference articles are the tweets of computer
science research. A nice side-effect (no pun intended) is that you can
observe a carefully chosen formal language at work, designed to expose
the setting and perform relevant proofs in minimal space: category
theory, and in particular the concept of naturality, is the killer
space-saving measure here.)</p>
Type Theory | Fri, 09 Dec 2016 21:37:50 +0000

Polymorphism, subtyping and type inference in MLsub
http://lambda-the-ultimate.org/node/5393
<p >I am very enthusiastic about the following paper: it brings new ideas and solves a problem that I did not expect to be solvable, namely usable type inference when both polymorphism and subtyping are implicit. (By "usable" here I mean that the inferred types are both compact and principal, while previous work generally had only one of those properties.)</p>
<p >
<a href="http://www.cl.cam.ac.uk/%7Esd601/papers/mlsub-preprint.pdf">Polymorphism, Subtyping, and Type Inference in MLsub</a><br >
Stephen Dolan and Alan Mycroft<br >
2017</p>
<blockquote >
<p >We present a type system combining subtyping and ML-style parametric polymorphism. Unlike previous work, our system supports
type inference and has compact principal types. We demonstrate
this system in the minimal language MLsub, which types a strict
superset of core ML programs.</p>
<p >This is made possible by keeping a strict separation between
the types used to describe inputs and those used to describe outputs, and extending the classical unification algorithm to handle
subtyping constraints between these input and output types. Principal types are kept compact by type simplification, which exploits
deep connections between subtyping and the algebra of regular languages. An implementation is available online.</p>
</blockquote>
<p >The paper is full of interesting ideas. For example, one idea is that adding type variables to the base grammar of types -- instead of defining them by their substitution -- forces us to look at our type systems in ways that are more open to extension with new features. I would recommend looking at this paper even if you are interested in ML and type inference, but not subtyping, or in polymorphism and subtyping, but not type inference, or in subtyping and type inference, but not functional languages.</p>
<p >This paper is also a teaser for the first author's PhD thesis, <a href="https://www.cl.cam.ac.uk/~sd601/thesis.pdf"><em >Algebraic Subtyping</em></a>. There is also an <a href="https://www.cl.cam.ac.uk/~sd601/mlsub/">implementation</a> available.</p>
<p >(If you are looking for interesting work on inference of polymorphism and subtyping in object-oriented languages, I would recommend <a href="http://www.cs.cornell.edu/~ross/publications/shapes/">Getting F-Bounded Polymorphism into Shape</a> by Ben Greenman, Fabian Muehlboeck and Ross Tate, 2014.)</p>
Type Theory | Wed, 23 Nov 2016 16:54:15 +0000

Proving Programs Correct Using Plain Old Java Types
http://lambda-the-ultimate.org/node/5387
<p ><a href="http://fpl.cs.depaul.edu/jriely/papers/2009-pojt.pdf">Proving Programs Correct Using Plain Old Java Types</a>, by Radha Jagadeesan, Alan Jeffrey, Corin Pitcher, James Riely:</p>
<blockquote ><p >Tools for constructing proofs of correctness of programs have a long history of development in the research community, but have often faced difficulty in being widely deployed in software development tools. In this paper, we demonstrate that the off-the-shelf Java type system is already powerful enough to encode non-trivial proofs of correctness using propositional Hoare preconditions and postconditions.</p>
<p >We illustrate the power of this method by adapting Fähndrich and Leino’s work on monotone typestates and Myers and Qi’s closely related work on object initialization. Our approach is expressive enough to address phased initialization protocols and the creation of cyclic data structures, thus allowing for the elimination of null and the special status of constructors. To our knowledge, our system is the first that is able to statically validate standard one-pass traversal algorithms for cyclic graphs, such as those that underlie object deserialization. Our proof of correctness is mechanized using the Java type system, without any extensions to the Java language.</p></blockquote>
<p >Not a new paper, but it provides a lightweight verification technique for some program properties that you can use right now, without waiting for integrated theorem provers or SMT solvers. Properties that require only monotone typestates can be verified, i.e. those where operations can only move the typestate "forwards".</p>
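<p >A rough sketch of the monotone-typestate idea (in Python for brevity; the paper's encoding uses Java generics and subtyping so the <em >compiler</em> enforces the discipline statically): an object's state only ever moves forward, and each operation is available only in the state that licenses it.</p>

```python
# Hypothetical sketch: typestates as distinct classes, so read() simply
# does not exist before initialization, and there is no way back.
class RawFile:
    def __init__(self, name):
        self.name = name

    def initialize(self):
        # The only transition: Raw -> Initialized (monotone, no inverse).
        return InitializedFile(self.name)

class InitializedFile:
    def __init__(self, name):
        self.name = name

    def read(self):
        return f"contents of {self.name}"

f = RawFile("data.txt").initialize()
assert f.read() == "contents of data.txt"
assert not hasattr(RawFile("data.txt"), "read")  # unavailable pre-init
```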
<p >In order to achieve this, they require programmers to follow a few simple rules to avoid Java's pervasive nulls. These are roughly: don't assign null explicitly, and be sure to initialize all fields when constructing objects.</p>
OOP, Type Theory | Fri, 04 Nov 2016 14:27:32 +0000

Fully Abstract Compilation via Universal Embedding
http://lambda-the-ultimate.org/node/5364
<p ><a href="https://www.williamjbowman.com/resources/fabcc-paper.pdf">Fully Abstract Compilation via Universal Embedding</a> by Max S. New, William J. Bowman, and Amal Ahmed:</p>
<blockquote ><p >A <em >fully abstract</em> compiler guarantees that two source components are observationally equivalent in the source language if and only if their translations are observationally equivalent in the target. Full abstraction implies the translation is secure: target-language attackers can make no more observations of a compiled component than a source-language attacker interacting with the original source component. Proving full abstraction for realistic compilers is challenging because realistic target languages contain features (such as control effects) unavailable in the source, while proofs of full abstraction require showing that every target context to which a compiled component may be linked can be back-translated to a behaviorally equivalent source context.</p>
<p >We prove the first full abstraction result for a translation whose target language contains exceptions, but the source does not. Our translation—specifically, closure conversion of simply typed λ-calculus with recursive types—uses types at the target level to ensure that a compiled component is never linked with attackers that have more distinguishing power than source-level attackers. We present a new back-translation technique based on a deep embedding of the target language into the source language at a dynamic type. Then boundaries are inserted that mediate terms between the untyped embedding and the strongly-typed source. This technique allows back-translating non-terminating programs, target features that are untypeable in the source, and well-bracketed effects.</p></blockquote>
<p >Potentially a promising step forward for secure multilanguage runtimes. We've previously discussed security vulnerabilities caused by full abstraction failures <a href="http://lambda-the-ultimate.org/node/3830">here</a> and <a href="http://lambda-the-ultimate.org/node/1588">here</a>. The paper also provides a comprehensive review of the associated literature: various means of protection, back-translations, embeddings, etc.</p>
Lambda Calculus, Semantics, Theory, Type Theory | Wed, 27 Jul 2016 15:57:02 +0000

Set-Theoretic Types for Polymorphic Variants
http://lambda-the-ultimate.org/node/5351
<p ><a href="http://arxiv.org/pdf/1606.01106v1.pdf">Set-Theoretic Types for Polymorphic Variants</a> by Giuseppe Castagna, Tommaso Petrucciani, and Kim Nguyễn:</p>
<blockquote ><p >Polymorphic variants are a useful feature of the OCaml language whose current definition and implementation rely on kinding constraints to simulate a subtyping relation via unification. This yields an awkward formalization and results in a type system whose behaviour is in some cases unintuitive and/or unduly restrictive.</p>
<p >In this work, we present an alternative formalization of polymorphic variants, based on set-theoretic types and subtyping, that yields a cleaner and more streamlined system. Our formalization is more expressive than the current one (it types more programs while preserving type safety), it can internalize some meta-theoretic properties, and it removes some pathological cases of the current implementation resulting in a more intuitive and, thus, predictable type system. More generally, this work shows how to add full-fledged union types to functional languages of the ML family that usually rely on the Hindley-Milner type system. As an aside, our system also improves the theory of semantic subtyping, notably by proving completeness for the type reconstruction algorithm.</p></blockquote>
<p >Looks like a nice result. They integrate union types and restricted intersection types for complete type inference, which prior work on CDuce could not do. The disadvantage is that it does not admit principal types, and so inference is non-deterministic (see section 5.3.2).</p>
Functional, Type Theory | Thu, 09 Jun 2016 13:38:24 +0000

No value restriction is needed for algebraic effects and handlers
http://lambda-the-ultimate.org/node/5343
<p ><a href="https://arxiv.org/pdf/1605.06938v1.pdf">No value restriction is needed for algebraic effects and handlers</a>, by Ohad Kammar and Matija Pretnar:</p>
<blockquote ><p >We present a straightforward, sound Hindley-Milner polymorphic type system for algebraic effects and handlers in a call-by-value calculus, which allows type variable generalisation of arbitrary computations, not just values. This result is surprising. On the one hand, the soundness of unrestricted call-by-value Hindley-Milner polymorphism is known to fail in the presence of computational effects such as reference cells and continuations. On the other hand, many programming examples can be recast to use effect handlers instead of these effects. Analysing the expressive power of effect handlers with respect to state effects, we claim handlers cannot express reference cells, and show they can simulate dynamically scoped state.</p></blockquote>
<p >Looks like a nice integration of algebraic effects with simple Hindley-Milner typing, but one which yields some unintuitive conclusions. At least I certainly found the possibility of supporting dynamically scoped state but not reference cells surprising!</p>
<p >It highlights the need for some future work to support true reference cells, namely a polymorphic type and effect system to generate fresh instances.</p>
Functional, Type Theory | Wed, 25 May 2016 13:54:59 +0000

Type Checking Modular Multiple Dispatch with Parametric Polymorphism and Multiple Inheritance
http://lambda-the-ultimate.org/node/5322
<p ><a href="http://www.mpi-sws.org/~skilpat/papers/multipoly.pdf">Type Checking Modular Multiple Dispatch with Parametric Polymorphism and Multiple Inheritance</a> by Eric Allen, Justin Hilburn, Scott Kilpatrick, Victor Luchangco, Sukyoung Ryu, David Chase, Guy L. Steele Jr.:</p>
<blockquote ><p >
In previous work, we presented rules for defining overloaded functions that ensure type safety under symmetric multiple dispatch in an object-oriented language with multiple inheritance, and we showed how to check these rules without requiring the entire type hierarchy to be known, thus supporting modularity and extensibility. In this work, we extend these rules to a language that supports parametric polymorphism on both classes and functions.</p>
<p >In a multiple-inheritance language in which any type may be extended by types in other modules, some overloaded functions that might seem valid are correctly rejected by our rules. We explain how these functions can be permitted in a language that additionally supports an exclusion relation among types, allowing programmers to declare “nominal exclusions” and also implicitly imposing exclusion among different instances of each polymorphic type. We give rules for computing the exclusion relation, deriving many type exclusions from declared and implicit ones.</p>
<p >We also show how to check our rules for ensuring the safety of overloaded functions. In particular, we reduce the problem of handling parametric polymorphism to one of determining subtyping relationships among universal and existential types. Our system has been implemented as part of the open-source Fortress compiler.
</p></blockquote>
<p ><a href="http://lambda-the-ultimate.org/node/4570">Fortress</a> was briefly covered here a couple of times, as were multimethods and <a href="http://lambda-the-ultimate.org/node/3061">multiple dispatch</a>, but this paper really generalizes and nicely summarizes <a href="http://web.cs.ucla.edu/~todd/research/iandc.pdf">previous work</a> on statically typed modular multimethods, and does a good job explaining the typing rules in an accessible way. The integration with parametric polymorphism is, I think, key to applying multimethods in other domains that may want modular multimethods but not multiple inheritance.</p>
<p >The <a href="http://www.cs.yale.edu/homes/jieung/Publication/cpp_paper.pdf">formalization in Coq</a> might also be of interest to some.</p>
<p >Another interesting point is Fortress's use of second-class intersection and union types to simplify type checking.</p>
Object-Functional, Theory, Type Theory | Fri, 01 Apr 2016 01:25:22 +0000

Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega
http://lambda-the-ultimate.org/node/5276
<p ><a href="http://compilers.cs.ucla.edu/popl16/popl16-full.pdf">Breaking Through the Normalization Barrier: A Self-Interpreter for F-omega</a>, by Matt Brown and Jens Palsberg:</p>
<blockquote ><p >According to conventional wisdom, a self-interpreter for a strongly normalizing λ-calculus is impossible. We call this the normalization barrier. The normalization barrier stems from a theorem in computability theory that says that a total universal function for the total computable functions is impossible. In this paper we break through the normalization barrier and define a self-interpreter for System Fω, a strongly normalizing λ-calculus. After a careful analysis of the classical theorem, we show that static type checking in Fω can exclude the proof’s diagonalization gadget, leaving open the possibility for a self-interpreter. Along with the self-interpreter, we program four other operations in Fω, including a continuation-passing style transformation. Our operations rely on a new approach to program representation that may be useful in theorem provers and compilers.</p></blockquote>
<p >I haven't gone through the whole paper, but their claims are compelling. They have created self-interpreters in System F, System Fω and System Fω+, which are all strongly normalizing typed languages. Previously, the only instance of this for a typed language was <a href="http://lambda-the-ultimate.org/node/5176">Girard's System U</a>, which is not strongly normalizing. The key lynchpin appears in this paragraph on page 2:</p>
<blockquote ><p >Our result breaks through the normalization barrier. The conventional wisdom underlying the normalization barrier makes an implicit assumption that all representations will behave like their counterpart in the computability theorem, and therefore the theorem must apply to them as well. This assumption excludes other notions of representation, about which the theorem says nothing. Thus, our result does not contradict the theorem, but shows that the theorem is less far-reaching than previously thought.</p></blockquote>
<p >Pretty cool if this isn't too complicated in any given language. It could let one move some previously non-typesafe runtime features into type-safe libraries.</p>
Functional, Theory, Type Theory | Tue, 10 Nov 2015 14:23:32 +0000

Dependent Types for Low-Level Programming
http://lambda-the-ultimate.org/node/5256
<p ><a href="http://www.cs.berkeley.edu/~necula/Papers/deputy-esop07.pdf">Dependent Types for Low-Level Programming</a> by Jeremy Condit, Matthew Harren, Zachary Anderson, David Gay, and George C. Necula:</p>
<blockquote ><p >In this paper, we describe the key principles of a dependent type system for low-level imperative languages. The major contributions of this work are (1) a sound type system that combines dependent types and mutation for variables and for heap-allocated structures in a more flexible way than before and (2) a technique for automatically inferring dependent types for local variables. We have applied these general principles to design Deputy, a dependent type system for C that allows the user to describe bounded pointers and tagged unions. Deputy has been used to annotate and check a number of real-world C programs.</p></blockquote>
<p >A conceptually simple approach to verifying the safety of C programs, which proceeds in 3 phases: 1. infer locals that hold pointer bounds; 2. flow-insensitive checking introduces runtime assertions using these locals; 3. flow-sensitive optimization removes the assertions that it can prove always hold.</p>
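<p >The three phases can be pictured on a toy example (an assumed Python analogue of the idea, not Deputy's actual C output): phase 2 inserts a bounds assertion at each access, and phase 3 deletes it where the surrounding code already proves it.</p>

```python
def sum_phase2(buf, n):
    """After phase 2: every access is guarded by an inserted assertion."""
    total = 0
    for i in range(n):
        assert 0 <= i < len(buf)  # inserted runtime bounds check
        total += buf[i]
    return total

def sum_phase3(buf, n):
    """After phase 3: 0 <= i < n together with n <= len(buf), checked
    once at entry, implies the per-access check, so it is removed."""
    assert n <= len(buf)          # single residual check
    total = 0
    for i in range(n):
        total += buf[i]
    return total

data = [1, 2, 3, 4]
assert sum_phase2(data, 4) == sum_phase3(data, 4) == 10
```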
<p >You're left with a program that ensures runtime safety with as few runtime checks as possible, and the resulting C program is compiled with gcc which can perform its own optimizations.</p>
<p >This work is from 2007, and the project grew into the <a href="http://ivy.cs.berkeley.edu/ivywiki/index.php/Main/HomePage">Ivy language</a>, which is a C dialect that is fully backwards compatible with C if you #include a small header file that includes the extensions.</p>
<p >Its application to C probably won't get much uptake at this point, but I can see this as a useful compiler plugin to verify unsafe Rust code.</p>
Theory, Type Theory | Mon, 28 Sep 2015 13:58:58 +0000

Freer Monads, More Extensible Effects
http://lambda-the-ultimate.org/node/5244
<p ><a href="http://okmij.org/ftp/Haskell/extensible/more.pdf">Freer Monads, More Extensible Effects</a>, by Oleg Kiselyov and Hiromi Ishii:</p>
<blockquote ><p >We present a rational reconstruction of extensible effects, the recently proposed alternative to monad transformers, as the confluence of efforts to make effectful computations compose. Free monads and then extensible effects emerge from the straightforward term representation of an effectful computation, as more and more boilerplate is abstracted away. The generalization process further leads to freer monads, constructed without the Functor constraint.</p>
<p >The continuation exposed in freer monads can then be represented as an efficient type-aligned data structure. The end result is the algorithmically efficient extensible effects library, which is not only more comprehensible but also faster than earlier implementations. As an illustration of the new library, we show three surprisingly simple applications: non-determinism with committed choice (LogicT), catching IO exceptions in the presence of other effects, and the semi-automatic management of file handles and other resources through monadic regions.</p>
<p >We extensively use and promote the new sort of ‘laziness’, which underlies the left Kan extension: instead of performing an operation, keep its operands and pretend it is done.</p></blockquote>
<p >This looks very promising, and includes some benchmarks comparing the heavily optimized and special-cased monad transformers against this new formulation of extensible effects using Freer monads.</p>
<p >See also the <a href="https://www.reddit.com/r/haskell/comments/3joxd7/freer_monads_more_extensible_effects/">reddit discussion</a>.</p>
Functional, Theory, Type Theory | Sat, 05 Sep 2015 14:30:02 +0000

Self-Representation in Girard’s System U
http://lambda-the-ultimate.org/node/5176
<p ><a href="http://compilers.cs.ucla.edu/popl15/popl15-full.pdf">Self-Representation in Girard’s System U</a>, by Matt Brown and Jens Palsberg:</p>
<blockquote ><p >In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F “seems to be impossible”, but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress?</p>
<p >We show that it is not and present a typed self-representation for Girard’s System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to “tie the knot” – they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation – the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.</p></blockquote>
<p >Typed self-representation has <a href="http://lambda-the-ultimate.org/node/2438#comment-47966">come up</a> here on LtU <a href="http://lambda-the-ultimate.org/node/4785#comment-76876">in the past</a>. I believe the best self-interpreter available prior to this work was a variant of <a href="http://lambda-the-ultimate.org/node/3993">Barry Jay's SF-calculus</a>, covered in the paper <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.225.1765">Typed Self-Interpretation by Pattern Matching</a> (and more fully developed in <a href="http://www0.cs.ucl.ac.uk/staff/R.Rowe/docs/arb10.pdf">Structural Types for the Factorisation Calculus</a>). These covered statically typed self-interpreters without resorting to undecidable type:type rules.</p>
<p >However, being combinator calculi, they're not very similar to most of our programming languages, and so self-interpretation was still an active problem. Enter Girard's System U, which features a more familiar type system with only kind * and kind-polymorphic types. However, System U is not strongly normalizing and is inconsistent as a logic. Whether self-interpretation can be achieved in a strongly normalizing language with decidable type checking is still an open problem.</p>
Functional, Lambda Calculus, Theory, Type Theory | Thu, 11 Jun 2015 18:45:11 +0000

Second-order logic explained in plain English
http://lambda-the-ultimate.org/node/5170
<p >John Corcoran, <a href="https://www.academia.edu/11975482/Second-order_logic_explained_in_plain_English_">Second-order logic explained in plain English</a>, in <i >Logic, Meaning and Computation: Essays in Memory of Alonzo Church</i>, ed. Anderson and Zelëny.</p>
<p >There is something a little bit Guy Steele-ish about trying to explain the fundamentals of second-order logic (SOL, the logic that Quine branded as set theory in sheep's clothing) and its model theory while avoiding any formalisation. This paper introduces the ideas of SOL by looking at logics with finite, countable and uncountable models, and then talks about FOL and SOL as complementary approaches to axiomatisation, each deficient by itself. He ends with a plea for SOL as an essential tool, at least as a heuristic.</p>
Type Theory | Thu, 28 May 2015 20:18:52 +0000

The Next Stage of Staging
http://lambda-the-ultimate.org/node/5127
<p ><a href="http://okmij.org/ftp/meta-programming/StagingNG.pdf">The Next Stage of Staging</a>, by Jun Inoue, Oleg Kiselyov, Yukiyoshi Kameyama:</p>
<blockquote ><p >This position paper argues for type-level metaprogramming, wherein types and type declarations are generated in addition to program terms. Term-level metaprogramming, which allows manipulating expressions only, has been extensively studied in the form of staging, which ensures static type safety with a clean semantics with hygiene (lexical scoping). However, the corresponding development is absent for type manipulation. We propose extensions to staging to cover ML-style module generation and show the possibilities they open up for type specialization and overhead-free parametrization of data types equipped with operations. We outline the challenges our proposed extensions pose for semantics and type safety, hence offering a starting point for a long-term program in the next stage of staging research. The key observation is that type declarations do not obey scoping rules as variables do, and that in metaprogramming, types are naturally prone to escaping the lexical environment in which they were declared. This sets next-stage staging apart from dependent types, whose benefits and implementation mechanisms overlap with our proposal, but which does not deal with type-declaration generation. Furthermore, it leads to an interesting connection between staging and the logic of definitions, adding to the study’s theoretical significance.</p></blockquote>
<p >A position paper describing the next logical progression of staging to metaprogramming over types. Now with the true first-class modules of <a href="http://lambda-the-ultimate.org/node/5121">1ML</a>, perhaps there's a clearer way forward.</p>
Functional, Meta-Programming, Theory, Type Theory | Sun, 29 Mar 2015 13:34:48 +0000

Conservation laws for free!
http://lambda-the-ultimate.org/node/5078
<p >In this year's <A href="http://popl.mpi-sws.org/2014/">POPL</A>, <A href="http://bentnib.org">Bob Atkey</A> made a splash by showing how to get <A href="http://bentnib.org/conservation-laws.pdf">from parametricity to conservation laws, via Noether's theorem</A>:</p>
<blockquote ><p >
Invariance is of paramount importance in programming languages and in physics. In programming languages, John Reynolds’ theory of relational parametricity demonstrates that parametric polymorphic programs are invariant under change of data representation, a property that yields “free” theorems about programs just from their types. In physics, Emmy Noether showed that if the action of a physical system is invariant under change of coordinates, then the physical system has a conserved quantity: a quantity that remains constant for all time. Knowledge of conserved quantities can reveal deep properties of physical systems. For example, the conservation of energy, which by Noether’s theorem is a consequence of a system’s invariance under time-shifting.</p>
<p > In this paper, we link Reynolds’ relational parametricity with Noether’s theorem for deriving conserved quantities. We propose an extension of System Fω with new kinds, types and term constants for writing programs that describe classical mechanical systems in terms of their Lagrangians. We show, by constructing a relationally parametric model of our extension of Fω, that relational parametricity is enough to satisfy the hypotheses of Noether’s theorem, and so to derive conserved quantities for free, directly from the polymorphic types of Lagrangians expressed in our system.
</p></blockquote>
Category Theory, Fun, Functional, Lambda Calculus, Scientific Programming, Semantics, Theory, Type Theory | Tue, 28 Oct 2014 07:52:46 +0000

Luca Cardelli Festschrift
http://lambda-the-ultimate.org/node/5044
<p >Earlier this week Microsoft Research Cambridge organised a <A href="http://research.microsoft.com/en-us/events/lucacardellifest/">Festschrift</A> for Luca Cardelli. The preface from the <A href="http://research.microsoft.com/pubs/226237/Luca-Cardelli-Fest-MSR-TR-2014-104.pdf">book</A>:</p>
<blockquote ><p >
Luca Cardelli has made exceptional contributions to the world of programming languages and beyond. Throughout his career, he has re-invented himself every decade or so, while continuing to make true innovations. His achievements span many areas: software; language design, including experimental languages; programming language foundations; and the interaction of programming languages and biology. These achievements form the basis of his lasting scientific leadership and his wide impact.<br >
...<br >
Luca is always asking "what is new", and is always looking to the future. Therefore, we have asked authors to produce short pieces that would indicate where they are today and where they are going. Some of the resulting pieces are short scientific papers, or abridged versions of longer papers; others are less technical, with thoughts on the past and ideas for the future. We hope that they will all interest Luca.
</p></blockquote>
<p >Hopefully the videos will be posted soon.</p>
Category Theory, Lambda Calculus, Misc Books, Semantics, Theory, Type Theory | Fri, 12 Sep 2014 10:10:08 +0000