Type Theory
In On the Strength of Proof-Irrelevant Type Theories (Logical Methods in Computer Science, 2008), Benjamin Werner explores a type theory that, among other things, might be a first step towards making a proof assistant that is also a more traditional programming environment.
Abstract. We present a type theory with some proof-irrelevance built into the conversion rule. We argue that this feature is useful when type theory is used as the logical formalism underlying a theorem prover. We also show a close relation with the subset types of the theory of PVS. We show that in these theories, because of the additional extensionality, the axiom of choice implies the decidability of equality, that is, almost classical logic. Finally we describe a simple set-theoretic semantics.
A suggested application lies a few paragraphs in.
In order to identify more terms it is thus tempting to [integrate] program extraction into the formalism so that the conversion rule does not require the computationally irrelevant parts of terms to be convertible.
In what follows, we present and argue in favor of a type-theory along this line. More precisely, we claim that such a feature is useful in at least two respects. For one, it gives a more comfortable type theory, especially in the way it handles equality. Furthermore it is a good starting point to build a platform for programming with dependent types, that is to use the theorem prover also as a programming environment. ...
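For a concrete taste of what building proof irrelevance into conversion buys, here is a one-liner in Lean 4 (Lean's kernel, not Werner's system, but it bakes the same principle into definitional equality): any two proofs of the same proposition are convertible, so rfl suffices.

    -- Lean 4 makes Prop proof-irrelevant by definition: two proofs of the
    -- same proposition are definitionally equal, so conversion never needs
    -- to compare the proof terms themselves.
    example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl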
Functional Pearl: Type-safe pattern combinators, by Morten Rhiger:
Macros still have not made their way into typed higher-order programming languages such as Haskell and Standard ML. Therefore, to extend the expressiveness of Haskell or Standard ML gradually, one must express new linguistic features in terms of functions that fit within the static type systems of these languages. This is particularly challenging when introducing features that span across multiple types and that bind variables. We address this challenge by developing, in a step-by-step manner, mechanisms for encoding patterns and pattern matching in Haskell in a type-safe way.
This approach relies on continuation-passing style for a full encoding of pattern matching. Tullsen's First-Class Patterns relies on a monadic encoding of pattern matching to achieve abstraction over patterns. Given the relationship between CPS and monads, the two approaches likely share an underlying structure.
Being able to abstract over patterns opens up a whole new level of reuse. The main concern is whether these more flexible structures compile to the same efficient pattern-matching code we get when the language supports patterns natively. Section 4.9 discusses the efficiency concerns and suggests that partial evaluation can eliminate the overhead completely.
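To give the flavour of the CPS encoding, here is a minimal sketch of pattern combinators in Haskell. The names and types are illustrative rather than Rhiger's actual definitions: a pattern takes the scrutinee and a success continuation, and either feeds the values it binds to the continuation or fails.

    -- A pattern takes a value and a success continuation and either applies
    -- the continuation to the values it binds or signals failure.
    type Pattern a k r = a -> k -> Maybe r

    -- Literal pattern: binds nothing, so the continuation is the result itself.
    lit :: Eq a => a -> Pattern a r r
    lit x v k = if v == x then Just k else Nothing

    -- Variable pattern: binds the scrutinee, so the continuation takes one argument.
    var :: Pattern a (a -> r) r
    var v k = Just (k v)

    -- Pair pattern: match both components, threading the continuation through.
    pair :: Pattern a k1 k2 -> Pattern b k2 r -> Pattern (a, b) k1 r
    pair p q (a, b) k = p a k >>= q b

    -- Run a pattern against a value, falling back to a default on failure.
    match :: a -> Pattern a k r -> k -> r -> r
    match v p k def = maybe def id (p v k)

    -- Example: add the components of a pair.
    addPair :: (Int, Int) -> Int
    addPair v = match v (pair var var) (\x y -> x + y) 0

The continuation's type records exactly which variables a pattern binds, so the ordinary Haskell type checker catches arity and type mismatches between patterns and their handlers.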
Type inference for correspondence types
We present a type and effect system for proving correspondence assertions in a π-calculus with polarized channels, dependent pair types and effect terms. Given a process P and a type environment E, we describe how to generate constraints that are formulae in the Alternating Least Fixed-Point (ALFP) logic. A reasonable model of the generated constraints yields a type and effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only if it has a finite model. As a consequence, we obtain the result that type and effect inference in our system is polynomial-time decidable.
That's a mouthful. The part of the paper that perked my virtual ears up:
Most importantly, our approach is general in nature; the encoding of types and terms does not depend on the rules of the type system. For this reason, our approach appears a natural candidate for obtaining similar type inference results for type systems such as [9], where correspondences concern formulas in an arbitrary authorization logic and the underlying process calculus includes cryptographic operations, and type systems for secrecy properties such as [12]. The possibility of such ramifications is currently under investigation.
[9] is A type discipline for authorization policies, which is followed up by Type-checking zero knowledge. The upshot is that it might be possible to have reasonable type inference support for a dependent type- and effect system with cryptographic operations supporting some of the most powerful privacy and security primitives and protocols currently known. I find this very exciting!
Type-Checking Zero Knowledge
This paper presents the first type system for statically analyzing security protocols that are based on zero-knowledge proofs. We show how certain properties offered by zero-knowledge proofs can be characterized in terms of authorization policies and statically enforced by a type system. The analysis is modular and compositional, and provides security proofs for an unbounded number of protocol executions. We develop a new type-checker that conducts the analysis in a fully automated manner. We exemplify the applicability of our technique to real-world protocols by verifying the authenticity and secrecy properties of the Direct Anonymous Attestation (DAA) protocol. The analysis of DAA takes less than three seconds.
A little dependent type theory, a little Pi calculus, a little cryptography... there's something for everyone! This is follow-up work to this story that Ehud posted.
The source code can be found here.
Modeling Abstract Types in Modules with Open Existential Types, by Benoît Montagu and Didier Rémy:
We propose Fzip, a calculus of open existential types that is an extension of System F obtained by decomposing the introduction and elimination of existential types into more atomic constructs. Open existential types model modular type abstraction as done in module systems. The static semantics of Fzip adapts standard techniques to deal with linearity of typing contexts, its dynamic semantics is a small-step reduction semantics that performs extrusion of type abstraction as needed during reduction, and the two are related by subject reduction and progress lemmas. Applying the Curry-Howard isomorphism, Fzip can be also read back as a logic with the same expressive power as second-order logic but with more modular ways of assembling partial proofs. We also extend the core calculus to handle the double vision problem as well as type-level and term-level recursion. The resulting language turns out to be a new formalization of (a minor variant of) Dreyer’s internal language for recursive and mixin modules.
This approach to existentials seems to considerably improve their power and simplify their use. It also brings us one step closer to first-class modules!
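For contrast, here is what the conventional, closed form of existential types looks like in Haskell: the hidden type is fixed at pack time and is only in scope within a single unpack. (This is just the standard construct the paper decomposes, not the open existentials of the calculus itself.)

    {-# LANGUAGE ExistentialQuantification #-}

    -- A closed existential: the representation type s is chosen when the
    -- value is packed and is hidden from clients, who can only use it
    -- through the operations packed alongside it.
    data Counter = forall s. Counter s (s -> s) (s -> Int)

    -- "pack": pick Int as the hidden representation
    intCounter :: Counter
    intCounter = Counter (0 :: Int) (+ 1) id

    -- "unpack": the abstract type is in scope only inside this one match,
    -- which is exactly the scoping restriction that open existentials relax
    runTwice :: Counter -> Int
    runTwice (Counter s next val) = val (next (next s))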
Type Checking with Open Type Functions, by Tom Schrijvers, Simon Peyton Jones, Manuel M. T. Chakravarty, and Martin Sulzmann:
We report on an extension of Haskell with open type-level functions and equality constraints that unifies earlier work on GADTs, functional dependencies, and associated types. The contribution of the paper is that we identify and characterise the key technical challenge of entailment checking; and we give a novel, decidable, sound, and complete algorithm to solve it, together with some practically-important variants. Our system is implemented in GHC, and is already in active use.
Related to GHC's type families, which can be an important optimization technique when specializing types. Type families allow finer control over how types are computed. I guess they didn't like C++ templates having any advantage with associated types. Personally, it reminds me of some of the power afforded by ML's parameterized functors.
(via reddit and 2 Minute intro to Associated Types/Type Families).
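To make the feature concrete, here is a small sketch of what GHC's type families and equality constraints look like in practice (standard GHC extensions, not code from the paper):

    {-# LANGUAGE TypeFamilies #-}

    import Data.Word (Word)

    -- An associated type family: each collection chooses its own element type.
    class Collection c where
      type Elem c
      empty  :: c
      insert :: Elem c -> c -> c
      toList :: c -> [Elem c]

    instance Collection [a] where
      type Elem [a] = a
      empty  = []
      insert = (:)
      toList = id

    -- An open, top-level type function.
    type family Unsigned t
    type instance Unsigned Int = Word

    -- The equality constraint Elem c ~ Int is discharged by exactly the
    -- entailment checking the paper is about.
    sumInts :: (Collection c, Elem c ~ Int) => c -> Int
    sumInts = sum . toList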
Lengrand & Miquel (2008). Classical Fω, orthogonality and symmetric candidates. Annals of Pure and Applied Logic 153:3-20.
We present a version of system Fω, called Fω^C, in which the layer of type constructors is essentially the traditional one of Fω, whereas provability of types is classical. The proof-term calculus accounting for the classical reasoning is a variant of Barbanera and Berardi’s symmetric λ-calculus. We prove that the whole calculus is strongly normalising. For the layer of type constructors, we use Tait and Girard’s reducibility method combined with orthogonality techniques. For the (classical) layer of terms, we use Barbanera and Berardi’s method based on a symmetric notion of reducibility candidate. We prove that orthogonality does not capture the fixpoint construction of symmetric candidates. We establish the consistency of Fω^C, and relate the calculus to the traditional system Fω, also when the latter is extended with axioms for classical logic.
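Under Curry-Howard, classical principles in the term layer are often explained via control operators. A standard illustration (in Haskell's continuation monad, not the symmetric λ-calculus the paper actually uses) is that callCC inhabits Peirce's law:

    import Control.Monad.Trans.Cont (Cont, callCC, runCont)

    -- Read through Curry-Howard, Peirce's law ((a -> b) -> a) -> a is the
    -- type of a control operator: callCC inhabits it in the continuation monad.
    peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
    peirce = callCC

    -- The captured continuation acts like a classical "escape": safeDiv
    -- aborts the whole computation with Nothing on division by zero.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv x y = runCont body id
      where
        body = peirce (\abort -> do
          q <- if y == 0 then abort Nothing else pure (x `div` y)
          pure (Just q))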
A rather amusing thread on the plt-scheme mailing list reminded me of Wadler's paper Why Calculating is Better than Scheming from 1987, which wasn't discussed here as far as I recall.
It's a bit of a blast from the past, but still worth reading.
Verifiable Functional Purity in Java. Matthew Finifter, Adrian Mettler, Naveen Sastry, and David Wagner.
To appear at the 15th ACM Conference on Computer and Communications Security (CCS 2008).
Proving that particular methods within a code base are functionally pure - deterministic and side-effect free - would aid verification of security properties including function invertibility, reproducibility of computation, and safety of untrusted code execution. Until now it has not been possible to automatically prove a method is functionally pure within a high-level imperative language in wide use such as Java. We discuss a technique to prove that methods are functionally pure by writing programs in a subset of Java called Joe-E; a static verifier ensures that programs fall within the subset. In Joe-E, pure methods can be trivially recognized from their method signature.
The paper includes a nice discussion of security benefits that can stem from being able to identify pure functions (of course, it is not obvious that guarantees at the programming language level are maintained at the run-time level). I am sure many here have opinions about whether it makes more sense to try to graft purity onto an imperative language, exposing it as an added feature, or to move programmers to functional languages.
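As a point of comparison (a loose Haskell analogy, not Joe-E), this is the property a pure functional language already exposes in the signature: effects show up in the type, so purity is "trivially recognized" in the same sense.

    import Data.IORef (IORef, readIORef, writeIORef)

    -- An effectful method: the IO in the signature advertises the side effect.
    bumpCounter :: IORef Int -> IO Int
    bumpCounter ref = do
      n <- readIORef ref
      writeIORef ref (n + 1)
      pure (n + 1)

    -- A pure function: no IO in the type means it is deterministic and
    -- side-effect free, the property the Joe-E verifier establishes for Java.
    invertTable :: Eq b => [(a, b)] -> b -> Maybe a
    invertTable table y = lookup y [ (b, a) | (a, b) <- table ]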
In Polymorphic Algebraic Data Type Reconstruction, Tom Schrijvers and Maurice Bruynooghe work some magic: in addition to normal type declaration inference, they infer ADT definitions. From the abstract:
One of the disadvantages of statically typed languages is the programming overhead caused by writing all the necessary type information: both type declarations and type definitions are typically required. Traditional type inference aims at relieving the programmer from the former. We present a rule-based constraint rewriting algorithm that reconstructs both type declarations and type definitions, allowing the programmer to effectively program type-less in a strictly typed language.
Is the algorithm guaranteed to terminate? Well...
...theoretically there must be some programs for which it does not terminate. However, as with Henglein’s algorithm, we are not aware of any actual program for which it does not terminate.
We quote Henglein to emphasize that the termination of the inference is only an issue for programs that have problematic types:
. . . why type checking is no more practical than type inference: there are constructible ML-programs that fit on a page and are, at least theoretically, well typed, yet writing their principal types would require more than the number of atoms in the universe. So writing an explicitly typed version of the program is impossible to start with.
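The phenomenon Henglein alludes to is easy to reproduce in miniature (a Haskell sketch, much simpler than his actual ML examples): a few lines of code can have a principal type whose size is exponential in the size of the program.

    -- Each application of dup doubles the size of the inferred type, so n
    -- nested calls force a principal type with 2^n components.
    dup :: a -> (a, a)
    dup x = (x, x)

    -- The type of big is a nested pair with 2^5 = 32 occurrences of ().
    big = dup (dup (dup (dup (dup ()))))

    main :: IO ()
    main = print big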
