James H. Fetzer's Program Verification: The Very Idea (1988) is one of the two most frequently cited position papers on the subject of program verification. The other is Social Processes and Proofs of Theorems and Programs by De Millo, Lipton, and Perlis (1979), previously discussed on LtU. Fetzer's paper generated a lot of heated discussion, both in subsequent issues of CACM and on Usenet.
It's not clear to me what all the fuss is about. Fetzer's main
thesis seems pretty uncontroversial:
The notion of program verification appears to trade upon an equivocation. Algorithms, as logical structures, are appropriate subjects for deductive verification. Programs, as causal models of those structures, are not. The success of program verification as a generally applicable and completely reliable method for guaranteeing program performance is not even a theoretical possibility.
(See also part I, part II, and part III.)
Flexible types: Robust type inference for first-class polymorphism, by Daan Leijen:
We present HML, a type inference system that supports full first-class polymorphism where few annotations are needed: only function parameters with a polymorphic type need to be annotated. HML is a simplification of MLF where only flexibly quantified types are used. This makes the types easier to work with from a programmer's perspective, and we found that it simplifies the implementation of the type inference algorithm significantly. Still, HML retains much of the expressiveness of MLF: it is robust with respect to small program transformations, and has a simple specification of the type rules with an effective type inference algorithm that infers principal types. A reference implementation of the type system is available at: http://research.microsoft.com/users/daan/pubs.html.
Seems there's been a flurry of activity on first-class polymorphism recently, with HML, FPH, and HMF as simpler alternatives to full MLF.
FPH: First-class Polymorphism for Haskell, by Dimitrios Vytiniotis, Stephanie Weirich and Simon Peyton Jones:
Languages supporting polymorphism typically have ad-hoc restrictions on where polymorphic types may occur. Supporting "first-class" polymorphism, by lifting those restrictions, is obviously desirable, but it is hard to achieve this without sacrificing type inference. We present a new type system for higher-rank and impredicative polymorphism that improves on earlier proposals: it is an extension of Damas-Milner; it relies only on System F types; it has a simple, declarative specification; it is robust to program transformations; and it enjoys a complete and decidable type inference algorithm.
Under Related Work, the authors provide a detailed comparison of their system with MLF and HMF.
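For readers new to the area, here is a minimal Haskell sketch (the RankNTypes extension and the function name applyBoth are mine, purely for illustration) of the kind of program that motivates all three systems: the argument f is used at two different types, so it needs a polymorphic, rank-2 type that plain Damas-Milner inference cannot assign without an annotation.

```haskell
{-# LANGUAGE RankNTypes #-}

-- 'f' is applied at both Int and Bool, so it must have a polymorphic type.
-- The rank-2 annotation below is exactly the kind of annotation that HML,
-- FPH, and HMF try to keep to a minimum.
applyBoth :: (forall a. a -> a) -> (Int, Bool)
applyBoth f = (f 1, f True)

main :: IO ()
main = print (applyBoth id)   -- prints (1,True)
```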
HMF: Simple type inference for first-class polymorphism - Daan Leijen, Draft April 8, 2008.
HMF is a conservative extension of Hindley-Milner type inference with first-class polymorphism and regular System F types. The system distinguishes itself from other proposals with simple type rules and a very simple type inference algorithm that is just a small extension of the usual Damas-Milner algorithm. Given the relative simplicity and expressive power, we feel that HMF can be a very attractive type system in practice. There is a reference implementation of the type system available at: http://research.microsoft.com/users/daan/pubs.html.
An excellent paper even in its current draft form. I also placed this under the learning category, because Daan's writing style is lucid enough that the concepts can be understood by relative newcomers to the field of type theory.
The recent discussion around Safe and Secure Software in Ada included some debate about what is involved in proving software secure, and what role PLs play in this. I recommend two papers for further discussion:
- First, Rao & Rohatgi (2001), EMpowering Side-channel attacks, which discusses a fairly new technology for gathering information from running systems by monitoring their EM emissions; and
- Rae & Wildman (2003), A Taxonomy of Attacks on Secure Devices, which provides a synthetic classification of attacks on computer systems based on the attackers degree of access to the machinery and the attacker's objectives, and which catalogues a range of attacks into the classification.
So I hereby advance three slogans:
- Security is physical: neither applications nor operating systems can satisfy elementary security properties, but only deployed computer systems. This is because elementary security properties are about what the attacker does, which ultimately has a physical basis;
- Security is non-modular: Programming languages and software engineering practices can ensure that software possesses properties helpful to security, but the properties are only meaningful in the context of a strategy to ensure a computer system satisfies its security policy;
- We should not talk of secure programming languages and secure programs; such talk misleads. Talking instead of software being securable might promote better understanding.
Edited following Dave Griffith's remarks.
Algebra of programming using dependent types. S-C. Mu, H-S. Ko, and P. Jansson.
Dependent type theory is rich enough to express that a program satisfies an input/output relational specification, but it could be hard to construct the proof term. On the other hand, squiggolists know very well how to show that one relation is included in another by algebraic reasoning. We demonstrate how to encode functional and relational derivations in a dependently typed programming language. A program is coupled with an algebraic derivation from a specification, whose correctness is guaranteed by the type system.
Code accompanying the paper has been developed into an Agda library AoPA.
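As a rough flavour of the underlying idea (this is plain Haskell with GADTs, not the paper's Agda/AoPA development; the names Vec, Add, and append are purely illustrative), even a modest type system can couple a program with part of its input/output specification. Dependent types extend this to full relational specifications and the algebraic derivations that witness them.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Type-level naturals and addition.
data Nat = Z | S Nat

type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

-- Length-indexed vectors: the length is part of the type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The type states part of the specification: the output length is the
-- sum of the input lengths.  The type checker rejects definitions that
-- violate it.
append :: Vec m a -> Vec n a -> Vec (Add m n) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

toList :: Vec n a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = print (toList (append (Cons 1 (Cons 2 Nil)) (Cons 3 Nil)))
```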
Games industry curmudgeon and interactive storytelling proponent Chris Crawford spoke at the Game Developers Exchange conference here in Atlanta yesterday. As a part of the talk he explained the "Nine Breakthroughs" that were important to his work on Storytron. I recorded them here...
Surprisingly, many of the ideas are about languages - either the programming languages used to program the games, or the language interfaces provided to users.
Register Allocation by Proof Transformation, Atsushi Ohori. ESOP 2003.
This paper presents a proof-theoretical framework for register allocation that accounts for the entire process of register allocation. Liveness analysis is proof reconstruction (similar to type inference), and register allocation is proof transformation from a proof system with unrestricted variable accesses to the one with restricted variable access. In our framework, the set of registers acts as the "working set" of the live variables at each instruction step, which changes during the execution of the code. This eliminates the ad-hoc notion of "spilling". The necessary memory-register moves are systematically incorporated in the proof transformation process. Its correctness is a simple corollary of our construction; the resulting proof is equivalent to the proof of the original code modulo the treatment of structural rules. This yields a simple yet powerful register allocation algorithm. The algorithm has been implemented, demonstrating the feasibility of the framework.
The idea that making uses of the structural rules explicit gives you proof terms to model register-memory moves is very pretty. Another fun thing to do would be to take a continuation calculus and apply the ideas here to see if it produces a good low-level IR.
EDIT: Ehud asked for some further exposition, so here goes.
At a high level, you can think of the need to do register allocation as arising from a mismatch between a programming language and the hardware. In a language, we refer to data using variables, and in any given expression, we can use as many variables as we like. However, when a CPU does stuff, it wants the data to be in registers -- and it has only a small, finite set of them. So when a program is compiled, some variables can be represented by registers, and the rest must be represented by locations in memory (usually on the stack). Whenever the CPU needs to use a variable in memory, there must be explicit code to move it from memory into a register.
The idea in this paper is to take the typing derivation of a program with an unbounded variable set, and then divide the context into two parts, one representing the register file and the other representing variables in memory. In terms of the implementation, moves between these two zones correspond to register-memory moves; and in terms of logic, this is a use of the structural rule of Exchange, which permutes the order of variables in a context.
So this gives us a high-level, machine-independent characterization of the register allocation problem: take a one-zone derivation and convert it to a two-zone derivation. But it gets even better: as long as the register allocation algorithm only adds uses of the structural rules in its transformation, we know that the meaning of the original program is unchanged -- so this method also yields a simple way of proving that a register allocation pass is semantics-preserving. (The fact that this is an easy proof is one indication of the power of this idea.)
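To make the two-zone picture concrete, here is a small Haskell sketch (entirely my own, not Ohori's proof-theoretic formulation; Zones, ensureInRegister, and the register bound k are invented for illustration). Moving a variable between the register zone and the memory zone is an explicit step that emits a load or store, just as a use of Exchange in the derivation corresponds to a register-memory move.

```haskell
type Var = String

-- The two zones of the context: variables currently held in registers,
-- and variables spilled to memory.  The register zone has bounded size.
data Zones = Zones
  { registers :: [Var]
  , memory    :: [Var]
  } deriving Show

-- Pretend the machine has two registers.
k :: Int
k = 2

-- Before a variable is used it must sit in the register zone.  Moving it
-- there (evicting a victim when the zone is full) corresponds to the
-- explicit register-memory moves that the proof transformation inserts.
ensureInRegister :: Var -> Zones -> (Zones, [String])
ensureInRegister v z@(Zones regs mem)
  | v `elem` regs   = (z, [])
  | length regs < k = (Zones (v : regs) (filter (/= v) mem), ["load " ++ v])
  | otherwise       =
      let victim = last regs
          regs'  = v : init regs
          mem'   = victim : filter (/= v) mem
      in (Zones regs' mem', ["store " ++ victim, "load " ++ v])

main :: IO ()
main = do
  let z0       = Zones [] ["x", "y", "z"]
      (z1, m1) = ensureInRegister "x" z0
      (z2, m2) = ensureInRegister "y" z1
      (z3, m3) = ensureInRegister "z" z2   -- forces a spill
  mapM_ putStrLn (m1 ++ m2 ++ m3)
  print z3
```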
Christian Urban, James Cheney, Stefan Berghofer. Mechanizing the Metatheory of LF. Extended tech report accompanying LICS 2008 paper.
LF is a dependent type theory in which many other formal systems can be conveniently embedded. However, correct use of LF relies on nontrivial metatheoretic developments such as proofs of correctness of decision procedures for LF's judgments. Although detailed informal proofs of these properties have been published, they have not been formally verified in a theorem prover. We have formalized these properties within Isabelle/HOL using the Nominal Datatype Package, closely following a recent article by Harper and Pfenning. In the process, we identified and resolved a gap in one of the proofs and a small number of minor lacunae in others. Besides its intrinsic interest, our formalization provides a foundation for studying the adequacy of LF encodings, the correctness of Twelf-style metatheoretic reasoning, and the metatheory of extensions to LF.
This paper is a substantial exercise in applying the approach of Harper and Licata in Mechanizing Language Definitions reported before, and is an advance towards a worthy objective in the campaign to mechanise metatheory that Paul espouses...
Over at Russ Cox's blog:
In 1997, on his retirement from Bell Labs, Doug McIlroy gave a fascinating talk about the "History of Computing at Bell Labs" ...
My favorite parts of the talk are the description of the bi-quinary decimal relay calculator and the description of a team that spent over a year tracking down a race condition bug in a missile detector (reliability was king: today you'd just stamp "cannot reproduce" and send the report back). But the whole thing contains many fantastic stories. It's well worth the read or listen. I also like his recollection of programming using cards: "It's the kind of thing you can be nostalgic about, but it wasn't actually fun."