Formal methods for safety critical systems

On the safety-critical mailing list an interesting discussion about formal methods is taking place. Being interested in them myself, I was surprised by the following statement from Nancy Leveson:

Virtually every software-related accident I have heard about in the past 30 years (and I've seen a lot of them) occurred while the software was "performing as specified."

http://www.cs.york.ac.uk/hise/safety-critical-archive/2009/0310.html

(Ariane 5 was not among them, then)

She also writes:

There is a reason that formal methods have not been widely adopted for the past 40 years that academics have been telling practitioners they must use them. Before they are used, the researchers will have to provide:
1. Languages and tools that can be used by a wide range of engineers with only a bachelor's degree. They cannot be usable only by a small group of Ph.D. mathematicians.
2. Extensions of the techniques to the things they don't cover. Formal methods have been very oversold. They do not handle many of the most important problems.
3. Careful demonstration and experimentation to show that formal methods are less error-prone and more effective than less formal methods for the hardest parts of the problem -- not just the parts that they work best on.

http://www.cs.york.ac.uk/hise/safety-critical-archive/2009/0321.html

and

My students were very fine mathematicians. But they were not pilots or engineers--it required the latter to find the errors my students could not. Formal methods experts cannot and should not be "gurus" outside their field. Most software-related accidents are caused by errors in the requirements specification -- not typos or simple things to find, but misunderstanding basic things about the controlled system and the behavior of the computer. That requires a domain expert, not a mathematician.

http://www.cs.york.ac.uk/hise/safety-critical-archive/2009/0325.html

So the aim is to have formal languages that domain experts actually understand. Which of your favourite FM languages passes this test?

Some of them are based on LISP notation, which is very compsci-biased; still, these systems (PVS, ACL2) are among the most used in actual verification work. Isabelle and Coq have more user-friendly notation, but it is an open question whether they can bridge the gap; some might call them "cryptic" as well.
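As a concrete illustration of the notational gap, here is the classic associativity-of-append lemma in both styles. The ACL2 version appears as a comment; the proved version below it is written in Lean, used here purely as a stand-in for the Coq/Isabelle family of tactic-based notations (it is not one of the systems named above).

```lean
-- ACL2's Lisp-style statement of append associativity:
--
--   (defthm app-assoc
--     (equal (app (app x y) z)
--            (app x (app y z))))
--
-- The same lemma in Lean, a Coq/Isabelle-flavoured notation:
theorem app_assoc (x y z : List Nat) :
    (x ++ y) ++ z = x ++ (y ++ z) := by
  induction x with
  | nil => rfl
  | cons h t ih => simp [List.cons_append, ih]
```

Whether a domain expert finds either form readable is, of course, exactly the question at hand.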

What are your favourite languages in this sense?

Observational Equality, Now!

In Observational Equality, Now! Thorsten Altenkirch, Conor McBride, and Wouter Swierstra have

something new and positive to say about propositional equality in programming and proof systems based on the Curry-Howard correspondence between propositions and types. We have found a way to present a propositional equality type

  • which is substitutive, allowing us to reason by replacing equal for equal in propositions;
  • which reflects the observable behaviour of values rather than their construction: in particular, we have extensionality — functions are equal if they take equal inputs to equal outputs;
  • which retains strong normalisation, decidable typechecking and canonicity — the property that closed normal forms inhabiting datatypes have canonical constructors;
  • which allows inductive data structures to be expressed in terms of a standard characterisation of well-founded trees;
  • which is presented syntactically — you can implement it directly, and we are doing so—this approach stands at the core of Epigram 2;
  • which you can play with now: we have simulated our system by a shallow embedding in Agda 2, shipping as part of the standard examples package for that system [21].

Until now, it has always been necessary to sacrifice some of these aspects. The closest attempt in the literature is Altenkirch's construction of a setoid-model for a system with canonicity and extensionality on top of an intensional type theory with proof-irrelevant propositions [4]. Our new proposal simplifies Altenkirch's construction by adopting McBride's heterogeneous approach to equality.
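Two of the properties listed above, substitutivity and extensionality, can be seen in a short sketch. The sketch is in Lean (chosen only for illustration, it is not the paper's system): Lean proves function extensionality via its quotient types, whereas a purely intensional theory cannot without postulating it and losing canonicity, which is the gap observational equality closes syntactically.

```lean
-- Substitutivity: a proved equality lets us replace equal for equal.
example (f g : Nat → Nat) (h : f = g) (x : Nat) : f x = g x := by
  rw [h]

-- Extensionality: functions taking equal inputs to equal outputs are
-- propositionally equal. `funext` is a theorem in Lean (via quotients);
-- in a plain intensional theory it must be added as an axiom.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h
```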

Certified Web Services in Ynot

Certified Web Services in Ynot

In this paper we demonstrate that it is possible to implement certified web systems in a way not much different from writing Standard ML or Haskell code, including use of imperative features like pointers, files, and socket I/O. We present a web-based course gradebook application developed with Ynot, a Coq library for certified imperative programming. We add a dialog-based I/O system to Ynot, and we extend Ynot’s underlying Hoare logic with event traces to reason about I/O behavior. Expressive abstractions allow the modular certification of both high level specifications like privacy guarantees and low level properties like memory safety and correct parsing.

Ynot, always ambitious, takes another serious swing: extracting a real web application from a proof development. In some respects the big news here is the additional coverage that Ynot now offers in terms of support for file and socket I/O, and the event trace mechanism. But there's even bigger news, IMHO, which is the subject of another paper that warrants a separate post.
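The event-trace idea can be pictured with a toy model. The sketch below is in Lean rather than Ynot's Coq, and every name in it is illustrative, not Ynot's actual API: I/O behaviour is recorded as a list of events, and specifications such as privacy guarantees become predicates over traces.

```lean
-- A toy model of event traces (illustrative names, not Ynot's):
-- each I/O action contributes an event to the program's trace.
inductive Event where
  | recv (msg : String)
  | send (msg : String)

abbrev Trace := List Event

-- A privacy-flavoured specification: the program never sends
-- the secret, whatever it receives along the way.
def neverLeaks (secret : String) : Trace → Prop
  | [] => True
  | .send m :: t => m ≠ secret ∧ neverLeaks secret t
  | _ :: t => neverLeaks secret t

-- Receiving data cannot violate the property.
example (s m : String) : neverLeaks s [.recv m] := by
  simp [neverLeaks]
```

In Ynot proper, such trace predicates appear in pre- and postconditions of the Hoare logic, so the certified gradebook's privacy guarantee is checked alongside memory safety.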

Effective Interactive Proofs for Higher-Order Imperative Programs

Effective Interactive Proofs for Higher-Order Imperative Programs

We present a new approach for constructing and verifying higher-order, imperative programs using the Coq proof assistant. We build on the past work on the Ynot system, which is based on Hoare Type Theory. That original system was a proof of concept, where every program verification was accomplished via laborious manual proofs, with much code devoted to uninteresting low-level details. In this paper, we present a re-implementation of Ynot which makes it possible to implement fully-verified, higher-order imperative programs with reasonable proof burden. At the same time, our new system is implemented entirely in Coq source files, showcasing the versatility of that proof assistant as a platform for research on language design and verification.

Both versions of the system have been evaluated with case studies in the verification of imperative data structures, such as hash tables with higher-order iterators. The verification burden in our new system is reduced by at least an order of magnitude compared to the old system, by replacing manual proof with automation. The core of the automation is a simplification procedure for implications in higher-order separation logic, with hooks that allow programmers to add domain-specific simplification rules.
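The "hooks" for domain-specific simplification rules have a familiar analogue in other provers. As a hedged illustration (in Lean, not Ynot's Coq infrastructure), registering a lemma with the simplifier plays the same role as adding a rewrite rule to Ynot's entailment procedure: the core automation stays generic while users teach it their domain.

```lean
-- Registering a domain-specific rewrite: the @[simp] attribute adds
-- this lemma to the simplifier's rule set, analogous to a user-supplied
-- simplification hook.
@[simp] theorem double_eq (n : Nat) : n + n = 2 * n := by
  omega

-- The generic automation now discharges goals in this domain unaided.
example (n : Nat) : (n + n) + (n + n) = 2 * (2 * n) := by
  simp
```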

We argue for the effectiveness of our infrastructure by verifying a number of data structures and a packrat parser, and we compare to similar efforts within other projects. Compared to competing approaches to data structure verification, our system includes much less code that must be trusted; namely, about a hundred lines of Coq code defining a program logic. All of our theorems and decision procedures have or build machine-checkable correctness proofs from first principles, removing opportunities for tool bugs to create faulty verifications.

For years Adam Chlipala has been telling us how underutilized Coq's Ltac tactic programming language for proof automation is. Here is the... er... proof.

Various binding styles in OO

Perhaps not great news to most, but I found that this study of late binding in OO languages [1] [2] helped me better understand how the term is context-sensitive. It appears that Smalltalk was the most flexible/robust in some sense.