More information about Albatross

The Albatross web page has been updated with information about the full language and the release plan.

Btw, there's some sort of typo in the second bullet at the top of the page, either an extra word "and" or a missing noun. Emphasis added:

  • A Proof Assistant: Arbitrary mathematical theories and can be expressed and proved in Albatross

Thanks. Typo corrected.

Thanks. Typo corrected.

Related work

Interesting work!! I would love to see a related work paragraph discussing proof assistants which are also programming languages (in the sense of executable programs), as Idris is.

Thanks for the hint to

Thanks for the hint to Idris. I did not know Idris before and I will certainly look at it.

My first impression: You can do proofs in Idris and write executable programs. However I have found no hint in the documentation that Idris requires you to specify correctness conditions for your program and verifies that the correctness conditions are satisfied. It seems to me that proving and programming are two orthogonal things in Idris (but I might err because I have just flipped through the documentation rapidly).

Other systems I know are Dafny and Chalice. Coq, Isabelle/HOL, Mizar, ... I would not consider, because they cannot create executables (even if some of them allow extraction of OCaml/Haskell modules).

Any other hints to comparable systems are welcome. You are right: I have to add some section in my documentation which describes related work.


I am unconvinced by your remark that those systems are not comparable because they cannot create executables. People write certified programs with those tools and run them in practice (after an extraction pass maybe, but that is just an implementation detail, a form of compilation). The question of how writing an executable certified program differs between these tools and yours still stands.


I agree with your statement that you can look at extraction as a pass of the compiler. However you are limited by the capabilities of your source language.

Let's consider e.g. Coq with extraction to Ocaml (because I have some experience with Coq).

At the beginning of my project I had the idea of writing the Albatross compiler in Coq and then extracting it to OCaml for compilation and execution. However, Coq doesn't allow me to write imperative constructs, which in my opinion are necessary if you want your compiler to execute with reasonable performance.

Then I thought of writing the pure functional part (i.e. everything which is expressible by Coq's definitions and fixpoints) in Coq and the rest in OCaml. However, gluing the OCaml sources and the extracted functions together is a nontrivial task and not supported by any tools. Furthermore, certification would cover just the pure functional part.
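To make the gluing problem concrete, here is a toy OCaml sketch (hypothetical names; the "certified" function is hand-written here, standing in for what extraction would produce):

```ocaml
(* What extraction from Coq would give us: a pure, recursively defined
   function with a correctness proof behind it. *)
let rec sum_pure (xs : int list) : int =
  match xs with
  | [] -> 0
  | x :: rest -> x + sum_pure rest

(* Hand-written imperative glue around the certified core. Nothing checks
   that this layer respects the assumptions of the proofs: int_of_string
   may raise, and the certification covers only sum_pure. *)
let sum_of_strings (items : string list) : int =
  items |> List.map int_of_string |> sum_pure

let () = Printf.printf "%d\n" (sum_of_strings ["1"; "2"; "3"])  (* prints 6 *)
```

The boundary between the two halves is exactly where the guarantees stop, and no tool checks it.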

So I threw away this idea, wrote my compiler in OCaml, and gave up the possibility of certifying parts of my program.

Have you looked at CompCert?

I've never run into a verified proof assistant, and I've heard similar stories*. But there's a well-known C compiler written (mostly) in Coq: CompCert. Have you looked at how they do things? CompCert has been validated externally by John Regehr's work on testing C compilers, which confirmed that CompCert's quality is excellent; so even without full verification the experiment is worthwhile.

Imperative programming in Coq is supported (in principle) by efforts like Ynot, but I wouldn't bet on it being readily usable, so you're probably right that imperative programming is not well supported. But there's quite some experience in *not* using side effects in a compiler (GHC, CompCert), so I'm not convinced that you need side effects for *acceptable* performance.

*For instance these slides, which however are from 2004!

CompCert is an interesting

CompCert is an interesting example. The compiler has 3 passes. Pass 1: lexing, parsing, type checking, creation of abstract syntax trees. Pass 2: transformation of the source-code AST into an assembly AST. Pass 3: assembling and linking.

The CompCert team, which is highly experienced in using formal verification, chose to formally verify only pass 2, i.e. to write it in Coq and extract it to a compilable language. Tree transformation is a task which functional languages handle well using inductive data types and recursion, and which can be proved correct by induction.
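To illustrate the kind of tree transformation meant here (a toy sketch in OCaml, not CompCert's actual code):

```ocaml
(* An inductive AST and a recursive constant-folding pass: the shape of
   transformation that is pleasant to write in a pure functional language
   and to prove correct by structural induction. *)
type expr =
  | Const of int
  | Add of expr * expr
  | Mul of expr * expr

let rec fold (e : expr) : expr =
  match e with
  | Const _ -> e
  | Add (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x + y)
       | a', b' -> Add (a', b'))
  | Mul (a, b) ->
      (match fold a, fold b with
       | Const x, Const y -> Const (x * y)
       | a', b' -> Mul (a', b'))

(* The semantics against which the pass would be verified. *)
let rec eval (e : expr) : int =
  match e with
  | Const n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b
```

The correctness statement, `eval (fold e) = eval e` for every `e`, is exactly the kind of lemma one proves by induction on the tree in Coq.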

Passes 1 and 3 are not formally verified. The reason might be that these passes usually use imperative constructs.

Don't get me wrong. I am really impressed by the results of the CompCert team. But the CompCert example also shows how difficult it is to write formally verified code.

Not related to imperative constructs

Your presentation of CompCert is a bit off. The compiler uses many more passes than this, and your presentation with "macro passes" (aggregation of actual compiler passes) does not do justice to the proportion of verified vs. non-verified code/logic.

The current structure of the compiler is documented in the user manual (the main CompCert webpage has some nicer-looking schemas, but they appear to be a bit out of date). The link I just gave has a precise graph with all the passes and their verification status. I count 17 passes implemented in the compiler (three extra arrows are performed by external programs, namely cpp and an external assembler/linker), 3 of which are unverified.

The main reason why the "extremal" parts of the compiler (from source text to AST, and from assembly AST to assembly) are not verified is that they have no proper specification to verify against. What is the specification of an assembly printer? You could say "this piece of AST must print this", but then the verification is exactly repeating the implementation, it is no more or less trustworthy. You cannot really verify programs whose specification *is* the implementation.

It is the same thing with the elaboration pass that goes from a full-C-language AST (produced by a parser) to the more abstract AST used by the compiler. The abstract AST has a formal semantics that people familiar with the C standard can read and say "yes, this is a correct formal semantics". By contrast, the desugaring process from concrete to abstract syntax tree has a semantics that is exactly its implementation. Similarly, type-checking for C programs is (trivial and) essentially specified as an algorithm.

There are two notable exceptions to my point that those extremal parts have no specification to verify against other than their implementation:

- The *parser* is a complex object (parsing automaton etc.) generated from a more declarative *grammar* (if you forget about lexer hacks etc.) that can serve as a specification. CompCert uses a certifying parser generator that will produce a readable formal specification of the grammar, and prove that the generated parser verifies it. In this sense the parser is verified.

- If, instead of producing assembly as source text to pass to external assemblers, you decided to directly produce a binary stream yourself, there would be things to prove against the specification of instruction layout. To my knowledge the CompCert project does not attempt this (it produces a textual assembly file, with a validation tool to verify that the final ELF executable is not too dissimilar to the abstract representation it started from), but there are other compiler verification projects that worked on this, see for example Coq: The world’s best macro assembler? by Andrew Kennedy, Nick Benton, Jonas Jensen and Pierre-Évariste Dagand, 2013.

To summarize my point, the proportion of verified vs. non-verified code is much higher than your description suggests, and the verification boundaries are not defined by convenience, but by the fact that "verification" loses its meaning at the limit where there is no specification less complex than an implementation.

In fact, a large part of the verified compiler works on control-flow graphs, so it is not at all about "tree transformations" that would be more convenient in a pure language.

I think it is true, however, that having efficient mutable data-structures would sometimes be convenient for Coq-using programs. CompCert went surprisingly far with pure code only (using of course monads etc.), demonstrating that it is in fact not that important for sizeable compilers, but for example the Verasco static analyzer pays a performance overhead for not having real hash tables available. Some people are working on that (in several different directions).
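The trade-off can be made concrete in OCaml terms (a sketch, not Verasco's code): extracted Coq code can target persistent structures like the first version below, while the destructive second version is unavailable to it without extra machinery.

```ocaml
(* Persistent map (what pure/extracted code can use). *)
module M = Map.Make (String)

let pure_counts (words : string list) : int M.t =
  (* O(log n) per update; every intermediate map remains valid. *)
  List.fold_left
    (fun m w -> M.update w (function None -> Some 1 | Some c -> Some (c + 1)) m)
    M.empty words

(* Mutable hash table (unavailable to pure extracted code). *)
let imperative_counts (words : string list) : (string, int) Hashtbl.t =
  (* O(1) amortized per update, but destructive. *)
  let h = Hashtbl.create 16 in
  List.iter
    (fun w ->
      Hashtbl.replace h w (1 + Option.value ~default:0 (Hashtbl.find_opt h w)))
    words;
  h
```

Both compute the same word counts; the performance gap between the two is the overhead being paid.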

Sorry, I have used the

Sorry, I have used the high-level presentation available on the CompCert project page. If it is not accurate, then don't blame me.

I highly appreciate what has been achieved by the CompCert project and I have no intention of making their achievements look smaller than they are. So thank you for the clarification.

However the point I wanted to make is different.

Using Coq+extraction is not too difficult for language elements which are already present in Coq, like inductive data types and recursion, i.e. pure functional programming.

Things get much more complicated if you want to introduce imperative elements. Adam Chlipala has described in his Ynot project how to represent mutable data structures like counters, queues, etc. in Coq. The problem: unless you are a very experienced user of Coq, it is difficult to understand and even more difficult to use in a real project.

Obviously the CompCert team have succeeded in using imperative elements (as you described by using monads). But I am quite sure that the price to pay was high.

Idris supports proving programs correct

However I have found no hint in the documentation that Idris requires you to specify correctness conditions for your program and verifies that the correctness conditions are satisfied. It seems to me that proving and programming are two orthogonal things in Idris (but I might err because I have just flipped through the documentation rapidly).

Idris is in this respect no different from Coq, Agda, and so on. You can write a specification and prove that your program satisfies it, and that's optional in all these systems (let me ignore totality which is trickier). Idris is more geared toward "practical" programming, where you don't verify *everything*, so docs might have a different slant.

In fact, some specification is required: type signatures aren't optional, and there's no sharp boundary between type signatures and specifications in these systems (because of Curry-Howard, which I won't try to explain here).

Also, I'm not sure how a system could "require you to specify correctness conditions" in a sense that's not shared by Idris. Even if a tool required a specification, the spec might accept arbitrary functions; the type signature required by Idris is at least as restrictive (if not more).
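One way to see "type signatures are specifications" concretely is a length-indexed vector, sketched here in Lean (Idris's standard library has an analogous `Vect` type; the names below are illustrative):

```lean
-- A vector whose length is part of its type.
inductive Vect (α : Type) : Nat → Type where
  | nil  : Vect α 0
  | cons : α → Vect α n → Vect α (n + 1)

-- The signature alone specifies that appending adds the lengths;
-- the typechecker rejects any implementation that violates this.
-- (The index is written m + n so both equations hold definitionally.)
def append : Vect α n → Vect α m → Vect α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)
```

An `append` that dropped or duplicated elements in a way that changed the length simply would not typecheck, with no separate proof obligation written down.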

Complete verification

My goal in Albatross: all source code is verified. The compiler won't let anything pass which is not verified.

Up to now I have provided only version "0.1", which does not yet have processes, mutability, imperative constructs etc., and it already took me 2 years to arrive at version "0.1". It might be that I need two more years to arrive at language completeness.

But I am determined not to deviate from the goal that all source code has to be verified.

What is "verification"?

The tricky question is - what does it mean to be "verified"? You can only verify something relative to a specification, and if that specification is as vague as "the program should do some IO" then the compiler is going to be happy with that. In this sense, in Idris, the compiler also won't allow anything which is not verified, because it won't allow anything which isn't well typed.

This is, by the way, leaving aside any pragmatic issues such as: during development it's useful to work with partial functions temporarily; if you want to interact with external libraries written e.g. in C you might need to relax things a bit; sometimes there are things we know but are hard to prove formally; etc. There is a totality checker, but it's up to the programmer whether to require totality or not.

I think Albatross and Idris share a lot of goals, and it's really good to see more experiments in this space. We don't yet know enough about how programming languages should support program verification, so we're bound to learn something interesting by trying a few different approaches.

I'd recommend taking a closer look at Idris anyway (though I would say that). We have a few interesting examples, such as a library for reasoning about concurrent programs (github repo; draft paper), a precisely specified Hangman game (source; relevant paper), and a mini space invaders game.

There was a JFP paper last year describing the details (preprint available here) which goes into far more detail than I can here...

Here ends the shameless self promotion. For now :).

Good point! As soon as we

Good point! As soon as we can verify a program against its specification, the specification will be the next weak point and the compiler won't help here.

If your program specification is "The program writes some string to standard output" then the Albatross compiler will certainly accept this program, as long as the program actually writes something to standard output.

However, in Albatross the program cannot do anything else, like secretly opening a socket, reading some files from your hard disk, and sending the contents to some clandestine server. And that is not a bad thing either.

But anyhow thanks for the feedback and for the links to the papers.

Very Ambitious

I'm happy to see this sort of work.

Do you believe the provers and compilers will be fast enough (or make sufficient use of caching) to be acceptable in industry?

Are you doing any verification on the compiler itself? Speaking from inexperience, it seems like it would be difficult to validate the final transition from high level intermediate code down to messy machine code (though Adam Chlipala's work on Bedrock is very interesting).

What sort of assertions and proofs can be formed for concurrent Albatross processes? Can you prove there are no race conditions on a particular channel, or that a particular composition of processes is Church-Rosser, or that messages between processes are eventually delivered and received, or that message delivery is timely?

Thanks for the feedback. Let

Thanks for the feedback. Let me try to answer your questions.

1. I don't think that the performance of the Albatross compiler/verifier will be a problem. The compilation unit in Albatross is the module, whose implementation part can be compiled/verified independently. The compiler keeps track of module dependencies and does recompilation work only if necessary. Up to now I have not written any Albatross module whose compilation/verification takes more than a fraction of a second.

2. Clearly the best would be to write the Albatross compiler in its own language. But currently I am writing the compiler in OCaml and concentrating on getting this done. I do not plan to generate intermediate and machine code myself. I intend to generate code in some other language, e.g. C, and let the corresponding compiler generate the machine language.

3. For processes, Hoare's theory CSP (Communicating Sequential Processes) will be used. You can write assertions about a process by writing assertions about its traces. On my blog I have described some of the basic concepts.


You mentioned that you thought about certifying the theorem prover but didn't because of difficulties with Coq. Do you have an informal proof of the consistency of your logic? Is there another logic you can point to that has similar semantics to yours?

I don't use any special

I don't use any special logic.

The proof engine works with some very simple inference rules (like the modus ponens rule, substitution of variables in universally quantified statements etc.) which are all described in the documentation pdf of Albatross.

In the base library very few axioms are used like the double negation law in the module "boolean" and the Leibniz equality rule in the module "predicate".

These inference rules and axioms are equivalent to classical predicate logic.

Since there are just a handful of inference rules and axioms and they are very simple to read, it is easy to review them and verify my claim that they are equivalent to classical predicate logic.

For instance, substitution

It is well known that substitution is a pain to get right, and a bigger pain to formally prove right. If you're writing a theorem prover, you know that, so I'll omit references. I also expect you to know that such bugs often go undetected for years.

However, you claim that verifying correctness of Albatross is "easy" without qualifying the statement the way I'd expect. How come?

I have just said that it is

I have just said that it is easy to verify that the logic used in Albatross is equivalent to classical predicate logic (by reviewing the documentation).

This does not mean that the correctness of the implementation of the Albatross compiler is easy to verify. But this applies to all proof assistants/theorem provers. E.g. the implementation of the Coq compiler is not verified either.

There is one technique which mitigates the situation: it is much easier to verify the correctness of a proof than to generate it. The Albatross compiler satisfies the "de Bruijn" criterion because it generates for each proof a so-called proof term, and the verification that the proof term actually proves the theorem is easy and contained in a very small fraction of the code. The Albatross compiler internally verifies the correctness of all proof terms before it enters the corresponding theorem into the database.
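The flavor of such a small checker can be sketched in OCaml (a hypothetical minimal implicational logic, not Albatross's actual proof terms):

```ocaml
(* Proof terms for a minimal implicational logic. The point of the
   de Bruijn criterion: however complicated proof *search* is, proof
   *checking* is this one small, reviewable function. *)
type formula =
  | Var of string
  | Imp of formula * formula

type proof =
  | Hyp of formula          (* rule 1: use an assumption from the context *)
  | Lam of formula * proof  (* deduction rule: discharge an assumption *)
  | App of proof * proof    (* modus ponens *)

(* [check ctx p] returns [Some f] iff [p] proves [ctx |- f]. *)
let rec check (ctx : formula list) (p : proof) : formula option =
  match p with
  | Hyp f -> if List.mem f ctx then Some f else None
  | Lam (a, body) ->
      (match check (a :: ctx) body with
       | Some b -> Some (Imp (a, b))
       | None -> None)
  | App (p1, p2) ->
      (match check ctx p1, check ctx p2 with
       | Some (Imp (a, b)), Some a' when a = a' -> Some b
       | _ -> None)

(* |- a ==> a, built with the deduction rule over rule 1. *)
let id_proof = Lam (Var "a", Hyp (Var "a"))
```

An independent program containing only `check` could then re-verify every proof term the compiler emits.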

In principle the Albatross compiler could write the proof terms to a file, and an independent program could verify that the generated proofs are correct.

You'll get the invariable

You'll invariably get the question to deliver soundness and completeness results for the logic employed.

Self Implication

Reading the deduction rule, you take the implication a -> a, which you prove by modus ponens; but modus ponens is not an axiom of logic, more a method of proof. I don't intend to say you should not use it in your language, and it is also possible that I misunderstood your documentation and you don't actually do this. In any case, I am working on something similar (a logical proof system for an imperative/functional language) and wanted to discuss my concern about this method.

I think a -> a proves nothing. This is probably because I am taking an intuitionistic approach and therefore ¬a ∨ a is not necessarily true. I think this is the equivalent of the following (incorrect) logical process:

if "it is raining" implies "it is raining" then "it always rains"

I think the whole proof becomes conditional on the assumption of a. If, given a -> a, we then prove b with a assumed true, what we have really proved is a -> b.

So (note: I am using "always" to emphasise that the statement "I wear my boots" on its own is unconditional and true for all time).

if "it is raining" implies "it is raining", and if "it is raining" implies "I wear my boots" then "I wear my boots when it is raining".


and not: if "it is raining" implies "it is raining", and if "it is raining" implies "I wear my boots" then "I always wear my boots".

Deduction rule

I don't understand your remarks.

I use inference rules which you can read in any textbook about logic. The most prominent ones are the following:

                         C,a |- b              C |- a    C |- a ==> b
--------------           ------------          ----------------------
C,a |- a                 C |- a ==> b               C |- b

There is always a (possibly empty) context C which is the set of assumptions. The first one reads "In any context C augmented by the assumption a you can prove a". In Albatross this inference rule is exploited in the proof


The second one is the deduction rule which states: "If you can prove b assuming a then you have proved the implication a ==> b". In Albatross the deduction rule is used in the following proof:

    a ==> a

The third one is the modus ponens rule, which states: "If you can prove "a" and "a ==> b" in the context C, then you can prove "b" in the same context". In Albatross such a proof reads

    a ==> b

I have left out the variable declarations because all are of type BOOLEAN.

You see that there is no magic involved here. Just the application of fairly evident rules.
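For comparison, the same three rules surface directly in any Curry-Howard based system; a sketch in Lean:

```lean
-- Rule 1 plus the deduction rule: assume a, conclude a, discharge to a → a.
example (a : Prop) : a → a := fun ha => ha

-- Modus ponens: from proofs of a and a ==> b, obtain a proof of b.
example (a b : Prop) (ha : a) (hab : a → b) : b := hab ha
```

The context C of the inference rules corresponds to the hypotheses in scope, and discharging an assumption corresponds to forming the function `fun ha => ...`.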

Rule no. 1

I think I misread your documentation then, as I read it as you were taking:


as a proof of "a". However it looks like you are proving "a -> a" from it, which is fine as that's effectively the same thing. So:

    a ==> a

is fine but:


Would not be. Although I might still be getting it wrong as I find "ensure" a bit of an odd choice. Perhaps "implies" would be better?

    a ==> a

But perhaps you are doing something different?

Now it seems that we are

Now it seems that we are talking about the same things.

And evidently: Given "a ==> a" there is no chance to infer "a" from it.

The choice of the keywords: I use "require" and "ensure" clauses in proofs and in procedures. The require list is the set of preconditions, the ensure list is the set of postconditions.

For procedures you can read: The procedure requires that the preconditions are satisfied on entry and ensures that the postconditions are satisfied on exit.

For proofs you can read: The proof assumes that the preconditions are valid and guarantees the validity of the postconditions under these assumptions.

I do not want different keywords for procedures and proofs.

I reuse the same keywords for functions. Partial functions require that the preconditions are satisfied on entry and guarantee that a value is returned which satisfies the postconditions. In contrast to procedures, for functions it is additionally required that the postconditions specify a return value which exists and is unique.

If you propose a pair of keywords which are meaningful in all three cases then I might reconsider my choice of keywords.

Pre and Post Conditions

I really just wanted to check my interpretation of 'ensure' was correct, there's not really any need to change it in that case. I prefer to think in terms of logic, rather than pre and post conditions. The rules are reversible and don't really have a time-order, so pre and post just seems a bit too imperative for logic. I think this may highlight the difference in our approaches. I am starting from logic and its equivalence to the type system of a pure functional language, and then the imperative code has a monadic type.

I agree that the names pre-

I agree that the names pre- and postcondition give the impression of a time order. But note that the keywords are "require" and "ensure" and these don't impose a time order. The require clause gives a list of conditions under which the expressions in the ensure clause are valid.

In my opinion this reads well for proofs.

The term pre- and postconditions should be used only for procedures.

Classical versus Inconsistency Robust Logic

The general form of the chaining rule is:


The following rule is valid only in classical logic:


because implication (i.e. ⇒) in Inconsistency Robust logic is much stronger than inference (i.e. ⊢) and in fact:


See Inconsistency Robust Logic

I have not the slightest

I have not the slightest idea what you are talking about.

Are you saying that classical logic is not adequate? But why?

Classical logic makes errors reasoning about inconsistent info

Classical logic makes errors reasoning about inconsistent information.

See Invalid classical proofs using inconsistent information

Classical logic inappropriate?

You did not answer my question. I have chosen classical logic as the reasoning base for the proof engine in Albatross. Is this choice wrong for this purpose and why?

I don't know what "Inconsistent Robust Logic" is and I have never heard of it. Most theorem provers / proof assistants use either constructive logic or classical logic. To the best of my knowledge none has chosen the logic you propose. Why?

Classical logic appropriate only for a small number of axioms

Classical logic is appropriate only for a simple theory having a small number of axioms with strong evidence of consistency.

Practical Inconsistency Robust Direct Logic has only recently been invented for which inference engines are being developed. There is some work on previous less practical paraconsistent logics that are not fully inconsistency robust.

law of excluded middle

Hewitt champions a logic for cases where the law of the excluded middle is wrong. (And it is often wrong when definitions are sloppy, as in many things defined in informal language. Vagueness inverts poorly because it remains vague.) Edit: just explaining where Hewitt is coming from.

Excluded middle OK for classical logic; not inconsistency robust

Excluded middle is OK for classical logic, but not for inconsistency robust logic.

See Excluded Middle cannot be part of Inconsistent Robust Direct Logic

The obvious question is why

The obvious question is why one would want a logic to be inconsistency robust, and the answer that leaps to mind is that you'd want it if the information you're reasoning from is suspect. This would make sense to me, but I've not noticed anything from Hewitt to suggest this is what he has in mind. One can imagine situations where one would want to reason from suspect information; however, it is not at all evident that static program verification would be one of these situations.

Complexity and scalability require Inconsistency Robustness

Complexity and scalability require Inconsistency Robustness. The only way to be sure of consistency is to keep an information system small and simple.

self doubt as a form of discipline

Confirmation bias is a problem in self-assessing one's certainty of conviction in a thing. A programmer can say "this code has no bugs" and a mathematician can say "this definition is not vague", and yet both can be wrong. Being wrong is so common that programmers get used to it, so it stops being embarrassing, and it's in no way rude to tell someone they are wrong, because it's a reasonable hypothesis.

(If you play video games a lot, at first it may be upsetting to get killed, but eventually you just say whatever and keep going when games merely make you back up a bit. You can get so blasé about dying in games, you no longer remember how shocking it was at first. Then it's like untied shoes.)

Around the mid 80's, a professor at a dinner told me how disconcerting he found the reactions of recent students to being told they were wrong: they thought it was rude instead of normal discourse. They had been told everyone's opinion was equally valid because everything was relative, and there were no more absolutes, like being wrong. It put the students in a froth, like being told they were idiots, as opposed to merely mistaken. They went to the dean about it: we demand he stop saying we're wrong. Many folks with gray hair find this attitude astonishing, that correction is rude.

It's better to be amenable to being wrong, so it's easier to find out. Perhaps inconsistency robustness is an attempt to tickle contrary evidence that might otherwise be hidden away by bias.

Inconsistency is unavoidable in large systems

My interpretation of what Hewitt is saying: Inconsistency is unavoidable in a large distributed system and this gets worse as systems scale up. It is not that the information is "suspect" but that the same bits of information may reach different parts of the system at different times and in different order. So these independent parts may be working with mutually inconsistent data.

To make sense of how such a distributed system as a whole may behave, one needs something more than first order logic. I have done no more than quickly scan a paper on IRDL so I could be all wrong. But the idea is appealing. We know from experience that consistency protocols in distributed systems get quite complicated and don't scale well. So why not drop the idea of global consistency and see where we land?

Not sure what this has to do with static program verification though.

Missing the point

Inconsistency may indeed be practically inevitable when dealing with a lot of data, but if you're dealing with data then you are, just as I said, not certain in the first place. That's the point. Mathematics is concerned with things that are true in the Platonic realm, not allegations about what is true in the physical universe. Hewitt is apparently not interested in reasoning in the Platonic realm, which makes some sense since the evidence suggests he's not well equipped to do so, but he denies the possibility that others might have the ability and valid reason to do something he isn't interested in doing.

re: data and programming

Programming is not especially concerned with the Platonic realm.

Programs are data, and data is a (sometimes trivial) program. Some PL designs embrace this to a greater extent than others. Mine does, too, contributing to my interest in visual DSLs. Programs are also updated on a regular basis, which creates many consistency concerns (API versions, dynamic software update, serviced objects, live coding, etc.).

Inconsistency seems inevitable when dealing with a lot of program at any large scale, whether it be time (updates) or space (distribution) or number of contributors or quantities of code. I can appreciate the role of logics and programming models robust to inconsistency at large scales.

OTOH, I think Hewitt takes a much more extreme position, that we should embrace inconsistency robust models even at very small scales. I believe this abandons a lot of useful middle ground, such as having islands of internal consistency, and protocols or bridges that are consistent for some short-lived periods. Works on Albatross, Agda, Idris, ATS, Coq/Bedrock, and similar can do a lot of good contributing to those islands and bridges.

Together, a large number of consistent theories is inconsistent

Patching together a large number of consistent theories results in a system that is massively inconsistent.

That's an interesting claim.

But I think it would depend on how you patch things together. E.g. if you track who believes what (perhaps with a modal logic) then you don't automatically gain inconsistency from composing subsystems containing conflicting beliefs.

Camouflaging inconsistency behind "He said, she said."

Of course, it is possible to camouflage inconsistency behind "She said, he said."

compartmentalization, not camouflage

We aren't hiding the inconsistency. If anything, we've gained a handle to work with and discuss potentially inconsistent subsystems, and hence a basis for bridging them.

Inconsistency robust logic for "She said. He said."

Inconsistency robust logic is needed for reasoning about inconsistencies in "He said. She said."

I don't think it is

Proof by contradiction works very well in dealing with people who are wrong. You suppose that what they say is true, reach a contradiction, and then conclude that something they said was wrong. But I have to get back to work, so this will be my last post in this thread.


If you assume something is true, and then prove it leads to a contradiction, you have not proved it is false. It is possible that assuming it is false will also lead to a contradiction.
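For reference, the asymmetry between the two directions shows up in a proof assistant; a sketch in Lean, assuming a consistent set of premises:

```lean
-- Deriving a contradiction from a proves its negation, even intuitionistically:
example (a : Prop) (h : a → False) : ¬a := h

-- Deriving a contradiction from ¬a only yields ¬¬a ...
example (a : Prop) (h : ¬a → False) : ¬¬a := h

-- ... and recovering a from ¬¬a is exactly the classical step:
example (a : Prop) (h : ¬¬a) : a := Classical.byContradiction h
```

The last step is what intuitionists reject, and when the premises themselves are inconsistent even the first step stops being informative, since everything is refutable.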

Case studies of inconsistent information systems


Do you have a specific example?

PS. There is a famous example, "Catch-22" by Joseph Heller :-)

See Catch-22 in Inconsistent Robust Direct Logic

All witnesses say things that are inconsistent

In a complex situation, all witnesses say things that are inconsistent. Trying to figure out what happened can be incredibly difficult. Reasoning using Proof by Contradiction can be a helpful tool.

Degrees of belief and provenance

Addressing inconsistency through the rules of inference seems wrong to me. If you find an inconsistency, you generally want to root it out and eliminate it, not embrace it and keep on computing.

"He says" and "she says" don't just mask inconsistency. They are useful for tracking why you hold beliefs so that you can do the rooting when you find inconsistency. This also allows you to estimate how strongly you might want to believe in a particular logical conclusion.

Such capabilities seem important for dealing with inconsistency. Just throwing out certain inference rules seems like a simplistic solution.

Inconsistencies cannot be eliminated from complex info systems


Inconsistencies cannot be eliminated from complex information systems. In fact, most of the inconsistencies cannot even be specifically enumerated.

Microsoft infamously tried your approach by stopping with the "blue screen of death" when Windows found an internal inconsistency. These days Windows can often recover: instead of the blue screen of death it discreetly notifies the user that it has "recovered from a serious problem" and continues execution without rebooting.

Classical Direct Logic for small, consistent axiom systems


Classical Direct Logic is for small, consistent axiom systems.

In computer science, first-order logic is inadequate even for small, consistent axiom information systems.

See Classical Direct Logic

PS. It weakens your case to use ad hominem attacks :-(

Inconsistency Robustness for large software systems


Inconsistency Robustness is required for reasoning about large software systems because of inconsistent specifications and inconsistencies between implementations and specifications.

¬¬Ψ is equivalent to Ψ even if Ψ is ambiguous

¬¬Ψ is equivalent to Ψ even if Ψ is ambiguous.