most water-tight real-world language implementation?

As a long-time developer, I've created more bugs than is even remotely funny, so I'm not trying to make this a complaint or a witch hunt. Rather, I would really like to know: how realistic is it to avoid bugs in the 'fundamentals' (relative term) of a programming language and still have a language that is usable in the real world? Testing and modeling are the answer, I assume, but who has really applied them rigorously in the development of a programming language? It scares me that the tools we're supposed to build on might not be hallowed ground from a quality standpoint. (Yes, I know there are good reasons to trade off quality vs. speed.) I feel like there's something of "obviously no bugs" vs. "no obvious bugs" here, mixed in with the question of how the programming language affects the programs written in it: the simpler the programming language, the more we're just shifting the bugs out to all of the programs?

I am trying this

To the extent that, yes, I guess, when finished it'll be one of the cleanest implementations of a GPL.

That doesn't mean there won't be bugs for the coming few months or so, but once those are solved -- yup, essentially bug free.

how do you accomplish that?

testing? model checking?

I especially liked Meyer's take on that

On that sort of thing, I can't help but remember the points B. Meyer made in his "Eiffel -- The Language" book, especially about what a genuinely difficult task it is to design, properly and with much care, the essential features of a language (i.e., not just the design goals, but also the rationale behind them).

And all that, of course, comes well before the language's creator goes on to elaborate the semantic aspects and the syntax.

It seemed to me that, among other things, an important conclusion was that seeking orthogonality and expressiveness through simplicity in the language's design (vs. feature bloat, surrendering to clientelism, etc.) is indeed not an easy path to take, and will require much effort anyway.

Do you mean a verified

Do you mean a verified language implementation? I'm not sure of everything that's out there right now, but if you follow the work of Adam Chlipala, Greg Morrisett, Chris Hawblitzel, David Walker, Xavier Leroy, etc., we're getting close to 'real' verified languages. More generally, anyone working on the POPLMark challenge.
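
For a feel of what "verified" means concretely, here is a toy sketch in Lean 4 -- entirely invented for illustration, not taken from any of the cited projects -- of a compiler from arithmetic expressions to a stack machine, with a machine-checked proof that compiled code computes exactly the value of the source expression. The cited work does this for realistic languages and machine models, which is where the difficulty lives.

    inductive Expr where
      | const : Nat → Expr
      | plus  : Expr → Expr → Expr

    def eval : Expr → Nat
      | .const n  => n
      | .plus a b => eval a + eval b

    inductive Instr where
      | push : Nat → Instr
      | add  : Instr

    -- Run a program against a stack; an ill-formed 'add' is a no-op.
    def exec : List Instr → List Nat → List Nat
      | [],              s           => s
      | .push n :: rest, s           => exec rest (n :: s)
      | .add    :: rest, y :: x :: s => exec rest ((x + y) :: s)
      | .add    :: rest, s           => exec rest s

    def compile : Expr → List Instr
      | .const n  => [.push n]
      | .plus a b => compile a ++ compile b ++ [.add]

    -- Key lemma: compiled code pushes exactly the source expression's value.
    theorem compile_ok (e : Expr) : ∀ rest s,
        exec (compile e ++ rest) s = exec rest (eval e :: s) := by
      induction e with
      | const n => intro rest s; simp [compile, eval, exec]
      | plus a b iha ihb =>
        intro rest s
        simp [compile, eval, List.append_assoc, iha, ihb, exec]

    theorem compile_correct (e : Expr) : exec (compile e) [] = [eval e] := by
      simpa [exec] using compile_ok e [] []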

A big stumbling block is connecting to machine models -- assuming some assembly code semantics, building a verified OS/VM TCB, etc. However, from an outsider's perspective, getting unoptimized languages implemented with end-to-end verification seems like just a matter of time -- say 5 years -- and it's now largely a question of how much we want to weigh gruntwork** against tricks (with the intuition that shorter checkable/interactive but manual proofs are more trusted than long ones).

Another is verified optimizations -- but then again, I think work like sketching/synthesis plugs in well here. Supporting parallelization optimizations is one of the worst things for verification, but there is surprising progress here, and part of me suspects that this is the only way we can really get a lot of optimizations in there anyway (GIL anyone?).

**This is not to belittle the researchers but a comment on the astonishing amount of regular progress being made over the past decade: you can now count on many POPL/PLDI papers being about verified implementations of otherwise established features.

re: POPLMark

I've looked at POPLMark before, but it was a while back -- certainly exciting, hopeful stuff, especially with a purported complete solution already! But I wonder when/how those results will become mainstream?

I have some NDA issues here,

I have some NDA issues here, but, again, the important thing is the question has shifted from 'if' to 'when', suggesting it's increasingly a prioritization and aesthetics issue now (... the SOSP work, at a meta level, shows that if you have a big team, you can do it). There are still problems, obviously, but a lot of pieces are coming into place.

E.g., I'm working on declarative specifications of CSS, Arjun Guha has a TR on the mechanized op semantics of JS, etc., and the previously posted projects are coming out from the other (lower) end -- in 5 years, I wouldn't be surprised to see an end-to-end browser verification project that puts the pieces together. Heck, when you factor in managed browsers and stuff like NaCl and Chrome, and maybe even DieHard and Rinard's failure oblivious computing, the import of much of this decreases.

scary world

It scares me that the tools we're supposed to build on might not be hallowed ground from a quality standpoint.

Get used to it: we live in an imperfect world where anything could break. Formal verification is (and always has been) very expensive, and often impossible in practice, so most people put their eggs in the testing basket.

This is not really that bad. A language like Scala, which requires a pretty big compiler, is still mostly usable even with the occasional bug. Why? Because the bugs are usually catastrophic in their effect, and so are very easy to spot and diagnose as a problem with the compiler. This is very important: if a bug in the compiler caused your program to behave only slightly differently, you might never know where to look.

There was some research a while back at HP about building a computer system out of very unreliable parts (perhaps extremely overclocked), using redundancy to make up for unreliability. This is also a common strategy in aerospace (2+ redundant systems of completely different technologies). If you really care about reliability in a PL, it might make sense to write your program twice using two different languages, then compare the results.
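
As a minimal sketch of the compare-the-results idea (the two "implementations" of the same task below are invented stand-ins for illustration):

    # Two independently written implementations of the same computation;
    # any divergence between them is a red flag for a bug in one of them.
    def sort_v1(xs):
        return sorted(xs)              # implementation 1: library sort

    def sort_v2(xs):                   # implementation 2: insertion sort
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    def check(inputs):
        for xs in inputs:
            a, b = sort_v1(xs), sort_v2(xs)
            if a != b:
                print("divergence on", xs, ":", a, "vs", b)

    check([[3, 1, 2], [], [5, 5, 1]])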

Heck, alpha particles might

Heck, alpha particles might flip some bits, so we should also use a fault-tolerant assembly/calculus (http://portal.acm.org/citation.cfm?id=1250741) and GC (http://prisms.cs.umass.edu/emery/index.php?page=diehard) for those languages :)

re: write it twice

I get the half-joke, half-seriousness. But wasn't there research indicating that it doesn't work, simply because humans tend to gravitate toward too-similar approaches? I do not know how to find out where I read that. :-(

I was kind of being serious.

I was kind of being serious. If reliability is really really important to you (e.g., you are writing code for the 787), there is going to be some sort of redundancy plan in addition to testing and formal verification.

If you only care about language implementation reliability, you can write it twice, since... you are only worried about language implementation mistakes vs. your own mistakes. Also, I forgot to mention: if you find a mistake, you might have to write it a third time in another language so you can figure out which language implementation is wrong (unless it's easy to tell).
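
A sketch of that tie-breaking idea, using three hypothetical interpreter commands for the same language (the command names are made up): run the program under all three and treat the odd one out as the prime suspect.

    import subprocess
    from collections import Counter

    IMPLS = ["impl-a", "impl-b", "impl-c"]   # hypothetical commands

    def run(impl, program):
        r = subprocess.run([impl, program], capture_output=True, text=True)
        return r.stdout

    def localize(program):
        outputs = {impl: run(impl, program) for impl in IMPLS}
        # The most common output wins the vote; dissenters are suspects.
        common, votes = Counter(outputs.values()).most_common(1)[0]
        if votes == len(IMPLS):
            return []                        # all agree: no evidence of a bug
        return [i for i, out in outputs.items() if out != common]

    print("suspect implementations:", localize("test_program.src"))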

One of the best experiences of my early career: I wrote a Java bytecode verifier at the UW for a research project (with Emin Gun Sirer and Brian Bershad, fairly recognized systems researchers). We were able to fix all of our bugs and find bugs in other bytecode verifiers because... we were able to test against the Sun and Microsoft implementations. We developed simple scripts to automatically create lots of test cases, and then mined the results for discrepancies -- some of which turned into things like New York Times articles.

This was much more effective than formal verification at the time, though admittedly very shallow (our best success was inserting one-byte flaws into valid but huge class files).
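
A sketch of that kind of differential fuzzing (the verifier command names below are hypothetical stand-ins for the Sun, Microsoft, and home-grown verifiers):

    import random
    import subprocess

    VERIFIERS = ["sun-verify", "ms-verify", "our-verify"]  # hypothetical CLIs

    def accepts(cmd, path):
        # A verifier "accepts" a class file if it exits with status 0.
        return subprocess.run([cmd, path], capture_output=True).returncode == 0

    def one_byte_mutant(data):
        # Flip one byte of a valid class file, as described above.
        i = random.randrange(len(data))
        return data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]

    original = open("Valid.class", "rb").read()
    for trial in range(1000):
        with open("Mutant.class", "wb") as f:
            f.write(one_byte_mutant(original))
        verdicts = {v: accepts(v, "Mutant.class") for v in VERIFIERS}
        if len(set(verdicts.values())) > 1:   # implementations disagree
            print(f"trial {trial}: disagreement {verdicts}")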

John Knight and Nancy Leveson

You might be referring to a paper (and a controversy spawned by it) by John Knight and Nancy Leveson. The title is "An Experimental Evaluation of the Assumption of Independence in Multi-Version Programming", and it appeared in IEEE Transactions on Software Engineering, Vol. SE-12, No. 1, January 1986.

In that paper, the stochastic independence of failure behaviour that multi-version software was assumed to have is shown not to hold. A naive approach to N-version systems cannot, by itself, justify a claim of increased reliability.

On the historical side: that is a long time ago, and as one can see, the myths it addresses are still alive in the heads of many of us. It seems that the ensuing controversy caused bitterness in the fault-tolerance community. The second author is now a Professor of Aeronautics and Astronautics at MIT, "bowing out gracefully (and bloodied) from the software fault-tolerance community".

Re: Leveson

Good sense of humor on her MIT home page; cf. "never to do anything like it again" and "Because people keep reinventing n-version programming although they should know better..."

If you really care about

If you really care about reliability in a PL, it might make sense to write your program twice using two different languages, then compare the results.

Most smart C++ programmers I know who write Ruby code because they're sick of C++ test against JRuby, Ruby, and IronRuby. If you were to take the union of these three and use all the "Ruby language programs" in existence that have unit tests, you'd have a friggin' huge test suite.
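
A sketch of that cross-implementation testing (this assumes all three interpreters are on the PATH; "ir" here is an assumed name for the IronRuby launcher):

    import subprocess

    RUBIES = ["ruby", "jruby", "ir"]   # MRI, JRuby, IronRuby (assumed names)

    def run_suite(test_file):
        # Run the same Ruby test file under each implementation and compare.
        results = {}
        for rb in RUBIES:
            r = subprocess.run([rb, test_file], capture_output=True, text=True)
            results[rb] = (r.returncode, r.stdout)
        if len(set(results.values())) > 1:
            print("implementations disagree on", test_file)
            for rb, (code, _) in results.items():
                print(" ", rb, "exit status:", code)
        return results

    run_suite("my_unit_tests.rb")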

Haskell

IIRC, there is a verified Haskell (subset) -> C compiler, which has been used to build a (small) verified web service. Well, verified if you assume the C -> assembly conversion is correct. And if you assume that the processor is correct. And if you assume that the disk and network drivers are correct. And if you assume that no one surreptitiously corrupted any of those implementations to their benefit.

That's as close to water-tight as anyone has been willing to pay for.

--Dave Griffith

Ioke

Ioke, a new JVM language, lists as one of its features "Developed using TDD". I have no experience with it, nor have I looked at it in enough detail to know how rigorous that is.

That's pretty much the norm

That's pretty much the norm for compilers -- the folks I cited are trying to move away from testing and towards verification (and by verification I don't mean incomplete-in-practice software model checkers).