The Origins of the BitC Programming Language

The process of programming language creation is a subject of too little reflection and retrospection. Newcomers to the field (including, in some measure, us) regularly propose new languages without quite understanding what they are getting into or how big the task is.
... This paper describes the motivation and early design evolution of the BitC programming language, making our excuses in hindsight and providing a compilation of some of the things that we learned along the way.
... The ML module system [18] is fully understood only by David MacQueen, and only on alternating weeks. The Scheme module system [8] required the combined brilliance of Matt Flatt and Matthias Felleisen (and six revisions of the language standard) to achieve. From the perspective of a mere operating systems weenie, it's all rather discouraging, but what is the reluctant programming language designer to do?
... The real reason we stuck with the LISP-ish syntax is that we didn't want to be fiddling with the parser while designing the language, and (to a lesser degree) because we didn't want to encourage programmers until we thought the language was ready.

The Origins of the BitC Programming Language (Warning: Work in Progress)


Good

This is timely. And well written.

Four comments, I guess directed at shap:

  1. Shapiro's group at Bell Laboratories had built the first commercial application ever constructed in early C++, which provided an opportunity to watch Bjarne's efforts at relatively close hand — In the light of comment four below, what do you make of Bjarne's complaints about how the referees he has encountered since returning to academia have not taken the problems of industrial systems programming seriously?
  2. our comment above about needing a more general solution for safe systems code construction at varying levels of automatable confidence remains (in our view) valid — I agree, but I think this claim would scan better if you had provided a succinct characterisation of the domain of system programming;
  3. Linux isn't a well-structured system — Likewise for the class of well-structured systems;
  4. It is noteworthy that none of this effort was deemed fundable by the National Science Foundation (which is to say: by the academic programming languages community) — It is interesting, isn't it? Especially given the interest shown in eros/coyotos in the wider hacking/LtUish community, as well as the cited commercial interest.

Well, if there is sufficient

Well, if there is sufficient commercial interest to motivate this R&D, is there really much need for grants?

Commercial PLDI

From what I gather, it seems to be hard to make a return on investment in PL development unless you are a firm that sells hardware (e.g. Nvidia) or whose business model depends upon getting platforms adopted (e.g. Google). I guess commercial interest doesn't actually generate non-trivial amounts of money for shap's group.

Charles is right

Commercial interest in BitC is not generating revenue for anyone. Five years from now, when we can sell training in methods and design processes, we might make money off this. Not today.

This is potentially perilous, but...

This is probably unwise, but I'll attempt to answer Charles's questions:

  1. The effect that Bjarne describes is real, but I don't think it's a matter of bias on the part of academics. Two things seem to be going on:

    1. Most academics have no exposure to industrial issues, so the question of taking those issues seriously is largely moot. Few academics write code, and fewer of those write any large amount. It's a good way to lose a tenure case.

    2. In the academic community there has been a long-term focus on languages derived in some sense from logical frameworks. It is not unreasonable to say "Let's not fail by trying to study too many things simultaneously". That said, in my opinion it is now time for the rubber to meet the road again, and some elements of the academic PL community seem uninterested in that.

  2. Fair enough. That's why this is a work in progress. Seeing it surface on LTU was a mild shock, but that's the price of doing one's research naked and in public :-).

  3. "Well-structured system" isn't a term that lends itself to rigorous definition, but there are some litmus tests. One is the presence or absence of stated design principles and design invariants. Linux has social principles, but few technical principles, and no invariants. Further, Linux has no orderly dependency structure that can be clearly articulated. There is an alleged dependency structure, but it is not honored in practice.

  4. Having sat on my share of NSF committees — a process that I found singularly depressing — I think my summary on NSF is this:

    1. It is good to have a diversity of research funding criteria, and the consensus criterion is a good one to have represented in the mix. With the departure of DARPA from CS research funding, and the slow decline of NIH, we face a serious failure of diversity in selection, and American research is already suffering very badly.

    2. Sub-critical funding in many places is not an effective way to support research at any stage (exploratory, prototype, or transition). I have heard this labeled "manure spreading", and the comparison is apt.

    3. Consensus processes are conservative by their nature. They are not good at selecting disruptive (meaning "not the fad") research for funding. Fortunately, they make mistakes, so some good things get funded anyway.

  5. Even though NSF is moving to fill the "DARPA gap", I don't think they will succeed. The NSF selection process doesn't mesh well with efforts that require mid-sized groups and extended time periods.

A very senior "systems" colleague recently confessed that he was thinking seriously about leaving academia because he doesn't see any opportunities for interesting research anymore. He doesn't want to oversee single-student work, he doesn't want to run a center and become a research administrator, and he sees no funding opportunities in the middle.

And in many respects, that's not a bad summary for how my tenure case went down: through a combination of academic disinterest in engineering and a bad transitional state in the funding environment.

Just to be clear, there are quite a number of academics who do grok these issues. My comments should be taken as reflecting what seems to happen in the community consensus.

Err...

Seems to ignore a lot of literature -- e.g., the use of Modula-3 in SPIN and various systems projects at DEC/SRC. Also, FoxNet is not the only interesting systems project to use ML. The work by Birman's group at Cornell on implementing (and verifying) various group communication protocols is certainly relevant if FoxNet is. Finally, the advances that we see in the final form of BitC aren't really tremendous leaps. For instance, polymorphism and mutable local state have been integrated in many other languages (cf. Clu), the initialization problem is well recognized and dealt with by, for instance, the "definite assignment" rules of Java (and considerably generalized in Cyclone), etc.

And in the end, I'm left wondering what the goal is. It started off wanting to support verification, then lowers the goal to a "safe language" but with a formal semantics, and then admits that there's no formal semantics.

Formal semantics

I don't think shap & co. have actually given up on formal semantics. Not having put together a formal semantics is not the same thing as not being in a position to do so.

That paper is an informal

That paper is an informal note and a work in progress, so complaining that it's not yet complete and not in academic normal form is not really on target. It is a great amusement to me that academic publication venues complain about the absence of experience papers and then when they get them complain that they aren't in academic normal form.

SPIN's use of Modula-3 was important, but it was underdocumented, and remains shrouded in proprietary impediments even today.

The Cornell work is interesting, but there are no comparative benchmarks for group communication protocols, so it isn't clear how to do a comparative performance evaluation of that work. Clu, so far as I know, doesn't have a sound and complete inference mechanism. BitC handles non-local mutable state as well, and I don't know any other language that does that and has sound and complete inference.

Java's definite assignment rules aren't particularly relevant to global initialization order. Java resolves that issue by restricting globals to scalar constants. Cyclone is a more convincing argument, but most of its safety is dynamic rather than static. That didn't meet our requirements.

And as to semantics, I seem to recall a conversation with you in which you stated that Cyclone once had a formally hand-captured core semantics, but that you didn't maintain it. Be fair here. BitC today is about where Popcorn (a Cyclone predecessor) was in 1995, and we have a grand total of 1.25 people on this (and falling). How many of your complaints could have been answered by Popcorn at a comparable stage of maturity? We haven't captured a non-core semantics yet in automated form. We'll get there.

Finally, if it's all been done, please point me to a stable implementation of a viable language that I can use in production. As the paper makes clear, we didn't want to be language designers.

Greg: this wasn't worthy of you. Commercial utility isn't, and should not be, the mission for Cyclone or for most academic work. There are valid objectives in the world beyond academia, and you know that. This isn't the final form of BitC, but setting that aside, it might be easier to see the advances if you read the specification. Your comment on definite assignment suggests that you may not have really stopped to do that.

That was a bit harsh

I realized this afternoon that my response to Greg was a bit harsh. Not inaccurate, but harsher than I really intended.

It was an explicit goal of the BitC effort not to innovate in the academic sense in the area of language design. We consciously chose to limit our innovation effort to finding a simultaneous solution for polymorphism, mutability, and prescriptive unboxing in a sound and complete type inference mechanism. Go back over the literature and you can find solutions for any two of those, but not for all three. We also ended up pushing for immutability rather than constant aliasing. Several senior academic PL people (Greg was not among them) misunderstood the complexity of that inference problem, declared hand-waving wrong solutions, and were distinctly nasty and dismissive about the work they didn't understand because they hadn't troubled to actually think about it.

In every other area possible we sought to borrow in preference to invention. The other focal issues in BitC as a language were practical issues. Control over representation. An implementation that admitted link compatibility with C (which turned out not to be trivial). And I think we have gotten to that (or will have by December or January).

The mechanized formal capture part of the BitC work hasn't really started yet. The type system is formalized, so we do have a bit more than the paper implies, but that's not a complete capture and it is not yet mechanized. It is likely that the formal mechanization phase will generate innovations in the academic sense, but I frankly don't care. I'm no longer an academic, which means that I now have the burden of having to fund my work on an ROI basis but the freedom to build things that are pragmatically useful and engineered to completion. At the end of the day I'm an engineer, not a scientist. If we manage to get to a reasonable and usable level of function built around BitC in a verification environment without ever generating another academic innovation I will be amply content, and so will a lot of users.

Academia has generated many good ideas about languages, which is why those ideas existed for BitC to borrow. What hasn't happened in academia is comprehensive integration, and that's what we intend to accomplish with BitC over time.

We're not done yet by a long shot. R1 of BitC is hardly the final state of the language. If anything, it's a first inch-pebble. The interesting part is only now beginning.

Greg, or anyone: if there is prior work that you feel we have failed to credit in our core publications, that is something that we would very much like to fix, and we would very much welcome pointers and explanations of relevance.

Is BitC being used to build

Is BitC being used to build any large systems projects?

You seem to be taking a systems + security approach to the language design (a la Singularity or Joe-E), in which case that seems to be the metric of success (as it would be for most PL projects that are focused on finding the right set of features). At first I assumed that was the case, but now I'm not so sure. This has been distinctly missing from many of these other PL-driven projects -- they throw together a protocol or some other small piece of code, but it's unclear how they actually do in the large. Arguably, a higher-level language makes sense in the large (a la Singularity).

I'm curious as to where BitC falls into this space. I admit to not having closely sat down with all of the literature, but I didn't see it in the beginning of this overview, and it caught me off-guard :)

The Coyotos kernel has been

The Coyotos kernel has been co-designed with BitC. While the current implementation is in C (because we needed something running), it is written in a subset of C that has a direct translation to BitC.

Very Thoughtful...

...but I didn't find your original response to be too harsh at all, for what that's worth.

I very much look forward to both BitC and Ynot's continued evolution.

I was asking for it...

No, your original response was right. I was out of line here, so my apologies. In my defense, I meant my comments to be a reflection on the state of the document, not the project or the authors.

In service to the document: I do think that when taking on the design of a new systems language, one must look seriously at Modula-3. M3 in particular is a good, coherent, and relatively simple design that has proven workable for many systems programming tasks. And while it may lack features that make it ideal, one has to ask whether extending it to cover these deficiencies is an easier approach than a new design.

It would also be good to compare/contrast a bit more deeply with the C-derivatives that popped up (e.g., CCured, Cyclone, Deputy, D, Cminor, C--, etc.) I think they all hit different points in a complicated space. How is BitC distinguishing itself from these efforts? Yes, sound and complete inference is one dimension, but there are many others, such as porting effort or safety guarantees or portability or performance or ...

By the way, I would argue that Cyclone has an effective (if horribly complicated and ad hoc) approach towards inference that eschews "completeness" in favor of a set of features that go well beyond what BitC offers (e.g., first-class polymorphism, regions, type-and-effects, DML-style constraints, representation sub-typing, existentials etc.) all of which were added because we kept running up against limitations of traditional type systems. Clearly, Cyclone ended up as a big complicated ball of hair, so it's not really a good language design so much as a vector for type system research ideas. Nevertheless, as I watch BitC evolve, I worry that it is heading in the same direction...

Regarding formal models, I loved that the original BitC was coupled with ACL2 and had a goal of supporting strong verification. In fact, I would argue that starting with the mechanized model and working out is much more likely to succeed than doing a language design and implementation, and then trying to construct a faithful model after the fact. In the context of Cyclone, we tried adding a Hoare-style program logic to the language, but had to give up due to sheer complexity. I now think that having to develop and maintain mechanized meta-theory for a language is the most important kind of pressure one can bring to bear on the design. It enables so much in terms of communication and verification, but perhaps more importantly, forces the design to be simple and clean. These days, I would encourage someone starting on a project like BitC to start with Leroy's Cminor or, better, Appel's Concurrent Cminor and think about adding language support (e.g., polymorphism, modules, a decent approach to achieving portability that does not derive from #ifdef, etc.)

So first, thank you for the

So first, thank you for the suggestions. I think we should try to address them, either in this document or a related one. Any pointers on Modula-3 that we should look at in particular? I have the language spec, but insufficient experience with it in practice to feel that I can say anything sensible.

And I agree that extending an existing language is often the best approach. Initially, we thought that we were doing that with ML. Overloading pushed us into type classes, and unboxing pushed us into the initialization issue. It's fair to say that BitC grew in some unexpected ways from this, but oddly enough very few of them had any impact on core semantics at all.

Time will tell, but I am hopeful that we have stopped adding stuff to BitC at the language layer. My sense is that many of the features you identify in Cyclone, when needed, can be handled in a prover or a static analyzer. At some level, it comes down to deciding where to implement them, which leaves us free to leave things out at the language layer. Over time, we'll probably decide that some things should migrate into the compiler, but we definitely see the compiler as just one element of a tool portfolio.

As I say, the core of BitC actually hasn't grown much from its "day zero" form. What I mean by this is that if you look at the language that remains after the polyinstantiator has run, you find that nearly all of the new features have been compiled out. I find this reassuring. We chose polyinstantiation as an implementation technique purely for C compatibility, but it has helped us confirm that the core really isn't growing. The notable recent exception was the introduction of objects and the consequent need to deal with V-tables, but I don't think that is large. The core semantics remains small, and I don't anticipate fundamental difficulties embedding it (or at least, not beyond the ubiquitous "prover hell" issues that go with all such attempts). Better still, I'm hopeful that we can define the reasoning strategy for pre-instantiated BitC by specifying the instantiation rules to the prover, which may simplify things. I hope. We're not quite there yet.

BitC still has a goal of supporting strong verification. What has changed in that respect is the result of several conversations. In C, the only confident positions are strong verification or pretty well naked. Static tools are useful, but there are limits to the confidence they can generate. We came to feel that there was a useful role for a safe strongly typed language by enabling more precise static checking and integrating some of that into the specification layer. This doesn't reflect any abandonment of interest in verification. It's more a sense that there need to be more points on the cost/value curve in formal techniques than "all" or "none". Here again I hope that we don't do innovation except for integration purposes. It would be advance enough simply to get existing techniques cohesively expressed in a single language framework.

As to ACL2, the great strength of ACL2 is the directness of its embedding pun. That is hugely simplifying and powerful, and I don't think we're going to emerge with something as elegant as they have. On the other hand, there are some deficiencies in the ACL2 approach purely from a prover perspective that we have been hitting while we mechanize the confinement verification, so it's not obvious that there is a net loss here.

As in all the rest, time will tell.

And Greg: thanks. It's important to me to acknowledge all of the bits we are borrowing from, and I do appreciate the feedback. If we had understood Cyclone better, and if there had been a prover embedding, we probably would not have gone this way. I hope (but I don't know enough yet to know) that we have arrived at a less ad hoc set of language choices, and that this will help us embed cleanly. As I say, time will tell.

The notable recent exception

The notable recent exception was the introduction of objects and the consequent need to deal with V-tables, but I don't think that is large.

I still don't think I understand the reason for objects. Aren't the benefits of objects already provided by typeclasses? A dictionary-passing implementation of typeclasses seems exactly like the v-tables used for objects, which implies that no expressive power is gained by adding objects.

Unless you're adding objects for the cases where you don't want to polyinstantiate, but then I wonder whether the choice to polyinstantiate can be specified in the typeclass definition itself.

I also think the recently added has-field constraint can be handled via typeclasses, though there is clearly a significant performance gain from just passing a fixed offset instead of performing an indirect function call. Then again, since you're polyinstantiating anyway, is the cost really that great?

Hmm, perhaps I should have sent this to the BitC mailing list... :-)

what are typeclasses?

naasking: Aren't the benefits of objects already provided by typeclasses?

I understand objects and v-tables, which sound like C++, but I don't understand typeclasses and polyinstantiation, which sound like the sort of algebraic approach to languages I always ignore. Are they as abstract as they sound? Or are the details as concrete as v-tables? Can typeclasses be made as space efficient as v-tables?

Incidentally, I've been enjoying all of shap's lucid writing, but I've been lurking without much to say. All the "grounded in practical reality" elements are enjoyable.

Typeclass usually refers to

Typeclass usually refers to the Haskell mechanism for associating a common name or operator with many different types. Typeclasses may also be part of a declaration for a generic function, specifying the necessary operators.
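
As a minimal Haskell sketch of the idea (names here are purely illustrative, nothing BitC-specific):

-- A type class associates one name, 'area', with many types.
class Shape a where
  area :: a -> Double

data Circle = Circle Double   -- radius
data Square = Square Double   -- side length

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square s) = s * s

-- A generic function declares the operations it needs via a class constraint.
totalArea :: Shape a => [a] -> Double
totalArea = sum . map area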

C++ achieves the same effect (one function/class name -> operates over many types) via other mechanisms, namely the static and template-driven forms of polymorphism (as opposed to the object-oriented sort). C++09 will go a bit further in this direction with the use of 'Concepts', which almost precisely match the effect and purpose of typeclasses in Haskell.

The implementation for typeclasses can be made more efficient than for traditional vtables (which require a runtime lookup), but the degree to which it is possible to make the typeclass access more efficient depends upon the degree to which separate compilation is favored over polyinstantiation.

And I'll note that polyinstantiation is hardly "algebraic". It is an implementation technique using a limited form of compile-time partial evaluation (and an associated partial specialization) of functions based on templates/typeclasses/etc.. C++ compilers typically polyinstantiate templated classes and functions (not that there are many viable alternatives in the absence of first-class types).

The implementation for

The implementation for typeclasses can be made more efficient than for traditional vtables (which require a runtime lookup), but the degree to which it is possible to make the typeclass access more efficient depends upon the degree to which separate compilation is favored over polyinstantiation.

Not true; both are late binding (C++ OO vs Haskell type classes). They both have the same run-time costs (conceptually).

Whole program

If you have whole program analysis (scroll down to "Type Classes"), type classes can be compiled without dictionary passing.

RE: marco

Not true; both are late binding (C++ OO vs Haskell type classes). They both have the same run-time costs (conceptually).

Late binding is just one implementation option for Haskell type classes. As a design choice, it has many advantages when it comes to separate compilation and dodging a potential cost in memory footprint. But there isn't any reason that functions relying on type classes cannot be specialized, at compile-time, once for each type to which they are applied. Doing so doesn't even require whole-program analysis (though whole-program analysis would be extremely useful in eliminating duplicate efforts when combined with separate compilation).
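
A hedged sketch of what that specialization amounts to (hand-written here, not actual compiler output):

-- Under dictionary passing, 'double' would receive a Num dictionary at run time.
double :: Num a => a -> a
double x = x + x

-- A specializing compiler (or a polyinstantiator) can instead emit one
-- monomorphic copy per concrete type actually used, resolving the
-- dictionary statically.
doubleInt :: Int -> Int
doubleInt x = x + x

doubleFloat :: Float -> Float
doubleFloat x = x + x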

C++'s virtual method based polymorphism has a different set of forces surrounding it. There is no means to avoid a runtime lookup when dealing with collections of abstract base class objects. This is because each item in the collection may possess a distinct 'actual' type. These forces imply the late binding when working with abstract classes.

For Haskell to encounter a similar problem truly forcing a late-binding implementation, it would need to allow collections of objects typed wholly by typeclasses... e.g. a collection of 'showable' values. Given that Haskell lacks side-effects and supports lazy evaluation, there would be very little value in such collections. At no conceptual loss you could create a collection of 'shown' values - essentially a bunch of lazily produced strings.

What David describes as

What David describes as "compile time instantiation" is essentially what our polyinstantiator does, except it has to be done at program link time. A potentially interesting strategy for dynamic link time instantiation has emerged that looks much simpler than JIT, but that's something for after R1.

With the addition of

With the addition of first-class polymorphic functions, and/or existential types (the latter can be simulated with the former), you can no longer easily specialize away type-classes. You can, but this requires a whole-program analysis and transformation akin to defunctionalization (e.g., turning method selection back into a finite case statement -- something you can do only after the program becomes closed.)

GHC certainly supports this and as a result, it's quite easy and normal to build heterogeneous collections of values paired with their type-class dictionary (e.g., [forall a. Class a => a] though of course, this should be an exists, not a forall.)

And Haskell does not lack side-effects (by a long shot). Indeed, it's one of the better imperative languages simply because it reflects most effects in the types. It's also one of the better ways to do "OO" programming, since the programmer has the choice of factoring out the vtable/type-class dictionary from the collection or pushing it in with the individual objects via an existential (the difference between Class a => [a] and [forall a. Class a => a]). In contrast, "OO" languages do not provide the orthogonal constructs needed to factor out the vtable.
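
A small sketch of those two factorings in Haskell (illustrative only; needs the ExistentialQuantification extension):

{-# LANGUAGE ExistentialQuantification #-}

-- Dictionary factored out of the collection: one Show dictionary for a
-- homogeneous list (the "Class a => [a]" shape).
showAll :: Show a => [a] -> [String]
showAll = map show

-- Dictionary pushed in with each element: a heterogeneous list, one
-- dictionary per element, packed behind an existential.
data Showable = forall a. Show a => MkShowable a

showOne :: Showable -> String
showOne (MkShowable x) = show x

mixed :: [String]
mixed = map showOne [MkShowable (1 :: Int), MkShowable True, MkShowable "hello"]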

Polymorphic recursion

In fact, the presence of polymorphic recursion (which is in Haskell 98) already makes compile-time specialisation without whole program analysis impossible in the general case.
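
A standard illustration (not from the thread): a non-regular datatype whose traversal needs polymorphic recursion, so no finite set of specializations suffices.

-- Each level of nesting holds elements of a "larger" type.
data Nested a = NilN | ConsN a (Nested [a])

-- The recursive call is at type 'Nested [a]', not 'Nested a'; Haskell 98
-- therefore requires the explicit signature, and a specializer would need
-- copies at Nested Int, Nested [Int], Nested [[Int]], ... without bound.
depth :: Nested a -> Int
depth NilN        = 0
depth (ConsN _ r) = 1 + depth r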

This seems unlikely

The instantiation can be performed lazily, in which case whole-program analysis isn't necessary. Or, depending on your point of view, it is being deferred.

Our current implementation simply does a demand-driven instantiation on a whole-program basis proceeding from main(). This is a good practical implementation for kernels, but probably not for application runtimes. What we presently intend for application runtimes is a hybrid approach (which looks straightforward, at least conceptually).

Partial answer on bitc-dev

rys: I put the start of an answer on bitc-dev, which you can find in the archive here. If you are willing to emerge from lurker status for a bit, I'll be happy to expand on that answer as needed.

that informed me a lot - thanks

Thanks for the nice long response; I wrote a longish reply on my blog (that's the long lived permalink — a temp copy appears just now at http://www.briarpig.com/, but it will disappear later).

I didn't see your post here until well after midnight this evening, so I decided to stay up and write back. But I won't check for typos and misspellings until tomorrow. I might also have to edit my tone since I lose a fine sense of how off-base I sound late at night.

BitC objects are misnamed

BitC objects probably ought to be renamed "capsules" to avoid confusion. The thing they provide that type classes do not is a form of existential typing. They have nothing to do with inheritance, and only a tenuous relationship to virtual functions.

Here is the problem: you have two drivers for different disk controllers. All disk controllers have a common interface, but each type of disk controller driver uses different internal state to do its job. You want a way to avoid exposing the type of the internal state at the interface boundary. With care, this can be done with closures, but in some respects that is an awkward solution. What objects/capsules provide is an alternative way to accomplish this. Secondarily (and there is no interesting semantics in this part) they eliminate the need (through v-tables) to carry lots of constant procedure pointers around in each instance.

A conventional first-class procedure object associates some closed-over state (the closure record) with a single procedure. A BitC object/capsule generalizes this to associating some state with multiple procedures, and makes it possible for the user to specify the state record explicitly rather than have it be implicitly constructed by the compiler. But the two are very similar. In fact, BitC procedure objects could (in principle) be built using the BitC object mechanism. The only missing piece is a Cardelli-style representation optimization that hasn't been talked about on the BitC list yet.
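
For readers who want the closure version spelled out, a rough Haskell sketch (invented names; BitC's syntax and actual driver interfaces differ):

import Data.IORef
import Data.Word (Word64)

-- A record of procedures that have already closed over the driver's private
-- state, so the state's type never appears at the interface boundary.
data DiskDriver = DiskDriver
  { ddRead  :: Word64 -> Word64 -> IO Word64   -- disk addr -> mem addr -> bytes
  , ddWrite :: Word64 -> Word64 -> IO Word64
  }

-- One concrete driver; 'FooState' is visible only inside this constructor.
newtype FooState = FooState Word64

mkFooDriver :: IO DiskDriver
mkFooDriver = do
  st <- newIORef (FooState 0x1f0)
  let rd _lba _mem = do _ <- readIORef st   -- a real driver would consult/update st
                        pure 512            -- stub: bytes copied
      wr _lba _mem = do _ <- readIORef st
                        pure 512
  pure (DiskDriver { ddRead = rd, ddWrite = wr })

Here each procedure shares the compiler-built closure over st; the capsule form makes that shared state record explicit rather than implicit.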

All disk controllers have a

BitC objects probably ought to be renamed "capsules" to avoid confusion.

No, I'm not confused about your intent, re: existentials; I just think typeclasses already provide the abstraction you were looking for.

All disk controllers have a common interface, but each type of disk controller driver uses different internal state to do its job. You want a way to avoid exposing the type of the internal state at the interface boundary.

I'm afraid I still don't quite see it. As long as the typeclass parameter is polymorphic, the state remains fully abstract. For instance (I'm not a Haskell developer, so bear with me!):

-- 'a' is the internal state of the driver
class DiskDrive a where
  init :: a -> DiskDrive a
  -- second param:logical disk address, third param: memory address, return param:bytes copied
  read :: DiskDrive a -> Word -> Word -> Word
  write :: DiskDrive a -> Word -> Word -> Word
  teardown :: DiskDrive a -> ()

All code dealing with disk drivers uses this interface, and so is polymorphic over the type of the disk driver's state, 'a'. Am I missing something?

[Edit: ok, re-reading your post, I think you actually want to hide the state 'a' so it doesn't show up at all in the interface to the disk driver, or any code that uses it. Is that correct? Are there some problems I'm missing with exposing the type param as above?]

To answer your very last

To answer your very last question. The code above is nonsensical. It's not just syntactically invalid*, but it is also unclear what you intend. Or rather, your intention seems clear enough to know that what you intend is not what type classes do [and as a side note 'type classes' not 'typeclasses'].

You may want to become more familiar with Haskell and type classes in Haskell.

* Well it could be syntactically valid if DiskDrive was declared as a data type somewhere, but I don't think that is what you intended.

You're right that I haven't

You're right that I haven't programmed in Haskell, and I made an error in translating the Eq type class to reflect the DiskDrive interface. Of course what I meant to write was something more like (ignoring setup/teardown for now):

-- 'a' is the internal state of the driver
class DiskDrive a where
  -- second param:logical disk address, third param: memory address, return param:bytes copied
  read :: a -> Word -> Word -> Word
  write :: a -> Word -> Word -> Word

This application seems straightforward enough: type classes abstract over the representation for a certain set of operations, in a similar way to existentials.

Eq seems a perfect example of this, where code operating on types implementing Eq cannot access concrete details about the representation of the abstract parameter. If I'm still off-base, where am I going wrong?

This is much better, but

This is much better, but it's still not clear how much you intend this to do. This code, for example, does not itself allow you to put a bunch of "DiskDrives" with differing internal state into a data structure, corresponding to having multiple different disk controllers. It also does not itself encapsulate that internal state. One way to accomplish both these things is to use an existential type. This is, of course, exactly what shap said needed to be added and what his "capsules" allegedly add. At this point type classes are not doing very much and in fact using an existential type over a record of functions may well be a better way to go. (In fact, for an interface as simple as the one described above, you can use the existential implicit in closures and you don't even need existential type support in your language.)
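
For concreteness, the record-of-functions option mentioned above might look like this in Haskell (an illustrative sketch, not BitC; needs ExistentialQuantification):

{-# LANGUAGE ExistentialQuantification #-}
import Data.Word (Word)

-- The state type 's' is packed away together with the operations that know
-- how to use it, so it never appears at the interface boundary.
data Driver = forall s. Driver s (s -> Word -> Word -> Word) (s -> Word -> Word -> Word)

readDriver :: Driver -> Word -> Word -> Word
readDriver (Driver st rd _wr) lba mem = rd st lba mem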

I'm no Haskell expert, but

I'm no Haskell expert, but couldn't you do something like

data FooDriver = FooDriver ...
data BarDriver = BarDriver ...

instance DiskDrive FooDriver where ...
instance DiskDrive BarDriver where ...

data DskDriver = forall a . DiskDrive a => DskDriver a

listOfDrivers = [DskDriver $ FooDriver ..., DskDriver $ BarDriver ...]

But perhaps that has some problems I haven't thought about?

That's an existential type

Your DskDriver in fact is an existential type, which is precisely what shap calls a "capsule".

I was going to post this

I was going to post this exact solution yesterday, but realized that BitC's forall is not the same as Haskell's forall, as BitC doesn't have higher-rank polymorphism.

It occurred to me this

It occurred to me this morning that one could also use heterogeneous collections so the type parameters don't need to be hidden. Not sure this is feasible for all cases though.

In any case, my overarching point was that type classes and "objects/capsules" both provide overloading, but capsules additionally provide type hiding. There must be a way to unify them since they're closely related (or to simplify capsules such that they only provide type hiding and not hiding+overloading).

I find the need to hide type

I find the need to hide type parameters to be vanishingly rare, and I typically do something like the following in Virgil:

class Box {
}

class TBox<T> extends Box {
    value val: T;
    new(val) { this.val = val; }
}

Virgil supports polymorphic type tests (<:) and polymorphic casts (::), so casting out of these boxes requires dynamic checks. Interestingly, Virgil does not support static overloading. But this can be achieved with tests:

method f<T>(val: T) {
    local box: Box = new TBox<T>(val);
    if (<:TBox<int>(box)) { ... }
    if (<:TBox<bool>(box)) { ... }
}

This can get a bit tedious, but you get the idea. I've only run into a couple of situations where I really wanted this functionality, and it's not hard to start building some abstraction layers on top of this, e.g. a dispatcher method:

method dispatch<T, R>(val: T, f_int: int -> R, f_bool: bool -> R, f_def: T -> R) -> R {
     local box: Box = new TBox<T>(val);
     if (<:TBox<int>(box)) return f_int((::TBox<int>(box)).val);
     if (<:TBox<bool>(box)) return f_bool((::TBox<bool>(box)).val);
     return f_def(val);
}

Of course, you can go even further and parameterize dispatch with more type parameters, etc.

Fully abstract is the problem

Here is the heart of the problem: we need to carry around a pair consisting of a "pointer to state" and a function taking that pointer (actually, we need multiple functions, but start with just one). Given only type classes, we can assign the abstract type (ref 'a) to that pointer. There are two problems. The first is that abstract is not the same as existential. The type 'a is exposed, and it cannot change within a single instance of a containing structure. That is: the 'a escapes into the outermost FORALL of the containing procedure or structure, where what we need is for it to be fully opaque. The second problem is that the slot containing the state pointer is mutable and must be concrete, and no single concrete choice will suffice.

What capsules do is encapsulate the notion that the concrete type assigned to 'a is a private matter between the capsule and its associated function. If this is sufficiently encapsulated, then the type assigned at 'a does not escape and this leaves us free to re-assign that slot without breaking any rules.
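
Translating that into rough Haskell terms (a loose analogy; BitC's ref types and capsules do not map onto this exactly):

{-# LANGUAGE ExistentialQuantification #-}
import Data.IORef

-- Type-class-only shape: the state type 'a' appears in the slot's own type,
-- so it escapes into the enclosing forall and is fixed for the whole structure.
data Slot a = Slot (IORef a) (a -> IO ())

-- Capsule shape: 'a' is hidden, a private matter between the state and the
-- function that uses it, so a table can mix slots whose internal state types
-- differ, and a slot can be re-assigned without exposing a new type.
data Capsule = forall a. Capsule (IORef a) (a -> IO ())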

Pointers?

Why does this pointer have to be mutable? Why don't you simply represent the value as a tuple of (value, dictionary), and let specialization do the rest of the work? When did pointers get into the mix? This is how Virgil delegates are implemented, and the type of the value attached to the delegate is never exposed.

Has-field is a type class.

Has-field is a type class. There is no lookup for this, just as there is no lookup for any other type class in BitC.

My mistake! I thought it was

My mistake! I thought it was distinct for some reason.

Only because a hack is

Only because a hack is needed to make identifiers a literal class. Because of this the compiler has intimate knowledge of has-field and implements it specially. It is a "closed" type class, meaning that the programmer cannot add new instances.

As to ACL2, the great

As to ACL2, the great strength of ACL2 is the directness of its embedding pun. That is hugely simplifying and powerful, and I don't think we're going to emerge with something as elegant as they have. On the other hand, there are some deficiencies in the ACL2 approach purely from a prover perspective that we have been hitting while we mechanize the confinement verification, so it's not obvious that there is a net loss here.

Could you please elaborate on the deficiencies that you've found?

Not the right venue

This isn't really the right venue for a discussion of ACL2. Suffice it to say that dynamic typing combined with complete functions is not your friend, and in some cases higher-order abstraction makes things much easier. High automation isn't as useful for big things as for small things.

Don't get me wrong here. If ACL2 will solve your problem, it's a heck of a fine tool.

I'll bite...

...switching to Coq, shap? :-)

You're such a tease, Paul.

You're such a tease, Paul. :-)

Yes, it looks that way. The only hesitation at this point is that a lot of related projects seem to be using Isabelle/HOL. We didn't enjoy the Isabelle/HOL experience. In a way that is hard to articulate (and may be wrong), the Coq tactics seem cleaner to us. I guess time will tell.

Ha ha, only serious

It might not even be the case that the existing Coq tactics are cleaner; it may be that it's easier to write your own, in Ltac, to solve your specific problems. Or perhaps you can rely on the work of others along those lines.

Anecdote: when I attended the Coq tutorial at POPL 2008 it was interesting to observe how many Twelf and Isabelle refugees seemed to be there. It was also great to have Adam Chlipala's generous help and to talk to him about his work on things like Lambda Tamer.

Design-for-analysis

I now think that having to develop and maintain mechanized meta-theory for a language is the most important kind of pressure one can bring to bear on the design. It enables so much in terms of communication and verification, but perhaps more importantly, forces the design to be simple and clean.

That's an interesting conclusion. It occurs to me that the same thing could (and should) be said of software design projects in general. Applying some kind of design-for-analysis approach tends to force the design to be clean and simple, because it makes the formal reasoning easier. And let's face it, if a design is hard to reason about formally, how much harder is it going to be to fully understand on a purely informal basis?*

I should note here that I'm not advocating full formal development for every piece of software under the sun. Shap's comment that "there need to be more points on the cost/value curve in formal techniques than 'all' or 'none'" is bang on, in my opinion. A programming language, as a tool that will be used to create many other pieces of software, should be held to a fairly high standard. But software produced with that language will inevitably have varying requirements for cost and reliability, and it'd be nice to have a language that provides support for using the appropriate point on the cost/value curve. On that note, I've been pleased to see several projects pop up recently (Opis is the one that most immediately leaps to mind) that integrate support for a model-checker into the language toolset.


* Of course, that's part of the appeal of functional programming and the minimization of mutable state: it makes both formal and informal reasoning about programs easier.


[Edit: I know I'm probably preaching to the converted here...]

With the benefit of

With the benefit of hindsight, I agree with Greg on this quite strongly. I'm hopeful that we're going to get away with proceeding ass-backwards on this in BitC because we have stuck closely to some prior work that has already been mechanized. But because we didn't do it simultaneously, there is a risk that we will have to revise something as we mechanize. I guess we'll have to see. There are only so many things you can do simultaneously. We felt that co-designing the kernel with the language was an equally urgent validation effort, and we didn't have the resource to do all three parts at once.

A few remarks

1) The stab at Linux isn't very interesting: is any of the *BSD kernels 'well structured'? I kind of doubt it. So this way of describing a limitation in BitC is in poor taste at best.

2) Chapter 13.3, on GC, needs to be reworked, as its structure is weird.

the stab at Linux isn't very

the stab at Linux isn't very interesting: is any of the *BSD kernels 'well structured'?

I doubt they'd consider any non microkernel operating system well-structured, which would rule out every Unix, with a slight possible exception for Plan 9. Where you draw the line for microkernels is subject to debate, but clearly Mach ends up on the "not well structured" side, and L4 and EROS end up on the "well structured" side.

The article also strikes me as a very informal discussion, so I doubt they mean to publish it as anything more than a compendium of their opinions.

Basis for Comparison

Keep in mind that shap and friends are interested in security and verifiability. From that perspective, very few operating systems indeed should be considered "well structured." Certainly not any extant UNIX and certainly not Windows. That narrows the field considerably.

Some loose litmus tests

For verifiability, I think that three characteristics are essential. For each of these properties that are absent, the scale of the system you can verify (measured in lines of code) probably drops by a decimal order of magnitude.

  • Units of Operation: Kernels have what might be termed "preemption points." In a well-structured system, no resources are held by a process at any preemption point other than its own process state. This particularly excludes holding a stack while sleeping in the kernel.
  • Atomicity: No application-observable effects on kernel state (modulo latency) are performed until all of the resources required for a given kernel unit of operation are acquired. Once an effect is performed, the unit of operation executes to completion without possibility of preemption.
  • Type-Stable Memory: Once a type is assigned to a particular block of memory (e.g. by the storage allocator), that block of memory will never have any other type. In such systems, free() is never called.

The first point basically means that a well-structured kernel is a state machine, not a pushdown automaton. This has significant implications for the size of the state space that the prover must consider.

The first two points together effectively divide all kernel operations into non-interfering indivisible units. Since concurrency analysis must be performed against all pairs of indivisible units of operation, which is an N² analysis, having individual instructions as your unit (which is true in Linux) makes things rather harder to analyze.

The last point provides a weak safety property and eliminates the need to consider some nasty aliasing possibilities.

By these metrics, EROS and Coyotos are well-structured, L4 (when last I talked with them in depth) is nearly well-structured (there is a detail issue with unmap, but it is resolvable), the Microsoft Hypervisor seems (from discussion) to be well structured, but UNIX and its derivatives are not.

Setting Linux per se aside, how many developers do you know who actually know what a "unit of operation" is? The absence of certain kinds of structures among the design disciplines and patterns that we learn in computer science is quite striking. This is part of why software people have such a steep initial learning curve mastering processor architecture — the level of discipline required initially comes as a severe shock.

Is all this complexity necessary?

If we drop wanting to run arbitrary C code, would it be possible to load only typed assembly language, verify that all calls to the kernel go through an approved interface, and then run it? This would seem to reduce the overhead of a system call to that of an ordinary function call, giving massive performance benefits.

This would assume as a

This would assume as a precondition that it is possible to assign meaning to C programs, from which you might derive the typed assembly. Unfortunately, there are over 400 documented ambiguities in C99. Most are innocuous, but some are important and badly understood by programmers (notably the sequence point rules). In consequence, it is possible to assign meaning to a restricted C subset as compiled by a particular compiler, but not to a restricted C program in the abstract.

This is part of why static analysis tools for C aren't very precise.

Verification and Manual Memory

I really liked Shap's other paper on the basic requirements for a systems PL. After reading the specs for BitC, I have a couple questions.

It seems that BitC could be most concisely described as ML-light with Scheme syntax. I can understand that design direction if the emphasis is on verification, but one desideratum previously emphasized was idiomatic facility for explicit memory control. I was looking for that in the BitC description and didn't see it. Is it even possible to smoothly integrate manual memory control with a lazy functional language?

The other thing is the target use of BitC. I don't think it's ever said, but I assume the focus is embedded systems. Since people have been trying to write verified systems for at least four decades, is there enough accumulated understanding to be able to say that with such-and-such language it is realistic to aim for a verified system up to a specific size?

Quick responses

Since BitC is a functional superset of ML (ignoring the module system), I'm not sure that ML-lite is a good way to think about it. A better way might be "ML with first class state and explicit control of representation."

BitC isn't a lazy functional language, but the idioms I had in mind are not things you build into the language runtime. They are disciplines for how you use that runtime.

Though Coyotos is targeting embedded initially, BitC is general purpose. That said, yes, there is enough collective experience in the field to have a calibration on the size of systems that we can currently verify. Coyotos is stretching that boundary, but only by about 20% compared to previous successes, and we are hopeful that between the idiomatic restrictions in Coyotos (see above about what "structured" means) and the safety of the BitC language (not fighting certain battles in the prover), we will be able to get there. We'll have to find out the hard way how absurdly optimistic we are being. :-)