Andrei's answer to "Which language has the brightest future in replacement of C between D, Go and Rust? And Why?"

Excellent critique of potential C/C++ successors (from the author of D, also top of /r/programming ATM):

https://www.quora.com/Which-language-has-the-brightest-future-in-replacement-of-C-between-D-Go-and-Rust-And-Why/answer/Andrei-Alexandrescu

Money quote:

A disharmonic personality. Reading any amount of Rust code evokes the joke "friends don't let friends skip leg day" and the comic imagery of men with hulky torsos resting on skinny legs. Rust puts safe, precise memory management front and center of everything. Unfortunately, that's seldom the problem domain, which means a large fraction of the thinking and coding are dedicated to essentially a clerical job (which GC languages actually automate out of sight). Safe, deterministic memory reclamation is a hard problem, but is not the only problem or even the most important problem in a program.

I don't agree...

Rust's memory safety stuff is a pretty potent enabling technology. Aaron Turon's Crossbeam library is a really nice example of that.

One of the traditional advantages of gc'd languages has been that gc makes it a lot easier to implement lock-free data structures in them. Intuitively, the basic idiom is to take a snapshot and do your work on that while CAS-ing in a loop, so that you commit if you succeeded without interference and throw away your work if there was any -- and throwing away old work is a lot easier if you have a GC to clean up the data for you.
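
To make the idiom concrete, here's a minimal sketch (mine, not Aaron's code) of the push half of a Treiber stack in Rust: snapshot the head, prepare the new node, CAS, and retry on interference.

    use std::ptr;
    use std::sync::atomic::{AtomicPtr, Ordering};

    struct Node<T> {
        value: T,
        next: *mut Node<T>,
    }

    pub struct Stack<T> {
        head: AtomicPtr<Node<T>>,
    }

    impl<T> Stack<T> {
        pub fn new() -> Self {
            Stack { head: AtomicPtr::new(ptr::null_mut()) }
        }

        pub fn push(&self, value: T) {
            // Allocate the new node up front; the CAS loop retries until it lands.
            let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
            loop {
                // Take a snapshot of the current head.
                let head = self.head.load(Ordering::Acquire);
                unsafe { (*node).next = head };
                // Try to commit; this succeeds only if no other thread moved `head`
                // since the snapshot. On failure, retry with a fresh snapshot.
                if self
                    .head
                    .compare_exchange(head, node, Ordering::Release, Ordering::Acquire)
                    .is_ok()
                {
                    return;
                }
            }
        }
    }

Push is the easy half; it's pop, where another thread may still be reading a node you'd like to free, that needs a GC, epochs, or hazard pointers -- which is exactly the problem Crossbeam addresses.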

However, in this blog post, Aaron shows how to use Rust's support for linearity to make writing lock-free code about as easy as in Java, while still managing memory manually (and getting higher perf as a result).

(As an aside, Aaron really does live up to Andrei's description as a 10x theorist. If you have the chance to work with him, take it.)

This is about making a value

This is about making a value call on (a) safety vs. (b) programmer productivity vs. (c) run-time performance? What Andrei is claiming is that for most (though definitely not all) programmers looking for a C replacement, amping up (a) and (c) at great expense to (b) is not reasonable.

Rust really shows up the flaws of how traditional PL design fixates on theoretical aspects, semantics, and expressiveness, and cares nothing really about programmer experience. If PLDI and POPL got together and decided to birth a language...you'd totally expect to get something like Rust.

I don't think this is an accurate

I don't think this is an accurate description of Rust at all. If that were true, it wouldn't have a big community of people coming from Ruby and JavaScript to program in Rust. Additionally, the Rust developers have put lots of effort into the programmer experience, which shows when you use the compiler or the tools, for example.

That the communities of

That the communities of people attracted to Rust come from the web world is not really surprising given that this comes out of Mozilla. Organizational culture trumps language purpose (just like Go programmers come from C++/Python worlds).

I guess it really depends on how you define the "programmer experience." How much mental effort from the programmer is needed to write Rust programs? How do they offload that effort with tooling? As far as I can tell, Rust requires a lot from the programmer, especially with its more strict typing requirements, and has little in the way of tooling to offload this mental effort back to the computer.

The Racket/PLT community understands this better than most; and I hope these debates come more out into the open (safety at what cost?).

What's the expense?

This is about making a value call on (a) safety vs. (b) programmer productivity vs. (c) run-time performance? What Andrei is claiming is that for most (though definitely not all) programmers looking for a C replacement, amping up (a) and (c) at great expense to (b) is not reasonable.

Relative to C, Rust wins hugely on all three of (a), (b), and (c). So I don't understand this criticism that much.

Can you suggest something you would have done differently to improve (b) at the expense of either (a) or (c)?

Why would you say Rust wins

Why would you say Rust wins over C on (b)? This is like claiming greater pain now will lead to less pain in the long term. It might be true, but it is never obviously so (and it's usually a matter of faith that other sub-communities are free to ignore). Unless, of course, you have an empirical study that proves it one way or the other. Even (c) is not certain (Rust code isn't always faster than C code), but I'm not saying we should keep using C.

Rust makes programmers work at safety via types. Garbage collection is still a no brainer on productivity if you can handle its overhead.

But let's go back to Andrei's comment:

Safe, deterministic memory reclamation is a hard problem, but is not the only problem or even the most important problem in a program. Therefore Rust ends up expending a disproportionately large language design real estate on this one matter.

I guess the question is: for whom is manual memory management such a hard problem that they need rust...considering that it will dominate their program. It's not something I can just bite into or not if I need it. Or maybe this is unavoidable if one wants manual memory management to be safe; in that case, unsafe memory management (which still must exist anyway) is still competitive, and maybe we should look at just less-unsafe memory management.

Big productivity wins over

Big productivity wins over C:

* Algebraic Data Types
* True higher-order functions
* A real module system
* Package management
* Traits and expressive generic types
* A powerful, hygienic macro system
* Helpful compiler error messages
* Easy to use threads
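
To make the first few bullets concrete, here's a tiny sketch (mine, not from the thread) of an algebraic data type with exhaustive pattern matching and a higher-order function; in C this would be a tagged struct/union plus manual discipline.

    // Algebraic data type: the compiler checks that every variant is handled.
    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(s: &Shape) -> f64 {
        match s {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        let shapes = vec![
            Shape::Circle { radius: 1.0 },
            Shape::Rect { w: 2.0, h: 3.0 },
        ];
        // Higher-order functions and iterators, per the list above.
        let total: f64 = shapes.iter().map(area).sum();
        println!("total area = {total}");
    }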

Those are features. They are

Those are features. They are good, nice-to-have features, but that doesn't say much about overall productivity. If the overhead of manually managing memory safely (or, say, sharing memory in a safe way to avoid race conditions) is too big, then you still aren't actually getting more work done.

I totally disagree--those

I totally disagree--those features are big increases in overall productivity. It might be that the cost of writing correct low-level code makes it overall less "productive" than writing incorrect code in C, but those features are major advantages in productivity for Rust.

You can't really state that

You can't really state that as a universal, at least without evidence. There will always be domains that screw up the "always better" generalization.

So I've written only one big project in C, the Kimera Java bytecode verifier that got us (Emin Gun Sirer, Brian Bershad) a lot of press back in the day. There are definitely things done in Kimera that would not necessarily be easy to do in Rust...e.g. managing memory with arenas, casting from pointers into byte buffers to padded structures, then assigning them hand-rolled vtables for processing.

Arenas quickly made ALL memory safety questions moot (we had to be memory safe, because we were interested in testing lots of malformed classfiles, which over millions of iterations eventually led to the vacuum bug, which made us rich and famous...well...maybe not so much). V-tables dealt with what would otherwise have been done with pattern matching, but I'm not sure pattern matching would have been so much better. When I moved from Scala to C#, I didn't find it to be so necessary (maybe 1 or 2% better, but definitely not hugely better).

Skepticism?

You're right that we lack evidence, but I'm not sure you can argue the point while claiming we know anything about productivity from PL: the field has too few empirical studies, and to my knowledge no reproductions.

According to either standard PL theory or programmer's gossip, those features are a productivity bonus. A C programmer who's hunting a memory corruption bug would probably also agree. Of course, that's still no proof, but can we even do a proper study?

insert more-is-better parable here

It's true that hunting down memory corruption bugs is a hit on productivity. I get assigned those sometimes. (Some folks have no chance at all of tracking them down, and don't get such assignments.) A big problem in debugging them, though, is coping with code that makes no sense at all, in which a memory corruption is lurking. By this I mean you spend more time looking at code to figure out what it does than it took to write it in the first place; debugging once can exceed the cost of a rewrite, if only you knew what it was supposed to do.

When I'm not debugging memory corruption, the biggest hit on productivity is the same: figuring out what code is supposed to do, both in the large and in the small. Sometimes tracing all the code and reverse-engineering its purpose is the only thing that explains intent better than the vague hints given by names. Nailing down the last twenty percent of semantics is hard. And usually it's inconsistent, because gnarly details are often developed by different people, who held varying theories about goals and methods, so plans are mixed and matched and don't agree, resulting in subtle bugs. Such disagreement bugs are so common that they prevent solving for intent when reverse engineering, so you have to think of a better plan that would have worked, just to nudge the code into a stable state.

A downside to making devs more productive in quantity is you'll get more of this, taking longer to debug, for a net increase in costs. So you also want better quality at the same time, not just in fewer memory corruptions but more crucially in having better clarity of organizational intent. The idea of merely taking away crashes due to memory corruption is terrifying, because this random penalty helps stop folks from checking in larger messes more frequently, which they would love to do. Tech enabling sloppy thought is probably a trap of some kind to be avoided. There's a risk of making bureaucracy creation easy, which is obviously messed up, but hard to refactor. You want a metric of goodness proportional to power (clarity and effectiveness) divided by lines of code, instead of lines of code divided by crashes. More in volume is not better.

It is really impossible to

It is really impossible to claim that X is "obviously better" than Y, even for the crustiest languages possible (R, MUMPS, PHP, Perl, Cobol, Assembly); they all have use contexts where they are actually the better tool than a more well-designed language Y.

So at the end of the day, it really depends what you are doing. It is possible to write C code and not be bothered by memory corruption bugs (they occur of course, but could be easy to debug, and your memory architecture could be fairly straightforward). The question then becomes...how much of my pain is dominated by the problems with language X, and how much of it is fixed by Y in balance with the trade offs it takes?

Hence why we are talking about Leg Day at all. Is Rust balanced enough to supersede all uses of C (keep in mind that C was an application language long before it was relegated to systems language status)? Was that even a design goal for them?

Perhaps I should only be looking at Rust if I have lots of memory corruption bugs but I can't take the perf hit and use a GC'd language? Maybe, I don't really know what the language designer intentions were.

No, it's less pain now

Why would you say Rust wins over C on (b)? This is like claiming greater pain now will lead to less pain in the long term.

Rust's borrow checker basically checks correct usage of the standard C/C++ memory-handling idioms (i.e., unique ownership, stack allocation, and reference counting). This creates a much tighter, faster feedback loop: rather than weird memory corruption errors at runtime, you get compile-time errors with reasonably helpful diagnostic messages.
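
A tiny illustration (mine) of the unique-ownership idiom being checked; the commented-out line is the class of bug that in C would have been a silent use-after-free or double-free:

    fn main() {
        let v = vec![1, 2, 3];
        let w = v; // ownership of the heap buffer moves to `w`
        // println!("{:?}", v); // if uncommented: compile-time "use of moved value" error
        println!("{:?}", w); // the single remaining owner frees the buffer exactly once
    }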

This is in addition to all of the features that Sam pointed out (which I agree with) -- and I should note that every one of those features has at least as much empirical evidence for its benefits as garbage collection does.

I guess the question is: for whom is manual memory management such a hard problem that they need rust...considering that it will dominate their program.

Basically, anyone in a memory-constrained environment.

Once you have twice as much RAM as your working set, gc is competitive with manual memory allocation. If you have less, performance falls off a cliff (60x slowdowns), because tracing interacts pessimally with the LRU algorithm that kernels use to decide which pages to swap to disk. Basically, your gc roots are the hottest data in the program, and since you begin tracing there, LRU means the kernel will put your root set onto disk.

(Note also that if your program is one of many running on the computer, you may have a lot less RAM than the computer does.)

GC is important for functional data structures

One of the traditional advantages of gc'd languages has been that gc makes it a lot easier to implement lock-free data structures in them

I always thought that GC was most important for implementing purely-functional, immutable data structures, that are "updated" by modifying only a part of the data structure and sharing other parts.

I imagine that even a list would be very hard to implement in a non-GC language (while still remaining useful), let alone something more complex, e.g. a tree or a hashmap (which are usually implemented as tries).
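
For the list case at least, reference counting can stand in for a tracing GC; here's a minimal sketch (mine, not from the thread) of a persistent cons list with structural sharing in Rust:

    use std::rc::Rc;

    // A persistent (immutable, structurally shared) cons list. Rc plays the role
    // a tracing GC would play in a functional language: a shared tail is freed
    // only when the last list pointing at it is dropped.
    enum List {
        Nil,
        Cons(i32, Rc<List>),
    }

    fn sum(l: &List) -> i32 {
        match l {
            List::Nil => 0,
            List::Cons(x, rest) => x + sum(rest),
        }
    }

    fn main() {
        let tail = Rc::new(List::Cons(2, Rc::new(List::Cons(3, Rc::new(List::Nil)))));
        // Two lists sharing one tail; "updating" the head copies nothing.
        let a = List::Cons(1, Rc::clone(&tail));
        let b = List::Cons(0, Rc::clone(&tail));
        println!("{} {}", sum(&a), sum(&b)); // 6 5
    }

Cycles and concurrent lock-free structures are where plain refcounting stops being enough, which is where schemes like Crossbeam's epochs come in.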

Edit: actually, Crossbeam's documentation provides some explanation of why lock-free data structures are hard to implement without a GC.

If you are using purely

If you are using purely functional data structures, you probably don't care about lower-level high performance and GC is completely acceptable. (we can argue if functional data structures are useful for high-level language performance, but that isn't where Rust is anyways)

something about the issue seems moot

Note that almost-pure functional data structures can be done in imperative languages like C as well. (The "almost" is that some top-level index of the data structure gets updated, though not the content inside, which might be shared with anyone who can see it. The index is private and safe to mutate.)

This is suitable for things needing high performance at a low level, say network switches, provided copy-on-write with refcounting is used. The CoW is what gives you the almost-pure-functional behavior, with RC as the GC mechanism. I can never figure out what these sorts of discussions aim to be about, because the issue of GC vs non-GC seems irrelevant. You can always pursue a policy of not updating shared state, with no locking involved for locally visible CoW patching of data that is not shared.
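
Rust happens to package exactly this CoW-with-refcounting policy in its standard library; a minimal example (mine):

    use std::rc::Rc;

    fn main() {
        let original = Rc::new(vec![1, 2, 3]);
        let mut edited = Rc::clone(&original);
        // Copy-on-write: make_mut clones the vector here only because it is shared.
        Rc::make_mut(&mut edited).push(4);
        println!("{:?} {:?}", original, edited); // [1, 2, 3] [1, 2, 3, 4]
    }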

One day a student came to

One day a student came to Moon and said: “I understand how to make a better garbage collector. We must keep a reference count of the pointers to each cons.”

Moon patiently told the student the following story:

“One day a student came to Moon and said: ‘I understand how to make a better garbage collector...

parables of infinite regress

I know this isn't traditional form.

Stu: Let's find this Moon guy and pop a paper bag behind him.
Ned: He's actually a pretty nice guy, give him a break.
Dex: What was the point of that story? What was the nesting?
Wil: That was a cycle, and refcounting is bad at collecting cycles.
Ned: Right, so don't make strong-ref cycles, only weak-ref cycles.
Stu: What if I don't realize I'm making a cycle in refs?
Wil: Oddly enough, you can write code to walk a graph and find cycles.
Ned: And then hopefully generate a fatal error to punish you for it.
Stu: Got a paper bag with your name on it too, wise guy.

of course, it depends

There are cases where purely functional data structures and GC can lead to better performance. In fact, performance so much better that it enables previously impossible usage. You may just need to be willing to compare apples and oranges, since our goal is a whole farm independent of individual fruit.

Consider Photoshop's history brush: It's basically super-fast per-pixel arbitrary-depth undo. It works via persistent data structures, presumably something along the lines of a quad-tree representation of the image, with structural sharing for historical versions. Just as with Clojure's "transients", the "top" of the undo stack can be editable, but encoded in such a way that it can be made persistent in O(1) time.
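
A rough sketch (mine, purely hypothetical, not Photoshop's code) of that kind of structure: a persistent quadtree where "painting" one pixel copies only the nodes on the path to it, so every earlier root remains a valid undo point.

    use std::rc::Rc;

    #[derive(Clone)]
    enum Quad {
        Leaf(u8),            // a single pixel value (this sketch expands down to pixels)
        Node([Rc<Quad>; 4]), // four quadrants, structurally shared between versions
    }

    // "Paint" pixel (x, y) in a square region of side `size`, returning a new root.
    // Only the O(log n) nodes on the path are copied; old roots stay intact.
    fn set(q: &Rc<Quad>, size: u32, x: u32, y: u32, v: u8) -> Rc<Quad> {
        match &**q {
            Quad::Leaf(_) => {
                debug_assert_eq!(size, 1);
                Rc::new(Quad::Leaf(v))
            }
            Quad::Node(kids) => {
                let half = size / 2;
                let idx = (y >= half) as usize * 2 + (x >= half) as usize;
                let child = set(&kids[idx], half, x % half, y % half, v);
                let mut kids = (*kids).clone(); // four Rc bumps, no pixel data copied
                kids[idx] = child;
                Rc::new(Quad::Node(kids))
            }
        }
    }

Keeping a stack of old roots is then the whole undo history.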

Also consider how reference equality checks make Om / React+Immutable.js faster than naive React.js out of the box with JSON data: http://swannodette.github.io/2013/12/17/the-future-of-javascript-mvcs/

I found it too light

The short blog lightly touches upon the biggest strengths and weaknesses of various languages with a few bullet points, but I found it somewhat superficial.

Personally, I estimate that D sits somewhere between C and Java, and lacks enough compelling features to replace either. It's a shame Java isn't discussed, but I guess that is because the author proposes the idea of D as a better C, not a Java substitute.

This is an answer on Quora,

This is an answer on Quora, not a blog entry, and the question did not ask for comparisons with other languages out of the three.

Ah. Very true.

True, but given that he also moves in the direction of D as a "garbage-collected C++," I still consider Java to be the elephant in the room. It deserves to be treated as a contender.

Quora woes

I find it frustrating that people would write and grow their interesting comments inside for-profit walled gardens, when the open web has produced many good places for these interesting discussions with a good level of discourse. Are those high-profile people paid by Quora, or do they just not care about the consequences of their technological choices?

(I asked other people in the past and they mostly said "meh, it's convenient and I don't choose where people will ask the thought-provoking question".)

Ada?

Playing with Rust a bit actually motivated me to start investigating Ada instead. It seems to me that, aside from move semantics, Rust offers roughly similar features to Ada. Except that Rust doesn't have the track record that Ada does, and it doesn't have the focus on large-scale software engineering and readability that Ada does.

Of course, Ada has been around a while, so if it was going to become a successor to C one would think that it would already have done so.

Ada could have a nice core

Ada could have a nice core language, but it has too much concept proliferation for me. Some of them are well-motivated, particularly at the time, but the plethora of similar but slightly different concepts scream for more general abstractions to reduce the mental burden. I think Rust achieved a good balance here by comparison.

i should make snake oil money

by selling a fake Rust to Ada compiler.

Memory management in 2015

I pretty much entirely agree with Andrei on Rust.

It's 2015 and we are still lacing our programs with memory management logic, obsessing over stack allocation of objects, trying to pool objects, implement custom allocators, enforce some ill-conceived notion of linearity, and use object lifetime to try to manage the lifetime of very non-object things like database connections and files. We're worrying about too many fine-grained details, sharing too much mutable data, and overall working at the wrong abstraction layer.

Rust takes the memory management community squarely into a new, seemingly fruitful research direction. But in reality it codifies a dead-end idea: the assumption that the average programmer actually has any clue about how to manage memory, especially in a highly malleable and evolving large application. So it seems like a step forward but it's really a step sideways into a new era of annoying and confusing programmers with mundane details that they've time and again proven they're neither particularly interested in nor good at. Devising a new language that slaps programmers in the face more often isn't a solution.

Managing memory is better done automatically. It's the inevitable evolution of humanity's struggle to make programming tractable at scale, and we should stop the sideshows and focus on making fully automatic techniques hit the performance metrics appropriate for each use case. And we should stop the FUD about GC and build tools for programmers to find and correct GC-related performance bugs in their programs, which almost always arise from sheer sloppiness in over-allocation. That way programmers can get back to worrying about more important design problems like how to better modularize, test, and scale their programs.

wanted: animatronic disembodied slapping hand

Until the automagic system can actually solve our problems (edit: or until we all know how to use what we already have to have that, if our tools secretly can do it but we're most of us just too ignorant/dumb to know), we do need to be slapped in the face. Unfortunately. E.g. talk to a video game developer (hi there!), or mobile developer (hi there!), or anybody who cares about always responsive ux (hi there!) and then come back and tell me their worries aren't warranted. In other words, while I very strongly do agree with you, on the other hand, I strongly disagree. The truth for such things is, 9 times out of 10, "it depends." And the state-of-the-art is not up to snuff for anything other than some well-heeled server-side stuff.

I'll go with Gibson on this

I'll go with Gibson on this one. The future is already here, it's just not evenly distributed yet.

GC has made a ton of progress in the past 15 years. Hard real-time collectors with worst-case latencies in the sub-millisecond range have been developed; they're just not widely deployed. Concurrent collectors for terabyte heaps with soft real-time guarantees measured in milliseconds exist, they just aren't everywhere. And incremental collectors for single-threaded programs that still achieve a smooth 60fps exist, too.

There is already a lot of great GC work and judging from long-term trends, challenges just keep getting solved. I wouldn't bet against GCs getting better. And whole-program region analysis and escape analysis keep getting better too.

The real problem is an either-or mindset driven by both ends of the spectrum: from sloppy Java and JavaScript programs that gorge themselves on memory and just expect magic from collectors, to the polar opposite, programmers who consider themselves heroes for intricate manual memory management that saves a little bit of memory but leads to literally tens of thousands of security vulnerabilities.

Some GCs are hamstrung by having to deal with a mix of explicitly managed memory within a single app (e.g. Android) or other language features like interior pointers or finalization that generally make life difficult. If we're in the language design ballgame, these features should be avoided at all cost to preserve our best bet, which is an advanced garbage collector.

The more sophisticated the

The more sophisticated the GC, the larger the TCB. It becomes a not-insignificant part of development time spent on the language. There's still a need for a systems language that can make it easier to handle these low-level details and ensure they're correct.

Furthermore, that will arguably never go away. Interest in microcontrollers with 100 kB of memory is only increasing, which means a systems language is still a growing market.

100kb = luxury

For these small microcontrollers, programs typically just preallocate everything and never allocate anything dynamically. E.g. the first version of Virgil targeted Atmega128, a 4KiB RAM device.

Many different garbage collectors.

Is writing the GC in 'C' acceptable? Personally I think a language with explicit memory control is needed to write a good garbage collector. I also think that, like the no-free-lunch theorem in optimisation, no GC is optimal for all problems. Hence a language that allows domain-specific garbage collectors to be written by experts and used by other programmers is the way to go. This is why I think a language to replace 'C' cannot be garbage collected (or at least cannot have one true garbage collector baked into the runtime that you cannot inspect or replace).

My solution is a language with no built-in GC, but with the necessary hooks to allow an efficient GC to be implemented in a library (this would include some kind of reflection so allocations can be coalesced, etc.). Then a few starting GCs can be included in the standard library, and of course the community can add new and better ones for specific domains.
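
As a sketch of what such hooks might look like (the names and shape here are mine, not an existing API), the library-GC contract could start as small as:

    // Hypothetical interface: the runtime exposes roots and allocation requests,
    // and any collector that implements this trait can be plugged in.
    pub trait Collector {
        /// Allocate `size` bytes with the given alignment, possibly triggering
        /// a collection first if this collector decides it needs one.
        fn alloc(&mut self, size: usize, align: usize) -> *mut u8;

        /// Called by the runtime with every root it knows about at a safepoint.
        fn trace_root(&mut self, root: *mut u8);

        /// Reclaim everything not reached since the last round of `trace_root` calls.
        fn sweep(&mut self);
    }

The hard part, as the replies below note, is that a real collector also needs the compiler's cooperation for stack maps, object layout, and write barriers.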

Honest question: could a

Honest question: could a garbage collector be written in Rust without resorting to unsafe code? It seems like some problems would defy memory safety no matter what generic technology you had.

That Depends.

It depends on what you mean by memory safety. Memory can be modelled by an array, so a program that allocates a block of memory at the beginning that is big enough for it to complete its task, and then frees it at the end, is safe. You could then write a garbage collector for the array cells; however, the problem is then different: you would need to show that no live reference is discarded. This would suggest that the type system needs to be capable of expressing reachability proofs, such as: given a set of roots R, prove algorithm X visits all nodes reachable from R for any graph, and prove algorithm Y deletes only the nodes not marked by X. Off the top of my head, this suggests some kind of equivalence class, such that all nodes reachable from node R1 are in the same equivalence class. We could then express a function type that takes the roots and returns their equivalence classes. The GC function would have a type something like, "given a container and a list of roots in the container, it returns the container with only the elements in the same equivalence class as any of the roots".

I don't know enough about Rust to say if its type system can express this type, but I would guess not. You probably could express this using dependent types in a proof assistant. I think the proof would not be fully automatable, but since I think new GCs would only be written by experts, some strategy annotations would be acceptable. So I would expect a GC library to consist of the code, annotated with sub-proofs and strategy hints, along with the final 'contract' type it is being proved to comply with. I would expect this to actually be checked by the compiler, not by an external tool.

That makes sense. At some

That makes sense. At some level of complexity, dynamic analysis is needed (as the VLIW people sorely found out). Many operating-system memory-management problems are quite similar to GC, so I really wonder how they are hoping to write a safe OS with Rust.

In the same vein, it will be interesting to see how Servo turns out.

potential holes from unexpressed things

It does seem like some problems defy memory safety, in the sense that handling them requires an unsafe API (imitating C-style access), because checking results via typing is hard, as noted nicely by Keean. You would have to proceed by saying: this seems dangerous, but I'm sure I captured the requirements in types correctly. If you missed something, say due to a subtle bug in your plan, types probably won't tell you a part of the unexpressed plan is missing.

Suppose your plan needs two things, A and B, to be true in order to work, but you focus on A as the main issue and don't really notice B. Then you express A with types, so this gets checked, but B is never checked because it is never described. You might even achieve B correctly, perhaps by intuition, but in a way no one notices must be maintained through future changes. Mysterious failures might occur after maintenance, without any type-system-based explanation. A false sense of security might come from type drama.

That was vague, but a hypothesis about potential holes can be fuzzy:

Ned: What if the water in that vessel leaks out of a crack?
Stu: Which crack do you mean?
Ned: There might be a crack, perhaps a fine one hard to see.
Stu: Unless you show me a specific crack, I'll assume there isn't one.
Ned: You can't be serious.

there is no panacea, but it goes better for me with types anyway

a) I've had the exact same thing happen multiple times to me and co-workers with (e.g. unit) tests.

b) So when you find out you goofed, you add the type! And that helps make the fixing go faster and better.

I believe that the Cyclone

I believe that the Cyclone people wrote a mark-sweep collector safely in Cyclone, but were unable to write a generational one. That's 3rd hand, though.

I wrote the Virgil garbage

I wrote the Virgil garbage collector for Virgil in Virgil. It required the addition of an unsafe Pointer type, which can be used to load and store primitive types to memory. The GC simply does its work with raw pointers into memory. The collector knows the start of each of the spaces. To find roots, the compiler provides the GC with a bitmap for the initialized data section's mutable references. The GC can walk the stack using raw stack and code pointers by interpreting frame information generated by the compiler. It doesn't need to reflect on objects; it can simply decode the bits which the compiler emitted into (fixed-size, read-only) metadata tables that describe where references lie in each object type.
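
A rough sketch, in Rust rather than Virgil and with invented names, of the metadata-driven scanning described above: the compiler emits one fixed, read-only record per type saying which words of an object hold references, and the collector walks raw memory against that bitmap.

    struct TypeInfo {
        size_in_words: usize, // up to 64 words covered by the bitmap in this sketch
        ref_bitmap: u64,      // bit i set => word i of the object holds a reference
    }

    /// Safety: `obj` must point to a live object laid out as `info` describes.
    unsafe fn scan_object(obj: *const usize, info: &TypeInfo, mut visit: impl FnMut(*const usize)) {
        for i in 0..info.size_in_words {
            if info.ref_bitmap & (1u64 << i) != 0 {
                let slot = obj.add(i); // address of the i-th word of the object
                if *slot != 0 {
                    // Follow the non-null reference (mark it, copy it, etc.).
                    visit(*slot as *const usize);
                }
            }
        }
    }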

Remark

But just to be sure, that doesn't qualify as a library, though, right? The GC still has intimate knowledge of, and specific support from, the compiler and runtime, and vice versa.

I'm just saying that because Keean keeps bringing up this folly of GCs as random user-land libraries. Not to give anyone the wrong idea that this would actually be realistically possible, let alone efficient... ;)

Indeed

The compiler and GC communicate through data structures generated by the compiler that have a very specific and dense encoding. The GC knows the heap memory layout and the compiler knows what, if any, write barrier the GC needs. The GC gets help from the runtime, which can walk the stack, and the runtime gets help from the compiler in order to do that stack walking.

I am also skeptical about a magical interface there. It's just not a dimension of extensibility that will be stressed enough to try the very hard problem of making a general interface.

It's something some Rust

Designing GC as a library via high-level interfaces is something some Rust devs are attempting now in fact.

Pessimal GC

What do I do if a language's GC is pessimal for my application? Would you suggest I use a different GC'd language? What if they all converge on generational mark/sweep, or, worse, all share a GC from a framework like LLVM to avoid writing their own? It would mean I have to write my own memory management, and obscure my code with concerns like coalescing allocations. I would rather separate the memory management from the application code, so that the algorithm is readable, understandable, and maintainable. To do this the custom GC needs access to the right information.

I don't see the problem with reflection. The stack can be a visible object with an abstract interface. Objects can have an abstract interface for getting references.

T written in T

The GC for T, an early Scheme dialect, was written in T. Of course, the people who wrote it understood when the T compiler would allocate and when it wouldn't, which is not a trivial matter in a language that depends as heavily on closures as Scheme does. They also wrote the system in such a way that it could handle interrupts between any two machine instructions in user code with no risk of losing the plot.

Freeze dried Gibson head for sale!

What flavour is it?

It's a freeze dried head, innit? Bloody Gibson flavour!

Wake me up when the Vanilla Sky future is here and I can actually hit my project goals with the GCs available to me and my customers.

Make no mistake (heh, I utterly loathe that phrase, I just had to use it), I actually much prefer living in a GC world! I wish I really could stay there all the time! e.g. My favourite language on earth is SML! But the GCs just aren't up to the task.

P.S.: If you are coming from an angle like, "what should an aspiring language developer do today?" then I do not at all mind if they say, "forget about uniqueness typing, i'm just going to implement every nuance of everything Bacon@IBM ever wrote" and that's GREAT in my book. (Dunno when it would ever actually ship, tho. :-)

C++ has too many problems

These languages are failing (to completely replace C++) because they're trying to fix too many C++ defects at the same time. Go fixes C++'s non-existent module system and absurd compilation times, but it also "fixes" the lack of GC, and loses generics. Rust fixes the module system and the lack of GC and important safety issues but fails to fix the absurd compilation time.

Obviously, there is a class of user (which is apparently large) that doesn't really care about safety or the pitfalls of manually managed memory allocation, but does care about uncompromising performance.

It's unlikely that one language can completely replace C++, however some combination of these (and other) better languages (with the right big-software-corp support) probably could.

But it seems that a (maybe unsafe), no-GC, imperative language with a C-like syntax that compiles fast, is portable, has uncompromising runtime performance, and has good IDE support is a missing link. Such a language is still obviously flawed (from an LtU point of view), but would be a _huge_ improvement over C++, which is just shamefully, laughably poor.

Personally I think Rust has a chance to do that long term - and provide safety, and doesn't deserve the bashing it's getting in this thread.

Leg day is just about

Leg day is just about balance, it's not bashing. Going back to Andrei's post, the criticism is that Rust puts memory safety front and center over productivity and accessibility. It is a valid point, and mostly an economic one. Who is to say how Rust evolves. If it attracts a certain crowd, it could get stuck in a very narrow safety niche.

What today is the benefit of writing a game engine in Rust?

Going back to Andrei's post,

Going back to Andrei's post, the criticism is that Rust puts memory safety front and center over productivity and accessibility.

That's a mischaracterization of Rust's goals. Rust is designed to increase productivity in domains where manually managed resources are necessary, in which case emphasizing the safety of deterministic acquire/release semantics on resources is critical to productivity.

manually managed resources are necessary

and even most languages with GC still fit under that rubric, since few give us GC for, e.g., file handles.

Hmm, if that's so, the whole

Hmm, if that's so, the whole question is kind of a red herring. So if you misapplied Rust as a C-substitute, you could get burned depending on the domain you were using C in.

Not really?

OTOH, if before Rust you had a good reason to use a non-GC language because you needed deterministic resource management, Rust is for you. If you need C for other reasons, Rust might be a bad idea.

Note that the original

Note that the original question asked to compare Rust, D, and Go as C successors, so the question already left that open (D and Go have GC).

If you misapply anything

If you misapply anything you'll probably get burned, so I'm not sure exactly what you mean by that.

The question of replacing C is still valid, not a red herring, and Rust is still a viable answer. I'm not sure what you mean by that either.

Benefits

I don't think it's viable at the moment as it lacks (at least):
1) Fast compilation
2) Broad big-software-corp support
3) Portability to consoles
4) Credible IDE support
[Edit] I forgot to mention:
5) A working debugger - C++ debuggers are highly unusable: for performance reasons UE4 uses "Development" mode which means compiler optimizations are enabled and debug information is also generated - the result is many functions are inlined and variables optimized away which makes debugging (again laughably) a hit-or-miss "programmer experience".

Each of these seems solvable, but who knows.

All I can say is it's incredible to me that C++ is still the only option.

re: C++ is still the only option

Yes/no. It isn't the only one, even if it is in some senses the "top" one. I've done cross-platform development and "just use C++" is not an instant panacea there. So "just use Java, or Ocaml, or Haxe, or anything that targets LLVM" is possibly not *so* much worse that it is to be utterly disregarded.

The issue of 3rd party libraries is to me the strongest lock-in for C/++.

UE-4

Evidently nothing that you mention was considered a viable option by Epic.

yup

and the same for other companies as well. Sad, especially so since Mr. Sweeney works on making better languages and tools. I wonder to what degree it is due to 3rd party libraries? 3rd party tools? To being able to hire people? To what the video game industry already uses predominantly? Maybe with enough 3rd party static and runtime analysis tools applied to C++ it becomes survivable?

There are many different

There are many different ways to use C++. With smart pointers, for example, memory management isn't such a big deal (and the perf costs are at least mostly real time). Then of course, OO is still a big thing in games where it actually makes sense.

C++ compile time

It takes over an hour to compile the engine - with 4 parallel tasks. You tell me if that's "survivable". Well, it is for users since they don't compile the engine. But I'm sure Epic developers wouldn't mind something better - I know I'd appreciate it. Anyways, at least I can get my leg workout in while it's compiling after I change one line in a core header file...

I think Google solved this

I think Google solved this problem internally with some magical build reuse/caching. Such clever hacks would apply to Rust also, I guess:

http://google-engtools.blogspot.com/2011/09/build-in-cloud-distributing-build-steps.html

One of the complaints about the Scala compiler is that it is too slow. But when I was working on the IDE, I found it rather easy to make it fully incremental in a pretty direct fashion (basically, trace AST dependencies and only re-type-check what could have been affected by a change). Of course, there is a tradeoff with memory here, but not much above what the Scala compiler already uses for the resident compiler (which is file-level incremental, and much more conservative with dependencies).
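
A toy sketch (mine, not the Scala IDE code) of that strategy: record which definitions each definition depends on, invert the edges, and re-check only the transitive dependents of what changed.

    use std::collections::{HashMap, HashSet, VecDeque};

    // Given a map from each definition to the definitions its type depends on,
    // compute the set that must be re-type-checked after `changed` was edited.
    fn affected(deps_of: &HashMap<&str, Vec<&str>>, changed: &[&str]) -> HashSet<String> {
        // Invert the edges: who depends on X?
        let mut dependents: HashMap<&str, Vec<&str>> = HashMap::new();
        for (&node, deps) in deps_of {
            for &dep in deps {
                dependents.entry(dep).or_default().push(node);
            }
        }
        // Breadth-first propagation of "dirtiness" from the edited definitions.
        let mut dirty: HashSet<String> = HashSet::new();
        let mut queue: VecDeque<&str> = changed.iter().copied().collect();
        while let Some(n) = queue.pop_front() {
            if dirty.insert(n.to_string()) {
                for &d in dependents.get(n).into_iter().flatten() {
                    queue.push_back(d);
                }
            }
        }
        dirty // re-check these; reuse everything else from the previous run
    }

    fn main() {
        let deps = HashMap::from([
            ("Shape", vec![]),
            ("area", vec!["Shape"]),
            ("render", vec!["area"]),
        ]);
        // Editing `Shape` dirties `area` and `render` too (printed in some order).
        println!("{:?}", affected(&deps, &["Shape"]));
    }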

Google solved this problem

Is "solved" the right word? "Insane workarounds" comes to mind. Anyway it won't solve the problem I mentioned. The dependencies are real and no cache can help in this case.

People tend to have the same

People tend to have the same dependencies. And going from build X to build Y tends to be a small delta. Build systems can be made smarter to exploit that. Now, how much work that takes, and whether it is worth it, is another matter.

Tools matter. If you design a language, you can make trade offs between the language and the tooling that will support it. Making the language more modular means your tools have less work to do in achieving things like separate compilation, but it definitely isn't the only way.

C++

C++ has fundamental flaws that ensure its dismal compilation speed. Your remarks, although true in general, can't counteract that.

Have you looked at Bazel

Have you looked at Bazel yet?

http://bazel.io/

The problem is C++ include files and templates

Andreas's comment:

I have my sincere doubts that compilation complexity has ever been a serious concern in the design of C++, given that it is the language with -- by far! -- the most epic compile times I have ever worked with. Far past anything that would qualify as "acceptably fast" even in its current state. Case in point: a significant amount of the internal infrastructure at Google is devoted to vastly parallelising, globally caching, and otherwise improving build times for C++ projects. C++ would be flat out unusable at that scale without all this infrastructure -- because include files make compile times quadratic, and templates make them exponential.

As I think I mentioned elsewhere, UE-4 has a home-grown module system which prevents you from mistakenly including header files from modules that you don't depend on, as well as pre-compiled headers and parallel compilation. So, again, in the above case the dependencies are real: the number of files affected is not huge (800 or so), but it takes an hour to compile them, not due to incorrect dependency handling but due to the sheer (and unavoidable) quadratic-or-worse cost of processing header files.

[Edit]
Another comment of his in the same thread describes C++ as an

arcane monument of accidental complexity

LOL. Sadly that's not hyperbole. It's a daily don't-know-whether-to-laugh-or-cry "programmer experience".

I'm not really a fan of C++,

I'm not really a fan of C++, I definitely don't want to be using it myself! I just subscribe to the holistic idea that languages can't be considered in isolation. So you could design a language, say like Rust, and use clever compilation techniques to achieve compilation speed requirements. The PL field has been incredibly bad at leveraging incremental methods so far, and there is a lot of room for improvement in the way we build our tools!

Early adopter vs stable

Concerns 2, 3, and 4 are relevant for some users, but they aren't intrinsic to the language; they just mean that Rust hasn't succeeded yet, so it's still for early adopters ;-). Almost every language has such a phase.

Concern 1 (fast compilation) seems more intrinsic (though it's also partly due to young tools).

Who will the US Government and EU pick?

That is frankly who wins.

is there no nuance there?

er... whatever happened to Ada? I only half mean that facetiously. I mean, did industry really take up Ada when it was "big" in the US Gov? And did the US Gov drop Ada because industry used C/++?

Ada lacks good container abstractions

I like Ada, but I ran into problems trying to implement generic containers. I found it almost impossible to implement safe, efficient, and simple container abstractions. In the end I had to resort to pointer manipulation inside the containers, to get things to perform as fast as C++, which is considered bad form in Ada. The fundamental problem comes from the overhead of array index calculation. For example, consider trying to apply a 3x3 convolution filter to a 2D array. 'C's pointer arithmetic seems optimal for this, as you can have a 9-element array of memory offsets. You need to ensure two things (best done in the type system): that the pointers align with the start of each element (handled by 'C' semantics), and that you don't access memory outside the array (not handled in 'C'), ideally by statically proving it cannot happen rather than automatically inserting runtime checks.
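
For what it's worth, the same nine-offset idiom can be written in safe Rust (a sketch of mine, below); the bounds check on each access is exactly the overhead being discussed, unless the compiler can elide it or you drop to unsafe/get_unchecked.

    // Apply a 3x3 kernel using a precomputed table of nine flat offsets.
    // Assumes w, h >= 3; only interior pixels are written.
    fn convolve3x3(src: &[f32], dst: &mut [f32], w: usize, h: usize, k: &[f32; 9]) {
        let wi = w as isize;
        let offs: [isize; 9] = [-wi - 1, -wi, -wi + 1, -1, 0, 1, wi - 1, wi, wi + 1];
        for y in 1..h - 1 {
            for x in 1..w - 1 {
                let center = (y * w + x) as isize;
                let mut acc = 0.0;
                for (&o, &kv) in offs.iter().zip(k.iter()) {
                    acc += src[(center + o) as usize] * kv; // bounds-checked access
                }
                dst[y * w + x] = acc;
            }
        }
    }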

"commercial best practice"

Ada was popular for a while, but the DoD's shift towards letting industry do its own thing instead of imposing MIL standards (a shift to so-called "commercial best practices" that was supposed to save money but probably didn't) caused Ada to fall out of favor. I can recall talking with a flight software developer from Hughes Space who was in the process of moving his software development from Ada to C++. The rationale, at least according to him, was that the company found it easier to find C++ developers than Ada developers. In other words, it was purely a business decision rather than anything to do with the quality of the language. One might argue that any C++ developer who couldn't easily get up to speed on Ada probably isn't a developer you want on your team anyway, but apparently that wasn't an argument that carried a lot of weight.

Having said all of that, Ada is still used for some safety-critical systems development, and the language itself is still actively maintained (the latest standard is Ada 2012).

Self-driving C++

I am still quite surprised that the codebase for the Google self-driving car is mostly C++ (AFAIK, I haven't looked). There is a lot of testing, but even so, I would have expected safety-critical code to be written in a more robust language.

F35

The F35 too.

A majority of the logic is

A majority of the logic is machine trained anyways. Most safety issues are stochastic, so what would the safer languages buy you?

A variety of high-profile

A variety of high-profile bugs like the Ariane 5 would tend to disagree with you. Code for rocket engines and safety systems isn't trained by machines but written by domain experts who also need to summon heroic software engineering talents. Handing them a C++ shotgun and telling them machine learning will fix their problems is a recipe for a nightmarish future.

Security and safety both share the property that defense-in-depth (i.e. checking at every level) is needed, and a robust and rigorous language is a key piece of the puzzle.

Again, those systems weren't

Again, those systems weren't heavily trained. They were fairly symbolic systems written the old-fashioned way by people. They weren't interfacing with opaque models trained for hours and days on a huge corpus. The conversion between centimeters and inches simply doesn't exist or even make sense in those models, unlike in the Ariane 5 disaster. So what is the C++ code doing? I'd assume it is very much like speech recognition, where signals are processed by models, and the code simply routes them (possibly on GPUs) and does stochastic safety checks. A safe language doesn't really provide much help at that point.

Writing these systems by hand in any language, even a safe one, just means they don't get written at all. The Haskell self driving car is a non starter. There is so far no connection at all between ML and PL theories that would change the situation.

Ariane 5

IIRC, Ariane 5 was a numeric overflow problem, caused by reusing Ariane 4 flight software on a new platform that had much higher accelerations. The metric/imperial conversion problem you're thinking of was Mars Climate Orbiter. For the record, the problem there was pounds vs newtons (force commands to thrusters) rather than inches vs centimeters.

Despite some claims to the contrary (from, for example, Bertrand Meyer), it's not clear to me that either problem would have been prevented by using a "safe" language. Both problems were the result of encoding incorrect assumptions about the world, or about what the software was interfacing to, into the software, rather than a failure of the program logic. Although I suppose that one could argue that a "safe" language would have forced the developers to be more explicit about their assumptions, which might have helped identify the problem before it resulted in a failure.

yes

that is the argument. :-)

Yes, but

Yeah, but it seems a tenuous argument.

Just because the Ariane 4 code included a specified range for accelerations, or an assertion that it couldn't exceed some value, would the developers have necessarily thought to check it? I'm unconvinced, since they didn't think to check that their calculations would stay within valid numerical limits. The fundamental problem there wasn't the code, it was that someone didn't think about the requirements the code was originally meant to meet, and how those differed from the new requirements.

The same goes for the MCO unit conversion. My understanding is that it involved a data transfer between two separate applications. If anything, it was a protocol design failure (the protocol or file format didn't include a units specification), rather than anything that would be caught by better type-checking.

Don't get me wrong, I'm all for safe languages and better type-checking. Those things certainly do catch some kinds of errors. I'm just skeptical that they would have helped with the Ariane or MCO failures.

The things that need to be

The things that need to be checked in a safety critical control system go way beyond what type checking can offer. You need lots of tests, you need code review, you need audits on every change, you might even use formal verification. Once you get to that point where it is going to be very expensive anyways, what is the additional value add of a better type checker?

Safe languages fill a gap between systems that need little safety and systems that need extreme safety at any cost. They actually make a productivity argument (lots of safety for free via the type checker; you can be less careful and the compiler will save you) that doesn't always make sense (you need to be careful anyway, you can't rely on the compiler, and you can't let the compiler lull you into a false sense of security).

One of the problems we have in the programming community is that we often see programming as one monolithic activity. But there are many things called "programming" that involve drastically different processes and outcomes. They aren't really the same thing. Programming a robot/autonomous system, for example, is really a different kettle of fish, especially these days where much of the work is offloaded to ML.

The things that need to be

The things that need to be checked in a safety critical control system go way beyond what type checking can offer. You need lots of tests, you need code review, you need audits on every change, you might even use formal verification. Once you get to that point where it is going to be very expensive anyways, what is the additional value add of a better type checker?

This statement is not logical, since by the same line of reasoning almost any automatic check has little "additional value". Why not disable them all? Why even have a parser or basic lexical checks?

The statement is also false on its face, since you mention formal verification, which is vastly improved by having even minimal type safety, and generally, the better the type safety, the easier the formal verification.

The additional value is

The additional value is close to zero when everything has to be audited closely by hand, or you have to use expensive external techniques anyways. Sure you could have them, but the extra safety you get with Haskell vs. C++ is really in the noise at that point, so why pay the cost? Heck, it wasn't too long ago that even programming in assembly made sense (given that compilers couldn't be trusted anyways).

Fighter jets don't have seat belts, they have full safety harnesses and ejection seats. The value of having a seat belt is significant when you have nothing else, but when you have other things...seat belts are a cheap solution that give you a certain level of safety, but they are not additive with higher forms of safety. You don't combine low grade solutions with high grade solutions, because even the low grade solutions entail risk (abstraction...).

Fighter jets don't have

Fighter jets don't have seat belts, they have full safety harnesses and ejection seats. The value of having a seat belt is significant when you have nothing else, but when you have other things...

We should probably avoid reasoning by analogy, since it can lead us astray. And here's why. You seem to be saying that a strong type system is like a safety belt, and further that it is somehow superfluous when you have a full safety harness. Well obviously a safety harness is an upgraded, stronger, redundant, multi-point safety belt. So it sounds like you're actually agreeing with me: we should have upgraded, stronger, redundant, multi-point type systems.

On the other hand you might be saying we don't need seat belts and should use airbags. This is probably why we should avoid analogies.

Fine. We have not only

Fine. We have not only upgraded to a better solution, but we have avoided the dangers of a lesser solution. I think it is pretty obvious why safety critical software isn't written in, say, Haskell, or any other language that relies on heavy abstractions, for that matter. Haskell provides safety for a relatively cheap price, but sometimes you need expensive safety instead.

Rust is a good case in point and is also quite strange: on the one hand, it eschews automatic memory management, an abstraction that its target applications can't afford. On the other hand, it goes all in on lambdas and functional indirection...

I think it is pretty obvious

I think it is pretty obvious why safety critical software isn't written in, say, Haskell, or any other language that relies on heavy abstractions, for that matter.

But safety-critical software is written in Haskell, for example via the Copilot DSL. It's strange that you attribute Haskell's absence from these domains to abstraction, when it's clearly simply inertia. Otherwise Ada would be everywhere in that domain.

Abstraction is a double edge

Abstraction is a double-edged sword: it makes programming more flexible and powerful, but it has a complexity cost when analyzing a system. 99% of the time, we say that cost is reasonable, but sometimes...it's not. So you don't really want a DSL that lets you manipulate abstract expression trees that will eventually be translated into an efficient binary! So you don't use indirection, you don't use lambdas, you don't use auto memory management, you definitely don't do meta-programming! It's bad enough to interface with an opaque model, but at least that is necessary; all the PL features aren't.

Really, you just want C, but some restricted style of C++ is quite similar. Even the extra features Ada gives you don't make enough sense to deviate.

All you need is C, C is all you need

Really, you just want C, but some restricted style of C++ is quite similar.

This is a strong claim, like your earlier one about the supposed irrelevance of PL research w.r.t. safety-critical systems. I believe that both of them are wrong and out of touch with current industrial practices.

There is a long and successful research tradition in PL design for critical control systems. This tradition has led to advances like the fly-by-wire system for the Airbus A380, which is written in the functional synchronous language SCADE. Such a system is not "symbolic" but consists of control laws nested within state machines. The SCADE compiler guarantees that programs are free of dynamic memory allocation, recursion, and unbounded loops. Such guarantees are critical for worst-case execution time analysis. They are also a prerequisite before deeper checks can be performed, as Ben remarked. I do not see how changes in the design of control laws, e.g. the use of machine learning, would drastically alter the situation.
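
For readers who haven't seen synchronous-language output, the generated code has roughly this shape (an illustrative sketch of mine, not SCADE output): all state in one statically sized record and a step function called once per tick, with no allocation, recursion, or unbounded loops.

    // A controller as a `step` function over a fixed-size state record, called
    // once per tick; worst-case execution time is therefore easy to bound.
    #[derive(Default)]
    struct PiState {
        integral: f32,
    }

    fn step(state: &mut PiState, setpoint: f32, measured: f32, dt: f32) -> f32 {
        const KP: f32 = 0.8; // proportional gain (arbitrary numbers for the sketch)
        const KI: f32 = 0.2; // integral gain
        let err = setpoint - measured;
        state.integral += err * dt;
        KP * err + KI * state.integral
    }

    fn main() {
        let mut st = PiState::default();
        // One tick; a real system calls this at a fixed rate from a scheduler.
        let command = step(&mut st, 1.0, 0.9, 0.01);
        println!("{command}");
    }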

Modern safety-critical systems are complex beasts that must perform a variety of tasks while obeying tight constraints. The development methodology currently used in planes gives good results but is very expensive; applied PL research aims at lowering costs without compromising safety. This goal has already been achieved to a certain extent.

Hey, I'm just trying to

Hey, I'm just trying to explain why these systems can be written in C++ at all. I'm not tacking universals onto any of this; each project is different. But when people say "self driving car must be unsafe because it is written in C++" they are really just talking out of their ass. In reality, a variety of tools and techniques are brought to bear on the problem, and not every safe language or feature is safe in every context.

Domain trumps language in many specialized cases; just because we happen to look at specific domains sometimes doesn't mean we have everything covered. And someone who wanted to apply PL to self-driving cars would have to do so directly; generalizations from other domains won't work. Saying "it is cultural" is just lazy thinking.

The SCADE compiler guarantees that programs are free of dynamic memory allocation, recursion, and unbounded loops.

But if we aren't allocating memory, recursing, or looping, do we still benefit from this tool? Say the logic is all event driven, signal processing code. What then? Write another tool? And what safety properties are we checking for? We really care about "car doesn't kill kid", not an unsafe memory access or even a race condition, which there are many ways to avoid or even tolerate.

No solution in PL.

There is no solution for "car doesn't kill kid", because we cannot define kid, except by machine learning, which is probabilistic in nature. The best we can ever get is "car is unlikely to kill kid", and this can only be achieved by testing, i.e. putting a reasonable kid analogue in front of cars in as many contexts as practical and observing the results. Maths and PL cannot help with this one (except traditional stats to give a confidence value to the observed result).

Right. Incidentally, a lot

Right. Incidentally, a lot of autonomous robotic systems are just based on reflex/action-style architectures. You could build a "safe" DSL around this, but it doesn't really address any of their actual pain points, where they spend most of their time. The people who build these just aren't having general PL problems, or have moved past them. That is the "market", I guess.

But if we aren't allocating

But if we aren't allocating memory, recursing, or looping, do we still benefit from this tool? Say the logic is all event-driven, signal-processing code.

The programs written in SCADE mostly fit this description, depending on what you mean by "event-driven". But it is convenient to describe control software using recursive definitions and functional programming constructs. The compiler's job is to translate these programs to static-memory C code, or to reject them if that turns out to be impossible.
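
To illustrate the translation (a hedged, invented example, not actual SCADE output): a Lustre/SCADE-style stream equation written "functionally" compiles down to a reset/step pair over statically allocated state, with no heap and no recursion.

    // Hedged illustration; the equation and names are invented.
    // A stream equation like
    //     y = 0.0 -> pre(y) + x;   -- "y is 0 initially, then previous y plus x"
    // becomes a step function over a static memory cell:
    struct RunningSum {
        float pre_y = 0.0f;          // the "pre y" memory cell

        void reset() { pre_y = 0.0f; }

        float step(float x) {        // one synchronous tick
            float y = pre_y + x;
            pre_y = y;
            return y;
        }
    };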

We really care about "car doesn't kill kid", not an unsafe memory access or even a race condition, both of which there are many ways to avoid or even tolerate.

I can assure you that airplane manufacturers and the FAA care a lot about unsafe memory accesses and race conditions.

They care about the plane

They care about the plane not crashing, actually. Everything else is a detail in service of that cause, not an end in itself. The FAA doesn't mandate how your software is made reliable.

See DO-178/ED-12

Aviation authorities do mandate a lot of the development process for software. The relevant standards for avionics are given in DO-178/ED-12 (the name changes depending on whether you are in America or Europe, but the document is the same, since it was jointly developed), and it specifies acceptable processes for the whole software lifecycle -- everything from design, to testing, to change management.

One of the things these standards care a lot about is traceability: in general, it is not permitted to use optimizing compilers because they reduce traceability from source to machine code, and code review generally has to be performed on the generated assembly code. (And it has to be performed by a completely separate team of people from the programming team, to boot.)

There's apparently considerable interest in using the CompCert C compiler in avionics software, primarily because it would permit the code review team to review C source code rather than machine code (and, to a lesser extent, because it would give them an optimizing compiler usable in a safety-critical context). However, there's a huge process for "qualifying" tools for use in avionics software, and apparently machine-checked correctness proofs are new enough to the avionics community that they made the process for updating the process pretty complicated.

The CompCert people wrote a neat paper about this: Towards Formally Verified Optimizing Compilation in Flight Control Software.

That is in line with what

That is in line with what I've heard. Abstraction layers aren't very helpful when everything will be verified by hand...so a non-optimizing compiler, and a language that doesn't need to be compiled or executed in any fancy way.

Self-driving cars are in another category from (commercial) aviation: you can't check an opaque learned model by hand, nor could you ever hope to write one by hand! But this isn't very related to PL.

Abstraction layers aren't

Abstraction layers aren't very helpful when everything will be verified by hand...so a non-optimizing compiler, and a language that doesn't need to be compiled or executed in any fancy way.

It depends on what you call fancy. The certification authorities have "qualified" some compilers at certain optimization levels; examples include custom variants of GCC at -O0 (or so I've heard) and the SCADE 6 compiler. As neelk said, such compilers must be able to relate the source and target code to ease code review. But, if I'm not mistaken, the fact that your tool is qualified means that you do not have to certify *all* of the generated code by hand. To be honest, given the amount of assembly code in the final binary, I'm not sure that would be possible at all.

I've been talking to autonomous robotics people...

...and there are a lot of places where PL techniques are both needed and the need for them is felt.

Basically, there is no "guide a robot" machine learning model. Instead, you use machine learning to build dozens or hundreds of smaller models, each of which is learned to identify some simple feature. Then, you use the models to drive small state machines, each of which updates its state in response to signals and emits signals of its own.

These little state machines are all wired up together concurrent-objects style using callbacks and big message buses. Basically, any GUI programmer would both recognize this architecture and shudder in horror, because it's got all the usual concurrency problems of that model.
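
For a flavor of that architecture, here is a hedged, single-threaded C++ sketch (all names and thresholds invented): tiny state machines subscribe to a message bus via callbacks and publish signals of their own. The real systems add concurrency on top, which is exactly where the usual hazards appear.

    // Hedged sketch; names and thresholds are invented.
    #include <functional>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Bus {
        std::map<std::string, std::vector<std::function<void(float)>>> subs;
        void subscribe(const std::string& topic, std::function<void(float)> cb) {
            subs[topic].push_back(std::move(cb));
        }
        void publish(const std::string& topic, float value) {
            for (auto& cb : subs[topic]) cb(value);
        }
    };

    // A small state machine driven by a learned detector's confidence signal.
    struct BrakeController {
        enum class State { Cruise, Braking } state = State::Cruise;
        Bus& bus;
        explicit BrakeController(Bus& b) : bus(b) {
            bus.subscribe("pedestrian_confidence", [this](float p) { on_confidence(p); });
        }
        void on_confidence(float p) {
            if (state == State::Cruise && p > 0.8f) {
                state = State::Braking;
                bus.publish("brake_command", 1.0f);
            } else if (state == State::Braking && p < 0.2f) {
                state = State::Cruise;
                bus.publish("brake_command", 0.0f);
            }
        }
    };

    int main() {
        Bus bus;
        BrakeController brake(bus);
        bus.publish("pedestrian_confidence", 0.93f);  // a detector model would emit this
    }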

Fortunately, a huge amount of work on process calculi, session types, and objects/typestate can be applied almost immediately. This is because from the point of view of program design, machine learning doesn't actually change the picture any more than using linear regression to pick parameters does.

...and there are a lot of

...and there are a lot of places where PL techniques are both needed and the need for them is felt.

It is up to us to bite the bullet and work in specific domains, however. How many PL researchers do you know who are working in the autonomous systems domain?

Then, you use the models to drive small state machines, each of which updates its state in response to signals and emits signals of its own.

Right, the code integrates signals processed by multiple models at run-time. The models are of course stateless (well, unless you are using an RNN, and even then state management is straightforward so far).

Subsumption architectures work really well in this context. They don't resemble what GUI programmers are used to at all, actually.
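
A hedged sketch of what I mean, as a simplified rendering of subsumption-style arbitration (sensors, behaviors, and numbers invented): behaviors are stacked by priority, and a higher layer, when it has something to say, subsumes the output of the layers below it -- no callbacks or message bus in sight.

    // Hedged sketch; sensors, behaviors, and numbers are invented.
    #include <functional>
    #include <optional>
    #include <vector>

    struct Sensors { float obstacle_distance_m; float target_heading; };
    struct Command { float speed; float steering; };

    using Behavior = std::function<std::optional<Command>(const Sensors&)>;

    // Highest-priority active layer wins; lower layers are subsumed.
    Command arbitrate(const std::vector<Behavior>& layers_high_to_low, const Sensors& s) {
        for (const auto& layer : layers_high_to_low) {
            if (auto cmd = layer(s)) return *cmd;
        }
        return Command{0.0f, 0.0f};   // default: stop
    }

    int main() {
        std::vector<Behavior> layers = {
            // Avoid obstacles (highest priority).
            [](const Sensors& s) -> std::optional<Command> {
                if (s.obstacle_distance_m < 1.0f) return Command{0.0f, 0.5f};
                return std::nullopt;
            },
            // Head toward the target (lowest priority).
            [](const Sensors& s) -> std::optional<Command> {
                return Command{1.0f, s.target_heading};
            },
        };
        arbitrate(layers, Sensors{5.0f, 0.1f});
    }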

Fortunately, a huge amount of work on process calculi, session types, and objects/typestate can be applied almost immediately.

On the code part, maybe. Not from a holistic systems perspective. Or if you want to claim otherwise, please provide a reference to some work in this area.

This is because from the point of view of program design, machine learning doesn't actually change the picture any more than using linear regression to pick parameters does.

For the parts that are still code, PL applies (given PL can accommodate the architecture). But if more of the functionality has been moved into models, PL applies less.

hand verification all the way down

So somebody should be verifying going *up* the stack, so that it can get taller, so we can write in verified Haskell, duh. :-)

Value ranges

I think a system with solid support for static analysis of the range a value may belong to, computation of ranges for derived values, and help for people arguing about the correctness of range assumptions, could possibly have helped.

Of course, that is not what the bulk of the work on richer type systems is looking at, because doing program analyses of this kind requires fine-grained numerical reasoning, and not everybody likes to work with that. (It's even more delicate and taxing when you work with floating-point values instead of arbitrary-precision reals.) There is, however, serious work in these directions as well.

For example, Ada supports interval ranges for integers, and various kinds of program analysis tools have been added to turn range assumptions from dynamic checks into static validation. See e.g. this industrial work on integrating the Alt-Ergo SMT solver inside a SPARK/Ada toolchain.
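
You can approximate a pale shadow of this even in C++ with types that carry their range, so that ranges of derived values are computed at compile time. A hedged sketch (not SPARK, and far weaker than what an SMT-backed toolchain gives you):

    // Hedged sketch only; real Ada/SPARK range types and proofs are far richer.
    #include <cassert>

    template <long Lo, long Hi>
    struct Ranged {
        static_assert(Lo <= Hi, "empty range");
        long v;
        explicit Ranged(long x) : v(x) { assert(Lo <= x && x <= Hi); }  // checked at the boundary
    };

    // The range of a sum is the sum of the ranges -- computed statically.
    template <long L1, long H1, long L2, long H2>
    Ranged<L1 + L2, H1 + H2> operator+(Ranged<L1, H1> a, Ranged<L2, H2> b) {
        return Ranged<L1 + L2, H1 + H2>(a.v + b.v);
    }

    int main() {
        Ranged<0, 100> throttle(42);
        Ranged<-10, 10> trim(3);
        auto cmd = throttle + trim;   // type is Ranged<-10, 110>
        (void)cmd;                    // narrowing back down would require an explicit, checked step
    }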

Value ranges need to be defined to be useful

I think a system with solid support for static analysis of the range a value may belong to, computation of ranges for derived values, and help for people arguing about the correctness of range assumptions, could possibly have helped.

For Ariane, it seems unlikely. Note that the Ariane flight software was written in Ada, and that some (but not all) variables did have some kind of range protection in place. Quoting from the accident report:
n) During design of the software of the inertial reference system used for Ariane 4 and Ariane 5, a decision was taken that it was not necessary to protect the inertial system computer from being made inoperative by an excessive value of the variable related to the horizontal velocity, a protection which was provided for several other variables of the alignment software. When taking this design decision, it was not analysed or fully understood which values this particular variable might assume when the alignment software was allowed to operate after lift-off.

o) In Ariane 4 flights using the same type of inertial reference system there has been no such failure because the trajectory during the first 40 seconds of flight is such that the particular variable related to horizontal velocity cannot reach, with an adequate operational margin, a value beyond the limit present in the software.

p) Ariane 5 has a high initial acceleration and a trajectory which leads to a build-up of horizontal velocity which is five times more rapid than for Ariane 4. The higher horizontal velocity of Ariane 5 generated, within the 40-second timeframe, the excessive value which caused the inertial system computers to cease operation.
...
r) The specification of the inertial reference system and the tests performed at equipment level did not specifically include the Ariane 5 trajectory data. Consequently the realignment function was not tested under simulated Ariane 5 flight conditions, and the design error was not discovered.

All of which sounds more like a failure of requirements and testing activities than anything that could have been caught by better static analysis (it seems to me that any static analysis that would have caught the problem would have required knowledge of the Ariane 5 flight conditions, which were neglected for all other parts of the design process).

The MCO failure seems even less likely to have been caught by static analysis, since it essentially seems to have involved serialization/deserialization of numeric values without the inclusion of units. The numeric values weren't out of valid ranges; they were just wrong. To know that they were wrong you'd need to know what they were supposed to be, or know that they represented different units than you thought. That requires either (a) end-to-end testing using known trajectories, or (b) a protocol that incorporates units. Apparently neither of those things was done. I'm struggling to see how static analysis or a better type system would have helped. Perhaps some other kind of requirements-level analysis might have helped ("Danger Will Robinson! You've specified incompatible units (or no units)"). But the problem seems to lie in software engineering practices above the level of code.
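
For what it's worth, "a protocol that incorporates units" is cheap to sketch at the type level. A hedged C++ illustration (the types and interface are invented), where the mismatch simply doesn't compile:

    // Hedged illustration; the types and interface are invented.
    struct NewtonSeconds     { double value; };   // SI impulse
    struct PoundForceSeconds { double value; };   // imperial impulse

    NewtonSeconds to_si(PoundForceSeconds i) {
        return NewtonSeconds{i.value * 4.44822};  // the conversion lives in exactly one place
    }

    void record_thruster_impulse(NewtonSeconds) {}  // interface only accepts SI impulse

    int main() {
        PoundForceSeconds raw{1.2};
        // record_thruster_impulse(raw);            // would not compile: wrong unit type
        record_thruster_impulse(to_si(raw));
    }

But, as noted above, the failure sat in the process around the code, not in anything the code itself could have expressed after the fact.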

Hm

Thanks for these quotes; they are very interesting, and looking at the details helps.

I would argue that all failures are caused by wrong "software engineering practices" or above (management and decision-making; see the Feynman report on estimating the reliability of the space shuttle), and can certainly be pinpointed to unsatisfactory "requirements and testing activities". Whenever we say "gosh, you failed *because* you did not use analysis tool X", we mean that X would have directly detected and pointed out the issue (obviating the need for any quality in requirements and testing activities). But that is the edge case (unfortunately very common due to what I see as cultural issues). In general, static verification is there to support the requirements and testing activities, not to replace them.

With more automated checking, you can afford to turn the description of the requirements and analysis process into a software artifact itself. Wherever you decide *not* to provide value bounds, you will have to place explicit markers -- "here we dynamically assume the correct bound" -- upon re-entry into the bound-checked parts of the system. And this should make you think and give you pause; you may still make this decision, but a computer-understandable trace will remain in the system to be inspected and re-evaluated later.
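
A minimal sketch of what such a marker could look like (the bound and names are invented): the assumption becomes a named, greppable artifact at the boundary rather than an undocumented design decision.

    // Hedged sketch; the bound and names are invented.
    #include <cassert>

    struct HorizontalVelocity { double mps; };   // range-checked from here on

    // ASSUMED BOUND: |v| <= 2000 m/s, justified by the trajectory analysis
    // (a reference to that analysis would go here). If the flight envelope
    // changes -- say, a new launcher -- this marker is what gets re-inspected.
    HorizontalVelocity assume_velocity_bound(double raw_mps) {
        assert(-2000.0 <= raw_mps && raw_mps <= 2000.0);   // dynamic check at re-entry
        return HorizontalVelocity{raw_mps};
    }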

Consider for example this part of the quote:

When taking this design decision, it was not analysed or fully understood which values this particular variable might assume when the alignment software was allowed to operate after lift-off.

This sounds a lot like a failure of tooling to me. Why was this not analysed and understood? I claim that better analysis and checking tools could have helped analyse the impact of this decision, and also lowered the cost of the safer alternative.

better tools

When taking this design decision, it was not analysed or fully understood which values this particular variable might assume when the alignment software was allowed to operate after lift-off.

This sounds a lot like a failure of tooling to me. Why was this not analysed and understood? I claim that better analysis and checking tools could have helped analyse the impact of this decision, and also lowered the cost of the safer alternative.

Perhaps. But in this case I suspect (although I do not know) that the "better analysis tools" would have been tools for performing flight trajectory simulations and exploring the envelope of possible trajectories, rather than anything analyzing code. I suppose that code analysis might have flagged the variable as unconstrained, which could have motivated additional trajectory work. It's hard to say without knowing more about the internal project decision-making process, which apparently even the accident investigators had trouble reconstructing.

Shaky assumptions

I'd assume it is very much like speech recognition, where signals are processed by models, and the code simply routes them (possibly on GPUs) and does stochastic safety checks. A safe language doesn't really provide much help at that point.

These conclusions rest on pure speculation. Reasoning based on random assumptions from people who aren't domain experts isn't going to get us very far.

The fact of the matter is that software will always fail miserably in ways that the tools and programmers hadn't foreseen. That is, if it doesn't fail first through completely foreseeable but blatant human error.

That is absolutely not an argument that completely foreseeable and preventable failure modes for which we already have robust and useful tools should be completely ignored. That's backward progress.

And I'll just repeat and rephrase again: safety critical systems should deploy defense-in-depth. Checks, verifications, practices, and processes at every level to ensure quality. The system will usually fail at the level with the least attention. Attention is hard work! Let's put our tools to good use and make sure they are paying attention even when we aren't.

And for the record I am not so comfortable trusting my life to a C++ programmer's skill at memory management. Given the track record and all.

Speech recognition has an

Speech recognition has an accuracy rate (say 97%) and is going to fail...it is best-effort. Likewise, self-driving cars don't accept any signal as being 100% true. Everything is fuzzy, so you make the safest possible decision given the information you have as well as your estimate of its quality.

And I'll just repeat and rephrase again: safety critical systems should deploy defense-in-depth. Checks, verifications, practices, and processes at every level to ensure quality.

You do that with more signals, redundant signals, redundant signal processing...not a full, traditional, well-typed symbolic encoding of a solution to the problem. Such encodings are too brittle anyway, even if they could work (which they can't).

And for the record I am not so comfortable trusting my life to a C++ programmer's skill at memory management. Given the track record and all.

These programs just don't resemble any that you are used to dealing with. A control system isn't going to involve a lot of dynamic allocation...it's a non-issue. We know PL, but PL supports such a narrow view of programming that our knowledge and experience is almost completely irrelevant.

C++ is a programming language

If programming languages are completely irrelevant, then why is C++, a programming language, being used at all?

I'm not saying "Why not stick to assembly?" Assembly language is a programming language too, I would tend to think.

They are programming

Those are programming languages, and those people are doing programming! It is just that our view of programming is not in sync with what they are actually doing. We generally study programming independent of the domain, but domain is very important.

We see "C++" and make automatic assumptions about its safety characteristics. C++ is unsafe, therefore programs written in C++ are unsafe! But that is of course fallacious reasoning.

Static allocation

Most of these programs will use static allocation, allocating fixed-size arrays at startup and never allocating or freeing memory after that. I believe I have talked about this usage pattern before, when discussing the irrelevance of GC to many problems. They will also use stack allocation, but because they use imperative looping, not recursion, the maximum stack depth is finite and bounded for the program. It's pretty easy to ensure the stack is big enough, and the network size determines the array sizes.
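
In code, the pattern is roughly this (a hedged sketch; the sizes and the "layer computation" are stand-ins):

    // Hedged sketch; the sizes and the layer computation are stand-ins.
    #include <array>
    #include <cstddef>

    constexpr std::size_t kMaxLayerWidth = 256;   // derived from the network size
    constexpr std::size_t kNumLayers = 8;

    struct Inference {
        // Fixed-size buffers, reused every frame; nothing allocated or freed after startup.
        std::array<float, kMaxLayerWidth> a{};
        std::array<float, kMaxLayerWidth> b{};

        void step(const std::array<float, kMaxLayerWidth>& input) {
            a = input;
            for (std::size_t layer = 0; layer < kNumLayers; ++layer) {   // bounded loop
                for (std::size_t i = 0; i < kMaxLayerWidth; ++i) {       // bounded loop
                    b[i] = 0.5f * a[i];    // stand-in for the real layer computation
                }
                a = b;                     // no heap traffic anywhere in the cycle
            }
        }
    };

    static Inference g_inference;   // one statically allocated instance for the program's lifetime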