HOPL III: Evolving a language in and for the real world: C++ 1991-2006

Yet another in the series of draft papers for HOPL-III. This one from Bjarne Stroustrup on Evolving a language in and for the real world: C++ 1991-2006. The paper starts the discussion at the point in time where his 1994 Design and Evolution book ended (which coincides with about the last time I used C++ on a professional basis - meaning I got some insight into what I missed out on for the last dozen years).

The talk outlines the history of the C++ programming language from the early days of its ISO standardization (1991), through the 1998 ISO standard, to the later stages of the C++0x revision of that standard (2007). The emphasis is on the ideals, constraints, programming techniques, and people that shaped the language, rather than the minutiae of language features. Among the major themes are the emergence of generic programming and the STL (the C++ standard library’s algorithms and containers).

Given the period of time covered, generics and the STL are the major highlights of the paper. There's lots of political discussion (the parts on Sun and Microsoft are mostly obvious, but it's amusing to see Bjarne speak out on the subjects). Much like the Design and Evolution book, this paper is worth a read by anyone interested in PL design, no matter their particular take on C++. Bjarne provides valuable insight into the forces that shape PLs, as well as constructive criticism.

Personally, I found the discussion of C++0x the most interesting, as it reveals the issues C++ is trying to overcome as well as the direction the language is headed. I've been tinkering with Boost of late, trying to figure out the FP facilities, but without much luck. Similar to C# 3.0 and Java 1.7, C++0x proposes lambdas and a limited form of type inference for variables. But that's a minor addendum; the paper makes clear that optional GC, concurrency, and more thorough libraries are the major aspects to be addressed.
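As a rough illustration, here's what the proposed lambdas and variable type inference look like, written in the syntax that was eventually standardized (a sketch only; the proposal details were still in flux at the time, and `count_even` is my own example name):

```cpp
#include <algorithm>
#include <vector>

// Count the even numbers in a vector using a lambda and type inference.
int count_even(const std::vector<int>& v) {
    auto is_even = [](int x) { return x % 2 == 0; };  // closure type inferred
    return static_cast<int>(std::count_if(v.begin(), v.end(), is_even));
}
```

The lambda replaces the hand-written function object that the STL algorithms previously required.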

Direct link to the paper.

Direct link to the paper.

Impact on Ada

Reading the paper from an Ada perspective is rather amusing, I can tell you that...

When Stroustrup discusses the impact C++ had on Ada (§9.3.2), he mentions the Booch Components port to Ada95, hardly a specific example of how the language was influenced. A more significant and clearer example is the container library added to Ada2005, which is modeled quite closely on the STL.

Obviously, the STL grew out of work Stepanov did with Ada, so we can live with that...

Also amusing

X is inherently inefficient.

X programming is too complicated to be used by “ordinary programmers”

You'd think X=functional programming... But no, these were objections to object oriented programming! (See §9.3.2)

Everything seems to depend

Everything seems to depend on era and perspective. In his memoir Patterns of Software, Richard Gabriel concedes that C++ programmers were cheaper and C++ programs were considered easier to maintain than Lisp programs - but this was around 1988, when structured programming still dominated but was slowly falling out of fashion.

Slanted to generics

I always find it interesting when PLs get re-purposed somewhere along the line. Reading the paper, one would think that Stroustrup frowns on using C++ for object oriented programming. Since C++ started life as little more than an OO extension to C, one would think he'd not de-emphasize those capabilities as much as he has. The fact that languages like Java do OOP more easily probably also has a lot to do with what gets attention - that is, any talk of OO in C++ just draws attention to other PLs.

From a PL enthusiast's standpoint, the OO facilities are fairly ho-hum; the main attraction is templates and the STL. Personally, I think that templates are both the best and the worst aspect of C++. They open up the language to many interesting possibilities, but they also make it almost unteachable to beginners (excepting those who enjoy the arcane).
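For a flavor of what makes templates attractive despite the teaching pain, here's the kind of generic code the STL style enables (a minimal sketch; `sort_descending` is my own illustrative name):

```cpp
#include <algorithm>
#include <functional>

// One generic algorithm, usable with any random-access container
// whose value type has an ordering - the style templates and the
// STL made mainstream.
template <typename Container>
void sort_descending(Container& c) {
    std::sort(c.begin(), c.end(),
              std::greater<typename Container::value_type>());
}
```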

I think his point was not so

I think his point was not so much to de-emphasize the OO features, but to make very clear that C++ is not an OOPL. It *supports* OOP, but it also supports other styles as well. This may be obvious to you and me, but if you've ever looked out into, say, the Internet, perhaps read a flamewar or two, there are a lot of misconceptions about C++ that it seems he is trying to address.

A broader point I took from the history of C++ is how the philosophy of not making sweeping "Everything is an X" statements paid off, at the expense of "purity", in what he described happening with generic programming.


Apologies if this is considered inflammatory, but it's something I'm honestly curious about.

That is, do people who know what they're talking about agree with my belief that C++ has some neat stuff but is ultimately just a mess?

I have found myself getting sucked in by Boost libraries or techniques where I see how cool it is to be able to do some technique from the functional programming world in C++, only to later realize what a colossal pain it is compared to doing it in a language that supports those techniques directly.

The incomplete C++ grammar in The C++ Programming Language is about 20 pages long. Is a language like this really "designed," or does it just happen? Is Stroustrup's talk about "real world" this and that just an excuse for the language's flaws? In this paper he himself says the language's complexity is "(just barely) manageable."

And now, with the next C++ standard apparently adding so many new features, are there really people who think this is a good idea? Quoting from Josuttis's "The C++ Standard Library": "The valarray classes were not designed very well. In fact, nobody tried to determine whether the final specification worked." It's recommended that people not use valarrays. And yet there are 35 useless pages in this book describing how they work, and some useless code in every C++ standard library implementing them. There are similar issues with auto_ptr (recommended not to use it), bitset, export (almost impossible to implement and of questionable value), vector<bool> (making it a special case was a mistake; it's recommended not to use it), and so on. And now they're at it again.

On a personal note, after writing code as a hobby for 6 years now and trying unsuccessfully to get paid to do it, I am now in an undergrad Computer Science program. I'm taking the Intro CS course now as a summer class. The class is on C++. We're about a third of the way through the course and so far have covered variables, assignment, arithmetic, and output streams. Lots of focus on stream manipulators. Good grief.

You're not alone

Many people, myself included, believe that C++ is basically an irredeemable mess. Whether a better language could have succeeded in C++'s place is a much more complicated question and one that has to do with a lot more than technical merit, and this is probably what Stroustrup means by "real world." (It's also a question that has launched plenty of flamewars, so I for one would like to avoid going in that direction.) Language adoption is complicated, and I for one don't blame anyone for the fact that C++ is a mess. Perhaps it couldn't have been otherwise. But the bottom line is that C++ is certainly a mess and you're not alone in feeling that way.

It certainly wouldn't have

It certainly wouldn't have been easy to produce a substantially better C, even if you got to start in the late 80s and put everything together how you wanted instead of getting the start C++ did. Remaining compatible with C will nail you every time, and it's a substantial part of how C++ rose to popularity. As if that wasn't enough, C++ tried to bring to the mainstream concepts that were still being refined in research. All things considered, the effort alone is respectable, even if I'm glad I don't code in it any more!

Without recapitulating every flamewar of the past 17 years...

...let me just say that the ability to use anything I deem necessary from Boost is a precondition to my being willing to work in C++ again. This carries with it some implications, e.g. being able to use a compiler that Boost supports.

From a social perspective...

...Boost seems to have cropped up as a result of the failure of the vendors/committee to agree on a standard library beyond the STL (which IIUC only defines 12 containers). Stroustrup's main concern with C++0x is to build up a much larger library - though it'd be way too ambitious to try to match J2EE. Although he mentions Boost in a couple of places, I'd think the committee really ought to embrace Boost.

But then the politics of C++ are probably more intense than those of practically any other language in widespread use these days. With MS, Sun, IBM, and a whole host of other vendors involved that directly compete against each other, there's a lack of cohesiveness. Every vendor wants their library to win out. As the paper points out, the problem isn't having a GUI library but rather having 25 of them.

Beyond whether we like or dislike a particular PL, it's always interesting to see the forces (technical and social) that shape them.

It's not C++, it's the failure of functional languages

I guess you could turn it around and ask: given 20 years of a "crappy" language, how come the industry is still using it to such a large degree? I think the main reason is that there has not been a strong alternative systems-level language. C++ enables us to program close to the hardware, and this is something that language designers, especially those from academia, fail to address.

Take for instance OCaml, which I believe is a great language. However, if you use OCaml you are now (pretty much) forced to use a garbage collector - something that you don't have much control over. For systems work, you have to have control.

The main reason for C++'s success is that it has a fairly one-to-one mapping of language constructs to the underlying assembly language and the memory it uses (there are some exceptions, of course). In C++, I can declare the exact layout of my structs (records), down to the bit level - you can't do that in OCaml. I can use a 'vector float', which pretty much corresponds to a SIMD vector register. When I write 'int i;' and then later 'i++', I know that those statements will be compiled to 'addi i, i, 1'. I can even put in a little inline assembly, if I want. With templates, I know that the resulting code will not be wrapped in some sort of mysterious structure; instead it will be inlined directly into my source.
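The bit-level layout control mentioned above looks like this with bit-fields (a sketch; `Header` is a hypothetical packet-header fragment, and bit-field ordering and padding are strictly implementation-defined, though mainstream ABIs agree on the sizes here):

```cpp
#include <cstdint>

// A hypothetical IPv4-style header fragment with bit-level layout control.
struct Header {
    std::uint8_t  version : 4;  // 4-bit field (bit order is implementation-defined)
    std::uint8_t  ihl     : 4;  // 4-bit field, packed into the same byte
    std::uint8_t  tos;          // one full byte
    std::uint16_t length;       // two bytes, naturally aligned
};
```

On common compilers this struct occupies exactly four bytes, matching the wire format byte for byte.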

Of course, I still hate C++. It is just that there are no alternatives.



But you might find Felix or BitC interesting.

Not done yet

Both Felix and BitC are interesting, but they are both still very experimental. I am following their developments with interest.

languages with templates

What other languages have a metaprogramming feature like C++'s templates, and can support something like the STL?

I believe you can go a long

I believe you can go a long way implementing something STL-like with a sufficiently powerful ML-style module system. Personally I want my metaprogramming features as much unlike C++'s templates as possible!

It's Ada. Ada 2005!

It's Ada. Ada 2005!
I personally think that everything written in C++ should be rewritten in Ada 2005. Kernels, kernel modules, applications — everything.

This language revision took two years to complete. Ada 2005 is the thing you probably want when talking about C++0x. I can't see any advantage in repeating this feat for C++. Instead, switch to Ada. Neither experimental nor half-baked. The most trusted name in software.

I've looked at the proposals for C++0x. Ada 2005 will remain ahead of C++0x: because of the ballast of compatibility, features Ada already has are postponed until C++1x. That's too long to wait for me. Instead, I prefer to bind legacy code and use it from Ada. Now.


I may be ignorant, but I never really got why it's so important to know exactly what code is generated by the compiler. Is it just performance? Is it interoperability with other low-level stuff, or what?

That is not an ignorant

That is not an ignorant question; it is a very good one! Yes, performance is one of the issues - perhaps the biggest. But there's more: control over hardware resources is another, and memory management yet another. C/C++ does well in games and in the embedded world, where resources are scarce and performance is critical.

However, what is interesting is that C/C++ also does remarkably well in other contexts, such as desktop GUIs (Gnome/KDE) and the like. Why is this?

I think the reason is that when you are working on a big system, there are still parts of the system that need to be high performance and/or talk to low-level systems. Now, you could argue that you should split the system into easily identifiable system components (written in C/C++/assembly) and that the higher-level "stuff" should be written in another, higher-level language.

The problem is that this is not so easy in practice. When you do this, you create a barrier - and communication through this barrier is a major hassle, and in some cases infeasible. An example is access to SIMD registers. Another is simple things like CPU performance counters. If these things need to go through an FFI (foreign function interface), you lose too much. Another problem is that it just takes a lot of time and effort to maintain an FFI.

So, the cost-effective solution is to use C++ where it isn't really that suited. Pragmatism wins.

Stretching C++

The paper emphasizes that C++ is geared towards systems programming, but explains that this leads to the use of C++ in areas where there are much better languages (though the author would probably not agree with my assessment of the particular PLs or where the lines are drawn).

If it was easy and cheap to switch back and forth among applications languages and general-purpose languages, we’d have more of a choice.

Personally, I prefer languages that play well together (e.g. Lua and C). But the bottom line is that the market has a bias towards the giant languages that can do everything. Anyhow, the paper should not be seen as either an affirmation or rejection of C++ on a technical basis. It explains some of the forces that shaped C++, acknowledging many problems that were picked up along the way.

Ironically I never used C++

Ironically, I never used C++ for systems-level programming, and I don't know anyone (due to the particular circumstances of my work) who used it for such. Lots of application frameworks instead, which cried more for flexibility than for speed and could have been written in Ruby or Tcl.

About pragmatism in the so-called "real world": first the platform and language are fixed by management. This decision precedes any other. Then we might go to meetings discussing the problems we try to solve.

It's all about the cost

But the bottom line is that the market has a bias towards the giant languages that can do everything.

Using many different languages in the same project increases the cost exponentially with the number of languages being used. From a software house's perspective, it's better to just focus on one language (per project) that can do most of the stuff your customers want, so you can quickly get productive.

Of course, I still hate C++.

Of course, I still hate C++. It is just that there are no alternatives.

Sure there are. All of the above is available in Cyclone, and it's a strongly typed, safe language, unlike C++. Cyclone is the only safe, expressive language I'm currently aware of that permits manual memory management (expressive in the sense of algebraic data types, existential packages, etc., unlike CCured). I'd love to hear about any others!


Cyclone is about 2 times slower than C/C++.

I know Cyclone has some neat

I know Cyclone has some neat things C++ doesn't have, but wouldn't Cyclone be more of a safe alternative to C? It's missing the ++.

Unless I missed a feature when I was reading about Cyclone a while ago, I don't think you can say it permits "manual" memory management. It does not require garbage collection where the type system guarantees there will not be a dangling pointer. Granted, it has a lot of neat ways to do this, but if your goal is to do manual memory management, the gap between what is actually safe and what a type-checker can modularly prove safe is going to be a problem. The overall question I had about Cyclone was: what programs have requirements such that they can afford GC, "fat" pointers, etc, but they cannot just use a higher-level language?

Unless I missed a feature

Unless I missed a feature when I was reading about Cyclone a while ago, I don't think you can say it permits "manual" memory management.

I know what you mean, but the front page says it supports manual memory management, so it's clearly considered to be a feature. :-)

The overall question I had about Cyclone was: what programs have requirements such that they can afford GC, "fat" pointers, etc, but they cannot just use a higher-level language?

Control freaks perhaps? Seriously though, I believe there are some low-level issues which high-level languages still have to escape to C to handle. GC itself, for example.

There are lots of applications that need control.

Scientific & military applications are two domains where the code needs to be as fast as possible, loops need accurate timings, etc. But in those types of applications, there are sub-parts that need garbage collection. Those parts are the ones that handle databases, the GUI, and other 'trivial' stuff.

Here is an example from my line of work: a simulator for a training system. The program consists of a soft real-time part where a simulation loop is executed every 100 ms. The simulation exchanges messages over TCP and UDP. The memory mapping capabilities of C++ are great in this area: byte buffers are converted to message structures by a simple pointer cast; the program does not need to spend resources (processing time and memory) for allocation of message objects and transfer of byte buffer contents to those messages. Message structures are allocated on the stack. This part does need every bit of control possible in order to be as fast and accurate as possible.
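The cast-based decoding described above might look like this (a sketch with a hypothetical `PositionMsg`; production code must also account for alignment, byte order, and strict-aliasing rules):

```cpp
#include <cstdint>

// Hypothetical wire message with a packed, exact layout.
#pragma pack(push, 1)
struct PositionMsg {
    std::uint16_t id;
    float x, y, z;
};
#pragma pack(pop)

// Reinterpret a received byte buffer as a message: no allocation, no copy,
// no per-message transfer of buffer contents.
inline const PositionMsg* decode(const std::uint8_t* buffer) {
    return reinterpret_cast<const PositionMsg*>(buffer);
}
```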

The application also contains a huge graphical environment for editing scenarios. In this environment, there are many windows simultaneously open where the user can preview and edit various aspects of the simulation: the terrain, the aerial and ground vehicles, armament, etc. This part of the application communicates with a database. This part does not need accuracy, but it contains a very complex object model, a huge object-to-relational mapping library, many graphical editing functions (plotting, drag-n-drop, etc) and it could hugely benefit from garbage collection.

The two parts could not easily exist as separate applications, because the simulator shares many functionalities with the editor. The operator can affect the simulator in real time through a graphical GUI, in a way similar to the editing process. And the editing part needs the simulation in order to accurately show trajectories and discover events while preparing scenarios. There are quite a lot of modules shared between the two parts.

The only logical choice for this type of project? C++. Java and other GC'd languages are not appropriate, because if the collector kicks in in the middle of the 100 ms loop, the simulation will probably freeze, with 'catastrophic' impact on the co-operating devices (cameras, radars, etc). But if C++ had optional GC, the development of the editing part would have taken less than 1/4 of the time it actually took.

Why not HLL and FFI?

I wonder, what in this application prevented doing one part in, say, Cyclone, or other low-level language, and the other in, say, Haskell?

The cultural pessimist in me thinks...

the gap between what is actually safe and what a type-checker can modularly prove safe is going to be a problem.

Actually, I think the real problem is a slightly different one: the gap between what is actually safe and what programmers believe is safe. AFAICS, practically no C code out there in the wild is truly memory safe, no matter what developers think. So you won't make a lot of people happy with a language that enforces such safety.

Then we have a problem,

Then we have a problem, since most "safe" languages have garbage collectors written in C/C++ and are unsafe by proxy.

Indeed we have

I can say for sure that the VM I'm most familiar with (Alice ML) has no shortage of potential memory corruptions. Every now and then we fix one. Most of them have to do with memory management.

But at least when your ML program crashes you know it's not your fault...

Maybe someday...

...we'll be able to extract a garbage collector from something like this to a language suitable for use in developing a low-level runtime system, e.g. BitC.

Actually, C++ fails in this regard as well

What is the bit-layout of a class? What memory is touched in calling a virtual function? This is a non-trivial question: if you touch swappable memory from interrupt context (even a virtual function table), you're going to die horribly.

This is what C excels at. Or that subset of C++ which looks amazingly like C. The higher-level structures of C++ violate these concepts just like OCaml does.

And so what? If you're banging directly on hardware, use C. If you're not banging directly on hardware, why do you care what the exact bit layout of a datastructure is, so long as it works?

This, I think, is the greatest sin of C++: trying to be all things to all people. Which is impossible: there are no golden hammers.

If you are writing ISRs

then you had better make sure that any data structures you plan to use, as well as your code, are in non-swappable memory. The ways you ensure that are well beyond the language spec, and highly platform-specific. C++ vtables are merely one more thing that ought not be swappable if used from interrupt context--and since they generally go into the same place as other immutable data (such as jumptables for complex switch statements, string literals, file-scope or function-static variables which are declared const and not optimized away, etc.)--how does C++ meaningfully differ from C in this regard?

C doesn't have vtables

With the exception of the jumptables for complex switch statements (which are generally stored inline in the code), all the other examples you list are explicitly declared by the programmer. How you declare them non-swappable is non-standard and changes from compiler to compiler, but the point is that there is something which either must not be touched or must be declared non-swappable. Last time I looked, you weren't even guaranteed that a per-class vtable exists; a C++ compiler is free to use other implementations if it desires. How virtual functions behave is specified, but not how they are implemented.

The solution, of course, is simply not to use virtual functions in these situations. And don't use classes when you need to know the layout of a structure. And don't use exceptions, templates, RTTI, or other questionable constructs.

In other words, use that subset of C++ which just looks amazingly like C.

If you are really writing ISRs

then you ought to know how the constructs of your language map to the machine in question. Likewise for other on-the-metal systems programming. On the targets I've done systems programming for (in C++, including the writing of ISRs--ISRs which do invoke virtual functions, among other things), vtables go in the read-only data section, just like the other things I mentioned. On what target that you're aware of are vtables treated differently from other read-only data?

And while the layout of classes is implementation-defined according to the C++ spec, on every compiler I've messed with it's unsurprising. It can get a bit weird if you use multiple inheritance; otherwise it's straightforward.

Exceptions are probably a bad idea in an ISR--especially propagating an exception out of an ISR. I see no particular reason to avoid templates--provided, again, that you know and understand your target.

Complaints that one may not write generally portable ISRs in C++ strike me as a bit oxymoronic--if you're in the business of writing ISRs (or garbage collectors, or numerous other things), you've already forsaken a great deal of portability.

How does a C program make

How does a C program make sure it does not take a page fault during an ISR? It is not specified by the C standard. There is a certain amount of tool/compiler/OS/arch-specific code that gets figured out once and then abstracted. In the code I work on, you have to manually pin down, a priori, the functions that get called by ISRs. Can't a class's vtable be handled the same way? (As for RTTI, I personally would do without :)

Indeed, new concerns emerge if you use C++ in a kernel, but they are no worse than, for example, the challenge of maintaining for *all* pointers to memory whether the allocation was made from a paged or non-paged pool. I guess I am trying to point out the distinction between "would require extra attention" vs. "cannot because of a fundamental design issue".

The following way:

1) Don't use malloc/calloc to allocate data structures. OSs where this is possible generally provide other allocation routines, where you pass in a flag to say whether you want swappable or non-swappable memory (and often other constraints as well).

2) Be careful which OS-supplied functions you call. Generally the OS will list which functions are safe to call from ISRs.

3) You need to mark non-swappable functions as such in some compiler-specific way - usually, you mark them as being in a special code segment.

Note that all the objects are marked at allocation as non-swappable. When is a vtable marked as non-swappable? When the class is generated? When the object is allocated?


Exactly: you identify what needs to be non-swappable and you tell your compiler/OS/tools. So the answer to the question "When can a class's member functions be called polymorphically from an ISR?" is the same as the answer to "When can a function be called from an ISR?": "When I tell my compiler/OS/tools so".

As a related example, consider how polymorphic classes, exception handling, etc can be used with dynamically linked libraries. Doing so obviously requires addressing some of the same issues. There are even cross-platform ABIs, like the one GCC uses (http://www.codesourcery.com/cxx-abi/).

Although I like C++, I have

Although I like C++, I have to agree with your constructive criticism.

Here is my own example of C++ "craziness" regarding C++0x: sequence constructors. The new standard will define the special concept of a 'sequence constructor', along with special constructor syntax...instead of providing the concept of the tuple and treating initializer lists as tuples.

I think there is room for a programming language that provides pointers and low-level constructs from one side while providing functional and compile-time programming on the other. C++ has evolved towards that, but its evolutionary nature shows through its ugliness.

You might also take a look

You might also take a look at D if you have not done it yet.

D follows the same logic as C++

D is a language that lacks elegance, just as C++ does. It corrects some problems of C++, but in the end it is just as ugly. Walter Bright is an extremely bright fellow (no pun intended), but D is not what I want from a C++ successor.

Isn't "tuple" usually

Isn't "tuple" usually associated with a fixed number of elements of heterogeneous types?

The "special syntax" actually does take a new standard type (std::initializer_list) which is just a pair of pointers to T.
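In the syntax that was eventually adopted, a sequence constructor looks like this (a sketch; `IntBag` is a made-up type):

```cpp
#include <initializer_list>
#include <vector>

// A user-defined type with a sequence constructor: the braced list
// arrives as a std::initializer_list, essentially a pair of pointers
// into a compiler-generated array of const int.
struct IntBag {
    std::vector<int> data;
    IntBag(std::initializer_list<int> init)
        : data(init.begin(), init.end()) {}
};
```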

It could have been done in another way.

For me, every declaration in a language must have a literal form. In C++, functions and structures do not have a literal form.

The new standard could have been like this: every expression in the form of "{ comma-separated expressions }" is an anonymous struct literal, just like anonymous functions in other languages.

The properties and members of an anonymous struct could be accessed at compile-time by template pseudo-functions. For example:

template <class T> void print_anonymous_struct(T a) {
    // lengthof and memberof are the hypothetical compile-time
    // pseudo-functions proposed above.
    for(int i = 0; i < lengthof<T>(a); ++i) {
        cout << memberof<T>(a, i);
    }
}

The compile-time introspection of anonymous structs could also be used on named structs, because for the compiler there would be no difference (apart from the latter having a name). Compile-time introspection could be used for doing exact garbage collection, because it would be possible to write one function which marks the member pointers of objects. For example:

template <class T> void mark_member(T a) {
    //empty; no mark for non-pointer members
}

template <class T> void mark_member(T* a) {
    //mark the pointed-to object for the collector
}

template <class T> void mark_struct(T a) {
    for(int i = 0; i < lengthof<T>(a); ++i) {
        mark_member(memberof<T>(a, i));
    }
}
Compile-time introspection of members of structs is a very useful concept. It can also be used for other tasks...for example, introducing run-time introspection.

Compile time introspection

Compile time introspection certainly would be nice to have :)

As for using it for initializer lists, consider that all constructors taking initializer lists would have to be templates. Among other things, this would be a problem for ABIs and dynamic libraries. This and a billion other things you would never have thought affected this simple language extension are discussed in the most recent WG21 paper on the subject: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2215.pdf

Among other things, this

Among other things, this would be a problem for ABIs and dynamic libraries.

Templates are the most important feature of C++, and almost everything done in C++ involves templates... so the argument about ABIs and dynamic libraries is very weak in the C++ world.

This and a billion other things you would never have thought affected this simple language extension are discussed in the most recent WG21 paper on the subject: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2215.pdf

I've read that paper, and I haven't really found anything that could go against having struct literals. Perhaps you care to elaborate?

Struct literals are more powerful than initializer lists, because initializer lists are of a single type. Struct literals could open the way to declarative C++ programming.

Dynamic link libraries and

Dynamic link libraries and ABIs are very important for many people in the C++ community. Although terribly useful, templates generally have the problem that they cannot be exported across module boundaries. Requiring their use for a C++ feature excludes a portion of the community.

I agree with you that more general solutions to the problem exist. I believe the goal, however, was to make a minor change that supported C++'s general design goal of giving the same support to UDTs as builtins. Ushering in a new paradigm for C++ programmers was perhaps a bit out of scope :)

But struct literals is exactly about that.

Struct literals are exactly about that, i.e. giving UDTs the same support as builtins. Just as I can write the number '10' and the string "aaa", I should be able to write {1, "a", 3.14} and handle it like a builtin type.

The solution I presented does not exclude the possibility of defining monomorphic struct literals, i.e. struct literals where each member is of the same type.

Finally, templates could be avoided by using specific instantiations of a template type (for example, 'tuple<int, string, double>'). (You can't do that now without explicitly instantiating a tuple type - the point is to do it without having to type 'tuple<int, string, double>(1, "a", 3.14)'.)


Recall that the goal of the language extension you are criticizing is initializing -lists-. We want to be able to write:

vector<int> v1 = {1}, v2 = {1,2,3,4,5,6,7,8,9};

The "specific instantiations" you mentioned wouldn't work... unless you introduce a new overload for... I shudder to go on ;-) I forgot to mention above, but even with the struct literal and templated constructor, you would be instantiating and generating code for each different list length.

(you can't do that now without explicitly instantiating a tuple type - the point of this is to do it without having to type 'tuple<int, string, double>(1, "a", 3.14)')

See make_tuple, also in the Boost.Tuples library.
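For reference, make_tuple deduces the element types so the heterogeneous literal need not be spelled out (shown here with `std::make_tuple`, the standardized form of the Boost facility; `make_example` is my own wrapper name):

```cpp
#include <string>
#include <tuple>

// Type deduction: no need to write tuple<int, std::string, double> explicitly.
inline std::tuple<int, std::string, double> make_example() {
    return std::make_tuple(1, std::string("a"), 3.14);
}
```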

The tuples feature is a superset of the lists feature.

Recall that the goal of the language extension you are criticizing is initializing -lists-. We want to be able to write:

vector<int> v1 = {1}, v2 = {1,2,3,4,5,6,7,8,9};

Tuples give you that, and more:

class Foo {
public:
    template <class T> Foo(T& t) {
        //inspect t's members via compile-time introspection
    }
};

Foo f = {1, "a", 3.14};

The "specific instantiations" you mentioned wouldn't work... unless you introduce a new overload for... I shudder to go on

But you have specializations of initializer_list anyway.

I forgot to mention above, but even with the struct literal and templated constructor, you would be instantiating and generating code for each different list length.

Exactly. When your list is not homogeneous, that's the desired behaviour.

See make_tuple, also in the Boost.Tuples library.

The point of this is to do it without having to type "TUPLE(1, "a", 3.14)", where TUPLE can be "tuple" or "make_tuple".

Exactly. When your list is

Exactly. When your list is not homogeneous, that's the desired behaviour.

And when your list is homogeneous and you have 20 copies of the exact same code, it is a little silly. (Maybe only 17, the instantiations for 1 and 2 elements might pass the members in registers and 0 could skip the memcpy() altogether :)

The 'sequence constructor' feature has a small goal and a small solution. The fact that generalizations _exist_ does not mean they are practical for the entire community (given the unnecessary code duplication and separate-compilation restrictions), and it _hardly_ makes sequence constructors an example of "C++ craziness", as you stated at the root of this discussion. That is my point - not that compile-time introspection has no place in the language. In fact, I think it sounds great; all it needs is someone passionate enough about it to champion a proposal to the committee... (you? ;-)