Programming -- Principles and Practice Using C++

I just noticed Stroustrup is about to publish this introductory book.

I am stunned on many levels, so I will keep my comments short.

In general, this seems like HTDP for C++; that is, it is not a comprehensive text about C++ or about CS in general, but rather one aimed at teaching the basics of software construction. This is a good idea, and many have written books with similar goals in the past, of course.

I wonder what the chances are that any university not employing Stroustrup will switch to C++ for their introductory course (if they are not using it already). It seems to me everyone is teaching Java...

My second observation is that a large fraction of the book is devoted to the STL, which is a good thing on many levels. Some of the topics may even be explained functionally.

My third observation is that even given the intended audience and goals the ToC seems really sparse. I wonder if that's all the book is going to contain.


The absolutely very basics of software construction

Important topics are not present in this book. Of course, I can only judge it by its contents and the introduction. But BS seems to ignore many of the problems we C++ developers face every day in programming, and that are likely to be encountered by students as well.

Such as?

In your opinion, which topics that aren't covered are important when learning to develop software?
He clearly says in the note to teachers that it's only a part of what would be covered in a university course.

The most basic stuff...

i.e....memory management.

Part 17 is 'vectors : memory management', but one frequent problem is storing references to objects in multiple places (maps, vectors, structures, etc.) without a clear indication of which object has ownership of the memory.

value semantics

Recall this is a book for beginners. Pointers and dynamic memory management are definitely not beginner techniques. If performance is not a big concern, value semantics can take you incredibly far without such complications. Consider how much horrifically complex pointer manipulation (from the point of view of a new programmer) is hidden by:

string s;
cin >> s;
cout << "hello " << s << '\n';

Things like STL iterator invalidation do cause this problem to surface earlier than you would hope, but even those rules can be stated abstractly, without reference to low-level machine details.

Amen

Mutable linear state is far easier to deal with than mutable aliased state.

It is, but...

...in real life, it's not always practical to have only linear state.

... it's also not always

... it's also not always practical to avoid gotos, yet we teach structured programming.

The needs are not the same.

But in real life, needing gotos is rare; non-linear mutable state is needed very often.

On the side,

Prepending "But in real life," to every comment does not automatically improve credibility and may not even be considered polite.

Sharing of ownership is useless

There is no need for sharing of ownership, much less for manual memory management.
In a sequential program, there is absolutely no reason why value semantics, unique ownership and simple "weak" references cannot be applied to solve all problems.
This is actually the recommended way to code in C++. It's not difficult at all, far easier than programming in a pure functional language, for example, where nothing is mutable. It just requires thinking about ownership and lifetimes.

In a multithreaded environment, refcounting and the like can have their uses, as an alternative to a complicated locking policy for keeping objects alive. This can be avoided altogether by using a more high-level concurrency model.

The only reason to use sharing is as a simple optimization to reduce memory usage, which is pertinent in two cases:
- avoiding multiple instances of the same resource at the same time, i.e. being able to reuse previously loaded resources (e.g. a font loader, etc.);
- partial sharing of node-based data structures.
The first case can be implemented generically; that's Boost.Flyweight.
The second case is container-dependent, but everything is obviously abstracted from the user of the container.
Both of these should work with constant data, while the partial sharing of node-based data structures could eventually use COW.

Your recommendations are not realistic.

I am certainly not writing any application where heaps of data are copied over and over. When I want to process the list of customers (for example), I will process the list of customers, not a copy of the customer objects. Each customer will have a unique object in memory representing it. I am not going to copy all the details of a customer (first name, last name, address, etc.) over and over. That would kill my application's performance as well as make my life difficult (for example, if I want to change a customer's name, which instance of the customer do I change?).

Why copy?

Why would you copy something if it semantically doesn't make sense to do so, or isn't needed?
Using value semantics doesn't mean you need to copy everything every time you want to do something.
You may have to move (a well-designed program should construct objects in the right place and not require this, but it is useful for some programming patterns), but certainly not copy.

What is the use of teaching something not found in real life?

What's the point of teaching programmers something that they will not find in real life? Programmers in the classroom will be taught to manage the lifetime of objects using vectors, but when they go out on a job the situation is much more complex, and these people will be left without knowledge of what they should do.

The goal is structured resource management

You would be right if vectors were some magical part of the language, and, in cases where they didn't apply, the programmer were back at square one with new and delete. But that is not the case. Standard containers are merely instances of the more general RAII idiom.

By starting beginners in this mindset, when they do run into "real world" scenarios that aren't as simple as a std::vector, they will understand how to structure their solution in a manner that provides similar simplifications of resource management. In fact, when memory management is eventually introduced in the later chapters of the book, one of the examples is how to start implementing std::vector.

On a final point, I am sure you would agree Adobe Photoshop is a very real and practical program. They use C++. Care to guess what container they report gets the majority of usage and they recommend as the default? std::vector.

That's only because STL containers are value-based.

But in real life, most applications need reference-based containers, because objects are not stored in one place only.

That's good for Adobe that they can use std::vector so much. Perhaps Photoshop's objects are simple enough to be stored in a vector. But that only means that Photoshop's memory management requirements are simpler than those of other programs; it does not prove anything.

Can I please have a link to this report?

Yes they are

But in real life, most applications need reference-based containers

No one is arguing that programs should only use value semantics. When you can't, you can't. But often you can, and when you can, your program is simpler for it.

If we can get back to the subject of the thread, I agree with Dr. Stroustrup's choice that, when teaching a beginner to program, value semantics are a good starting point. I'll reiterate that near the end, he teaches the user how to implement std::vector. I believe this is a key perspective on using complex low-level features: you use them to build higher-level abstractions, and then you work at that higher level of abstraction.

There is no report. Either Sean Parent or Chris Cox (or both), gave a talk about the extreme things they have to do in Photoshop to handle TB-sized images and the like. The vector comment came up when they were explaining how important contiguous element layouts were to performance.

But that only means that Photoshop's memory management requirements are simpler than of other programs,

Oh, is that what it means? I wish a Photoshop programmer was here...

No one is arguing that

No one is arguing that programs should only use value semantics. When you can't, you can't. But often you can, and when you can, your program is simpler for it.

Value semantics in languages with mutable state is only useful for trivial programs.

Stroustrup does not teach reference semantics, so his book is pretty much useless (in my opinion). The beginner programmer will get nothing out of it. In fact, he will get confused: as soon as he enters the workplace, what he has learned from this book will be useless when he is confronted with complex patterns of reference-based semantics.

If we can get back to the subject of the thread, I agree with Dr. Stroustrup's choice that, when teaching a beginner to program, value semantics are a good starting point. I'll reiterate that near the end, he teaches the user how to implement std::vector. I believe this is a key perspective on using complex low-level features: you use them to build higher-level abstractions, and then you work at that higher level of abstraction.

But he still avoids the need to share data. His vector implementation still maintains the same value semantics, even if the internal vector data are allocated in the heap.

There is no report. Either Sean Parent or Chris Cox (or both), gave a talk about the extreme things they have to do in Photoshop to handle TB-sized images and the like. The vector comment came up when they were explaining how important contiguous element layouts were to performance.

So the reference to Photoshop is not about how vector is the most useful data structure, it's about how arrays help performance, which is an entirely different thing.

Oh, is that what it means? I wish a Photoshop programmer was here...

Why can't they be simpler? It's not that Photoshop has a huge, complex schema or something. It certainly has strict performance requirements, but it does not have to deal with a big set of heterogeneous data (like an enterprise application, for example).

Value and reference semantics are not mutually exclusive

Value and reference semantics are not mutually exclusive in a program. Thus statements like:

Value semantics in languages with mutable state is only useful for trivial programs.

don't make sense.

Along the same lines, a program that uses a lot of standard containers certainly does not *exclusively* use standard containers, hence guesses about its internal memory complexity are at best uninformed.

Depends on the thing being passed around

This is basic stuff.

Immutable data (raw numbers, strings, atoms, and composites thereof; which are not in correspondence with some external entity) can be handled with value or reference semantics, as appropriate--a copy is functionally equivalent to a reference. The decision to copy or alias is therefore an optimization question--if it's a small object, or you are migrating the datum from one process to another, copying is probably better. If it's a large object in the same memory space, passing a reference is better. The difference is only in the underlying operational semantics, not in the observable results of the program.

Things which are in correspondence with external entities, or are otherwise stateful objects with identity--should generally get reference semantics, unless you explicitly want to make a separate entity with the same initial state. Sharing stateful things across process boundaries can get hairy; lots of software companies make lots of money trying to solve that problem in a reasonable manner.

C++ containers, like vector, can do either with ease--if you want reference semantics, simply contain pointers (or objects encapsulating pointers). This observation is modulo memory management, of course--the weakness of C++ is that it may not be obvious when to reclaim an object which is semantic garbage.

Not just C++

the weakness of C++ is that it may not be obvious when to reclaim an object which is semantic garbage

Assuming the perspective that, once acquired, resources have a programmer-intended lifetime and it is the responsibility of the rest of the program to enforce and respect this intended lifetime, even GC'ed languages have the same problem, as witnessed by "leak finding" tools like Cork.

Re: The most basic stuff...

"frequent problem is storing references to objects in multiple places (maps, vectors, structures etc)"

I'll assume you meant storing pointers, because references cannot be stored in STL containers. And how do you know he is not addressing memory ownership concerns?

Pointers and references.

By 'references', I mean both pointers and references (which are pointers anyway).

References cannot be stored in STL containers, but values containing references can.

The only thing I know, as I stated before, is from the contents and from the introductory text. The contents have only one reference to memory management, under vectors.

But it does not matter: even with vectors, the problem of memory management still holds. Having to deal with pointers is exactly the same as having to deal with indexes into a vector: pointers are indexes into memory anyway.

For example, one of the problems the beginner will face with vectors is how to manage indexes. Which index points to which object? And if an object is removed from the middle of the container, indexes higher than the removed item's will be invalidated, which is a big surprise for the beginner. At least with real pointers there would not be any problem like that...

Almost

First, I'll assume by "index" (an integer) you mean "iterator" (like a pointer).

Second, "memory management" means manual allocation and deallocation of memory. You are describing the iterator invalidation rules for std::vector. These exist, to a degree, with any container/iterator.

Third, the stringent invalidation rules of vector allow the obvious, performant implementation. If that isn't good enough, it is easy to make, and any large project probably already has, a std::vector variant with "smart" iterators. It's just more expensive.

First, I'll assume by

First, I'll assume by "index" (an integer) you mean "iterator" (like a pointer).

I meant an index (an arithmetic value used for accessing an element of an array), but its semantics are identical to an iterator's in the case of vectors, so yes.

Second, "memory management" means manual allocation and deallocation of memory. You are describing the iterator invalidation rules for std::vector. These exist, to a degree, with any container/iterator.

They don't exist for list, map and set. It's really strange, for a beginner, to remove an item from a vector and have part of the rest of the world become invalid.

Third, the stringent invalidation rules of vector allow the obvious, performant implementation. If that isn't good enough, it is easy to make, and any large project probably already has, a std::vector variant with "smart" iterators. It's just more expensive.

I don't see random access as the primary reason for selecting vector as the container type to use in a project. Most cases require performance when adding and removing elements from the middle of sequences. Examples: creating a new document (the new document is placed in a list), closing a document, creating a new item in a list or tree, etc.

The vector constraints can be dangerous. For example, let's say I have a vector of elements and I want to remove some of them. I loop over the vector to collect the iterators of the elements to be removed, then I present a dialog to the user to select which elements to remove. The user selects the first item to remove. By the time the first element is removed, the positions of the remaining elements have shifted, and thus the rest of my iterators are invalid. Yet the user will eventually want to delete the other selected elements too.

It's cases like the above that I would like Mr. Stroustrup to address: problems that are found very frequently in programming. The vector class is nice, but not really suitable for most cases of data management. In fact, the only time a vector is better than a list is when random-access performance is the most important factor.

A vector is a vector, don't expect it to be something else.

Certain operations invalidate all iterators. That's just how std::vector works, and this is clearly documented on a per-member function basis in any reference documentation.
If you want a way to reference your objects without invalidation, don't use std::vector<T> but std::vector<T*>. Of course that's not the same memory layout, since you have an array of pointers, not an array of values.

A vector simply aims at being a growable contiguous block of objects. If it's not the data structure you want, just use another (preferably by building it on top of vector).

It's about the poor beginners, not me

I've been burned by C++, so this book is not about me. But I really find it horrible that this book is presented as good material for beginner programmers. It's an absolutely horrible approach that will leave beginners puzzled as to how to tackle real problems. It just so happens that std::vector is presented by this book as the solution to memory management.

Flame much?

Flame much?

If I could steer the

If I could steer the conversation away from "std::vector isn't the perfect container for all purposes"... Yes, clearly.

I believe the intention of your comment, however, is that "std::vector is not an appropriate container for a beginner C++ book". I do not think the argument about iterator invalidation is strong enough, especially since indexes (integers) do not have the invalidation problems and can often be used instead.

but their semantics are identical with iterators in the case of vectors, so yes

It takes a pretty big stretch of the imagination to call an integer and a vector iterator semantically identical.

They don't exist for list, map and set

They do [in any language]. What state is an iterator in after deleting its element? Valid-on-next? Valid-on-previous? Invalid? Invalid-until-next-increment?

They do [in any language]?

They do [in any language]. What state is an iterator in after deleting its element? Valid-on-next? Valid-on-previous? Invalid? Invalid-until-next-increment?

Not sure what you mean by "any language" here. For functional data structures (including functional lists, maps or sets), it will simply remain valid, without constraints.

But that's not what the STL provides, of course.

Ooops, yes, I should have

Oops, yes, I should have said "imperative language" and specified the delete as a mutating operation on the container.

It takes a pretty big

It takes a pretty big stretch of the imagination to call an integer and a vector iterator semantically identical.

In the case of vector, they are semantically identical. Check this program out:

vector<int> data;
int index = 3;
data[index] = 5;
vector<int>::iterator it = data.begin() + 3;
*it = 5;

Let's say I remove element 2:

data.remove(it);

Now both index and it point to another element.

They do [in any language]. What state is an iterator in after deleting its element? Valid-on-next? Valid-on-previous? Invalid? Invalid-until-next-increment?

I wasn't talking about the iterator erased, but about the rest of the iterators that might exist in the program over the same collection. For list, map and set, other iterators are not invalidated when an element is erased.

This is pretty important for anyone using vectors, let alone beginners.

Your example demonstrates the opposite

Actually, I was thinking of an insert example that demonstrated their inequivalence, but your example will do. After the vector::erase (which I assume is what you meant by 'remove'), 'it' is invalid. If you compile in GCC with -D_GLIBCXX_DEBUG you'll even get a nice debug assertion. So, not semantically equivalent.

I wasn't talking about the iterator erased

Ah, but you have to, and so even if the iterator invalidation policy for list, map and set are simple, they do exist, refuting your claim:

They don't exist for list, map and set

Well, I say simple, but there is still even leeway for what exactly happens to an iterator after the element under it is deleted. The easiest is to say that it cannot be used until assigned a new value, but in certain cases (I can think of a recent one in Mozilla), you want ++ to move the invalid iterator to the next element after the deleted one (or to the end if there is none).

Introduction and Teaching a profession

I wouldn't expect to be able to suddenly start making a living as a biologist by simply reading an introductory text on biology.

Of course, that's the problem with using C++ as an introductory language. It's not hard to get off tangent by having to discuss the myriad intricacies of the language while straying from the more general concepts that are being taught.

Sparsity of Stroustrup's ToCs

Ehud comments on the sparsity of the table of contents. In my opinion, this is not very informative when it comes to Stroustrup's books: TC++PL's ToC is very concise, yet the book is chock-full of information.

focus on STL

I agree this is a good thing; the ideas behind the STL were what first got me interested in FP. Encouraging new programmers to think functionally when using a mainstream language is great. FP doesn't have to be an isolated part of the curriculum; some of its benefits are immediately useful in non-FP contexts.

Java


I wonder what are the chances that any university not employing Stroustrup will switch to C++ for their introductory course (if they are not using it already). It seems to me everyone is teaching Java...

With criticisms of the Java-only approach becoming more common and mainstream (for example), perhaps more schools would consider adding more time with a systems programming language to their curriculum. Not that that would necessarily involve C++, but at least less Java.

Java Schools

The school I went to ( http://www.ncat.edu/ ) (1) had been an Ada school until about a year before I enrolled, (2) tried switching from a C++ to Java school in my third year or so, and (3) last I heard the Java migration failed pretty badly when the students got to higher levels, so the plan was to go back to being a C++ school.

fixation on PLs in education

why all the concern about programming language choice in introductory classes? the goal of an introductory "programming" course should be to teach the basics (statements, variables, functions, arguments, return values, flow control, looping constructs, input/output, data structures, recursion, etc). the choice of language should be based on whatever is least distracting/intrusive to teaching the above. IMHO, that's python. my second choice would be pascal (which dates me). C/C++? too "systems" centric except for someone who's already familiar with hardware/systems (ie EE student). java? too OO (ie have to introduce objects just to write "hello, world!"). but i would choose java over C/C++ to teach programming fundamentals (which was my introductory language in college, after having pascal in high school, and i've used C++ extensively for over a decade professionally).

and for your example, i find it funny that the ada folks are complaining about java, as java is to some degree the new ada. yeah, java is not as strictly typed as ada, and is forcibly more OO than ada, but both are managed run-times and not "systems" PLs. and just as ada eventually got ravenscar, java eventually got rtsj. ada is more similar to java than to C/C++ (which are preferred in the article over java).

i work in an industry that has gone from ada to C++ and the result is ugly. ever see a domain expert (where domain != C++, but some specific industry or engineering field) write applications in C++? oh, and for safety those non-C++ experts need to follow these rules to avoid some of the pitfalls of C++ (which many people will dismiss as really being legacy C problems, as if that eliminates them from C++). and for security they need to avoid these weaknesses (and yes, java has its own list here, but fewer are specific to the language and not some specific domain like mobile code, struts, or J2EE). it's not pretty. but it probably has something to do with the right person (domain expert) using the wrong tool (systems PL) for the job (applications).

and to bring it back on topic, the crosstalk article barks up the wrong tree (but at least throws it a bone): the problem isn't languages. but here's where the article falls short: the problem is the curriculum and the educators who choose it and teach it. if the goal is attracting students with the promise of high-paying "web 2.0" jobs, then it doesn't matter if you teach ada, smalltalk, or even prolog (all three taught in my PL survey course in college), the students are only going to get a "trade school" education. but a good teacher can teach sound principles using any decent language (though more easily with some than others). the emphasis should be on fundamentals, and whatever PL is used should not be the ends, but a means to the end.

students only learning to leverage java libraries? disallow their use! (in my college days that was called plagiarism. ;-) students only interested in "web programming"? give other assignments! if an elementary school teacher allows kids to learn basic arithmetic poorly using calculators, then don't blame the calculator. and don't say HP calculators are evil and should never be allowed in schools because they cause high school math teachers to only teach how to play tetris on them instead of calculus. the problem isn't the tools, it's the educators and what/how they're teaching!

okay, enough of my ranting, and nothing personal andhow, but i had read that article and other poorly-directed opinions about PL choice in education before and your comment was a good segue. ;-)

beyond introductory

the goal of an introductory "programming" course should be to teach the basics

And after that introduction, I believe a sizable segment of the industry would appreciate some low level experience. More than believe, actually; this is something they are loudly complaining about.

Why focus on the industry?

Why focus on the industry? The majority of courses in mathematics and physics aren't directly relevant to industry either.

It's where the fun is!

It's where the fun is!

Not to the exclusion of all other purposes

Not to the exclusion of all other purposes, but I think a partial focus is warranted. Mathematics and physics are relieved of such practical burdens by the existence of other degrees which are wholly aimed at the industry. In the absence of a separate "computer engineering" degree which assumes this responsibility, what other degree would? IT?

Regardless of industry demand, experience with systems programming is valuable for understanding how things work, along the same lines as computer architecture, compiler construction and operating systems.

Software engineering

IT being a subspecialty of SWE and business.

Unlike other engineering curricula, SWE/CS tends to focus more on tools and less on the underlying methodology; at least that appears to be the case at many places.

Of course, industrial practice at many software houses often doesn't resemble "engineering" in other disciplines; though the work-product in most cases isn't mission-critical (at least not in the sense that "if it fails, someone might die").

But still--many people who hire programmers are not terribly interested in either professional engineering practice, or theoretical knowledge--they just want folks who can turn specs to code.

Contextual Background

In complete opposition to Ehud, I'm not stunned in the least (I saw an early version years ago), and I may go on for a bit. I'll explain the former and hope for some tolerance of the latter.

I was a CS undergrad at A&M when Stroustrup joined the department. One pertinent thing to note is why one of the highest profile language designer/maintainers would join A&M, given its not-quite-top-20 rank in CS. Sidestepping the fat stacks of cash thing, his response was basically that even though he's a computer scientist, his clients are much closer to the metal. That means it's vital for him to be at a place with top-notch engineering, which A&M has in spades. In particular, he gains connexions to A&M's more highly ranked CE and EE programs, conveniently located in the same college.
(Note this means all engineers can take CS classes that count towards an engineering degree on all the formal accreditation measurements. I've heard this cited at my graduate institution of much higher CS rank, UT, as the reason the engineering majors can't learn to program in the natural-science-college-based CS program there.)

The above all makes sense, as anyone talking to the metal has very little option outside of the C family. Stroustrup even went so far as to teach the introductory engineering course for CE and EE majors. Unfortunately, the anecdotal evidence I received was not positive; his prowess at the subject got in the way of his ability to convey the material to beginners. The table of contents reflects exactly the summary I heard years ago: that he tends to use an approach based on introducing complicated concepts early and fast. Hopefully the input of the late "Pete" Peterson aided him in mitigating this; as noted by another poster, Stroustrup's ToC may not be indicative of underlying quality.

Utilizing the above to read into the subtext of the preface, you can now suss the real title of the book: how to get CE majors programming in C++ as fast as possible. The choices of topic should make more sense in this light, as the students are expected to pick up math, CS, and computer organization elsewhere. The book is trying to be the fastest way to getting the engineers to cranking out the C++ programs requisite for the rest of their careers. Despite its connexion to language education, the book in question is aimed towards a context generally disparate from that of the language hackers populating LtU.

Thus ends my historical and anecdotal filling-in of background for those who didn't have the inimitable pleasure of attending A&M CS a few years ago. I was planning on turning now to some personal, spite-filled invective on the political process running behind all this and my gut-wrenching contempt for the use of C++ in introductory courses. But unless that's where the comments go, I think I'll keep it to the forums.

contextual refinement

Unfortunately, the anecdotal evidence I received was not positive; his prowess at the subject got in the way of his ability to convey the material to beginners.

I was a peer teacher (TA light) for that class the first time it was taught. Your anecdote is from the point of view of a (1) non-CPSC major who (2) did not believe they would use C++ again and who was (3) being given ONE credit hour for the course (as opposed to the four one would expect of a lab course requiring the same amount of work) in (4) an enormous class (maybe 300 people). These things certainly strained the learning environment. On the other hand, I also saw quite a few students who got a 10x better introduction to programming than I know I had.

Secondly, I would venture to say that the goal of the book is not speed, but to avoid the usual topological ordering of material that tends to bore the students with minutiae early and only go somewhere interesting a few weeks before the course is over. There is not a lot of area under that curve. This approach ends up introducing things like standard containers, algorithms, and (very simple, mind you) GUIs early, but it is done for pedagogical reasons, not latency.

C++?

With all due respect to Bjarne Stroustrup (and I do respect him greatly), I think C++ is a very poor choice for an introductory language. I've taught a number of languages to a large number of beginning or near-beginning programmers, including Scheme, C, Java and Python (and also C++, but not generally to students who didn't already know C). The problem with C++ as an introductory language is that it's so damn complex, and so much of the complexity is of an ad-hoc nature (little details to be memorized, as opposed to big new concepts, like you would find teaching Scheme or Haskell). Memory management alone is a huge problem in C++. Understanding the subtle differences between references and values is another big source of confusion. Frankly, I can hardly think of any language less suitable for beginners than C++.

What we do at Caltech for students who want to learn C++ is to teach them C first, so they get a good grounding in low-level programming (pointers, addresses, manual memory allocation, the difference between the stack and the heap, etc.) and then let them tackle C++. Even so, it's difficult to teach and I've been happy to leave this duty to other instructors.

Resource management

I would argue about the specific case of memory management:

Manual memory management is a specific case of manual resource management, which is indeed a 'big new concept' and a necessary task even in languages with GC. The structured approach allowed by C++ (through the unfortunately-named RAII idiom) is an evolutionary step forward and a unique ability of C++, due to its particular definition of object lifetimes and destructors. Perhaps RAII will become less unique to C++ over time, though, as features like the 'using' statement in C# are first steps in this direction.

While GC is good

resource management should not be ignored, and GC does not work with scarce resources. On embedded systems that can even include memory, but in the desktop/server environment (where memory and storage are plentiful), you still have to worry about things like database connections, file handles, graphics contexts, and other system resources, many of which have nontrivial cleanup semantics and are not suitable for GC. This is why one finds finalize() methods, finally blocks, and other such things even in GC-enabled languages: to permit cleanup of these things. And even though garbage collection can handle reclaiming an object's memory, standard programming practice in Java and similar languages, when dealing with objects which reify scarce resources, calls for manual deallocation of the resource; depending on the GC to clean up for you is considered Bad Practice, as there is no guarantee it will actually reclaim the object/resource in question.

The good thing about dispose(), as opposed to delete, is that double-disposing of a resource is usually not a problem (whereas double-deleting an object in C++ is a big problem).

Not so unique

I'm with you on the necessity of teaching resource management in general.

That said, lexically-scoped resource lifetimes are a piece of cake in languages with higher-order functions or a good working equivalent - you write a function withResource which allocates the resource, passes it in to the function it expects as a parameter, then frees it afterwards. A further refinement (seen in the standard library of modern-day Haskell, amongst other places) is to use the type system to prevent dangling references.

Someone else will have to add historical references for that pattern - I'd be entirely unsurprised to discover it's older than I am!

RAII extends farther than

RAII extends farther than just lexically-scoped resource lifetimes, although lexically-scoped is the most common example. Consider a list<vector<string>>. The lifetime of each individual string isn't lexically determined. What we know is that the string's lifetime is a subset of the vector's which, recursively, is a subset of the list's. The list may even be dynamically managed in a non-RAII manner. RAII is about this hierarchical resource management. In programs (videogames :) where I've tried to apply RAII throughout, the entire program becomes one such hierarchy, so restarting a level is literally "delete game; game = new Game(...);". Not to say that this object lifetime behavior can't be simulated in another manner; it's just nicely integrated into C++.

I see no points in

I see no points in criticizing C++ so hard. It is a powerful language, everyone has to admit that, so it has to be complex. However, it can still be taught in an easy way, and that's what the book is for. The problems you mention are only misconceptions. For memory management, C++98 has auto_ptr and the coming C++09 is going to add shared_ptr and weak_ptr. If you look at Boost (I hope you look there before talking about writing code in C++), you will get more pointers and more utilities which complement the Standard Library. This time, C++ is also going to have concurrency, adding to its power in systems programming. I see no points in not teaching first-year students C++. It will give them a good view of what true programming is.

Expressiveness =/= Complexity

It is a powerful language [...], so it has to be complex

Why?

In fact, expressive power often is gained by removing complexity, e.g. in the form of ad-hoc restrictions or redundant concepts.

I see no points in

I see no points in criticizing C++ so hard. It is a powerful language, everyone has to admit that, so it has to be complex.

Forgive me for saying so, but I nearly spit coffee all over the place upon reading that.

The interesting part of C++0x is move semantics

The interesting part of C++0x isn't shared_ptr, it's move semantics.
It allows fixing a big problem in the language, and also allows introducing the elegant smart pointer unique_ptr as a replacement for auto_ptr (auto_ptr moves on copy; unique_ptr moves on move and prevents copy).

IMHO, shared_ptr is pure evil and shouldn't be used. It had its uses in C++03, but in C++0x it's nearly an anti-pattern.
I have often seen it used to put pointers in containers, for example, because that was both less intrusive and more efficient than a deep-copying smart pointer, the more valid alternative. But now, with move semantics, containers are not only more efficient but also do not require copyability but only movability, which is trivial for a pointer. COW and similar techniques based on sharing have become useless.

Although it's a totally

Although it's a totally different subject, I would like to point out that C++ programmers will be complaining in the coming years about move semantics, because their objects will be 'randomly' deleted (see the auto_ptr situation for an example of how move semantics are bad).

I would love to see how Stroustrup will tackle that in a beginner's book.

Not unless they explicitly used std::move

The only time an object will implicitly be the source of a move construction, assignment, etc is when it is about to be destroyed. Thus the only user code that will be able to observe the post-move source is the moved-from class's own destructor, which, having implemented a move constructor, clearly expects the move.

Only if the user says std::move(a) will an object be moved from that is not about to be destroyed. So nothing will be 'randomly' deleted.

std::auto_ptr is commonly described to have move semantics, but for exactly the case I believe you are referring to, it does not. std::unique_ptr, on the other hand, does not have a copy constructor, just a move constructor, and it does have move semantics and none of the problems you are referring to.

So, although I am reluctant to respond to your flame-y second comment, he wouldn't even have to address it in a beginner book: unless the user wants to create a move constructor or call std::move, they can be oblivious while still receiving the performance benefits.

auto_ptr was deficient; move semantics are not bad

auto_ptr was deficient and was doing it the wrong way, and that's why it's being deprecated.
There is absolutely *no* problem whatsoever with its replacement, unique_ptr.
Moving is only done from rvalues (or through an explicit move function call, which casts to rvalue), which is perfectly safe since those are temporaries that cannot be used afterwards and will be destroyed.

See the various "rvalue references" C++0x papers for reference.
It adds a reference-to-rvalue type, which allows making it distinct from an lvalue and thus implementing move semantics correctly.

And this is related to your previous post, since you were talking about both auto_ptr and a part of the next standard, shared_ptr, actually calling them good utilities.
One is deprecated and should never be used; the other is pure evil. I believe adding some nuance to your opinion was thus fairly important.

.Net

Just thought I'd point out that schools in this part of the country are teaching mainly .Net, and not much Java.

Not that it's really relevant... just thought I'd point it out.

in this part of the country ?

in this part of the country

?

According to his home page

Max hails from Missouri. (And is apparently related to, and named after, the late German boxer).

[Admin]

A general discussion on the "quality" of C++ (or any other language, for that matter) is too unfocused to match the LtU policies or spirit. Please keep those policies in mind (links to the policy documents and related discussions are in the FAQ).

I don't mean to cause a

I don't mean to cause language discrimination. However, my point is that it is not impossible to teach C++ to first-year students in a comprehensible way. Every language has its traps and pitfalls that users have to know. Stroustrup's idea is that the new book will give students the right attitude towards programming in the first place. That's what "how to" books often lack: a focus not on particular features but on the practice of programming as a whole.