Computer Science Education: Where Are the Software Engineers of Tomorrow?

A short article by Robert Dewar and Edmond Schonberg. The authors claim that Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods. They consider the general adoption of Java as a first programming language to be in part responsible for this decline, but also explain why, in their opinion, C, C++, Lisp, Ada and even Java are all crucial for the education of software engineers.

Chicken, egg, and a nit

Which is chicken and which is egg:

- Do we not teach formal methods (much) in university because we don't use them (much) in industry?

- Or do we not use them (much) in industry because we don't teach them (much) in university?

One nit:
> [...] Such languages (Javascript, PHP, Atlas) are indeed popular
> tools of today for Web applications. Such languages have all the
> pedagogical defaults that we ascribe to Java and provide no
> opportunity to learn algorithms and performance analysis.

I don't know about PHP or Atlas, but you certainly can learn algorithms and performance analysis in JavaScript. I remember many years ago demonstrating different algorithms for identifying prime numbers in JavaScript, ending with the sieve of Eratosthenes. Recently I helped a colleague implement binary search in JavaScript to maintain a sorted array. There are a lot of things JavaScript isn't suitable for, and there are aspects of performance analysis I wouldn't use JavaScript to demonstrate (e.g. cache-oblivious algorithms), but there's a lot you can do with it nevertheless, and it has the advantage of being universal (it comes with every modern browser).

I like the idea of

I like the idea of integrating Lightweight Formal Methods into the CS curriculum.

I've noticed locally (in New

I've noticed locally (in New Zealand) that attempts to integrate Lightweight FM into the curriculum appear to be at odds with the increasingly applied and (dare I say) vocational nature of CS education. It's unfortunate, because the FM elements were the parts of my formal undergraduate education that I most enjoyed!

Lightweight Formal Methods in CS curriculum

We teach Lightweight Formal Methods here at University College Dublin as part of several courses in our degree. I have done so at each University at which I have taught (e.g., TU-Eindhoven, Caltech, etc.) over the past ten years, in fact.

By integrating good practices into the curriculum (problem solving, analysis, design, programming, debugging, etc.), and by not calling them formal methods, and finally, by supporting those practices with quality tools, students integrate formal methods into their concepts, tools, and techniques without really knowing it.

I have written a few papers about this, the most recent of which is called "Secret Ninja Formal Methods". It is currently under review at a major conference, but if LTU readers are interested, drop me a line and I'll send you a copy.

...students integrate formal

...students integrate formal methods into their concepts, tools, and techniques without really knowing it.

Yes, that's the type of thing I had in mind.

It might be worth mentioning that by exposing students to languages with rich type systems you achieve at least part of this goal, since reasoning about types (and, methodologically, typeful programming) is an integral part of programming in these languages.

Re: SNFM paper

skimming it, i'm so happy yet sad that the classes are better equipped than most any company or team i've ever toiled at or with. wow. this stuff sounds so great to a guy dying of thirst in the formal-methods desert that is, apparently, the mainstream software development world. please, get that stuff out to the wide world, open source it all, advertise it, etc.

...and when/will there be a night school version taught somewhere in the sf bay area i could sign up for?!

Re: SNFM paper comments

Hi Raoul,

Thanks for your kind words about the paper and this work.

Nearly all of the tools and technology that are mentioned in the paper are available today and are Free/Open Source. They are being integrated into the Mobius Program Verification Environment (PVE) at the moment. A full public release of the PVE is scheduled for March, which will include all pedagogical materials, tutorials, demonstrations, etc.

Early releases of the various subsystems are available from a variety of sources (e.g., directly from Iowa State via SourceForge for JML, via UCD's old GForge for earlier versions of ESC/Java2, via SourceForge for the original EBON tool suite, etc.).

Other subsystems that are under development are initially released either via public CDEs like SourceForge or via our general public Trac server or the public Mobius Trac server, all of which are announced via FreshMeat. In particular, a new BON scanner/parser/typechecker is being finished at this moment.

As the paper discusses, all of these tools and techniques have been used in industry and academic education for many years. For example, over a hundred student projects are archived on the UCD GForge, many of which use these technologies. Likewise, all of our course materials are public and available via our old and new Moodle servers.

I am designing a postgraduate course now that will be offered next year that covers these topics. As usual, I'll run that course in such a way that remote students can audit it. We have also been discussing offering a remote MSc in Computer Science from UCD, one specialization of which might be in dependable systems, run by yours truly.

Best,
Joe Kiniry

education of industry programmers

Not everyone in industry has had a formal education in CS. So if you have one person who has, and twenty who haven't, chances are you structure your working processes around those who haven't.

Industrial use will follow publicized and explicated examples

The problem is that there are two educational needs:

First, education on successful large-scale applications of the techniques.

Second, education on the techniques themselves and how to apply them.

This will strike many people as backwards. It makes more sense to first teach people the techniques and how to apply them, which means starting with simple examples and clear theoretical models. Then, once they have a firm grip on theory and are capable of solving simple problems, they will be ready to tackle real problems. This is the standard model for teaching any kind of technical theory or skill.

However, since formal methods are not the norm, software engineers are aware of multitudes of large-scale, mission-critical (etc. etc.) projects that succeed without formal methods, and they will probably never hear of projects that actually use them unless they specifically seek them out. This imbalance creates a correspondingly large burden of proof for proponents of formal methods.

The remedy is to publicize case studies of large-scale successful applications of formal methods, making a convincing argument for the role of formal methods in their success. You have to *prove* to software engineers that formal methods are valuable in large-scale problems. This is not fair at all; software engineers put up less resistance to most other theoretical tools, because they know they are used. Finite state machines, regular languages, type theory: programmers may think "ugh, this is hard" and look for excuses not to learn them, but deep down they know that they live in a universe created by people who understand those things. Formal methods can't count on that. Formal methods, regardless of their history, have to be pitched just as if they were a new and unproven idea.

Great argument...

Too bad it doesn't lead to their conclusion.

Java is too easy for programming education. True enough; Joel Spolsky covered this a couple of years ago. Students should get exposure to a diversity of programming languages. Also true enough, almost a corollary of the Java problem.

But teaching students formal methods will improve matters because...? The only reason given is "We like them and are personally invested in them."

I had my first software job in a formal methods company. I took my undergrad CS classes in a department headed by the president of said company. I spent years worshipping Dijkstra and Harlan Mills, and clinging to the ideology that by golly, what the software industry needs is more theorem-provers.

Lies, lies, lies! As Perlis once said, the difficulty with proving programs correct is that they often aren't. Programmers face an imperfect world. They use third-party code. They have time-to-market constraints. Requirements change. I see no point in training programmers for a software industry that doesn't and can't exist.

By all means teach students the lessons of the structured programming trend (its strengths and weaknesses). Teach them that there's more to code correctness than "it seems right." Teach them about the THERAC-25. But for God's sake don't teach them that understanding axioms and quantifiers will make them super-programmers in the industry. They might believe it.

Can't argue with that...

Lies, lies, lies! As Perlis once said, the difficulty with proving programs correct is that they often aren't. Programmers face an imperfect world. They use third-party code. They have time-to-market constraints. Requirements change. I see no point in training programmers for a software industry that doesn't and can't exist.

These things are all true of other engineering disciplines, and yet mathematics is successfully used in those disciplines to increase confidence in a design. And really, that's the difference: most engineers don't set out to prove their design correct - they set out to obtain a greater level of confidence in the correctness of certain aspects of the design (with the amount of effort that gets spent on gaining confidence being generally proportional to the criticality of the system). As you rightly point out, the realities of the software industry are such that the holy grail of formally proving the correctness of a program is generally infeasible (for much the same reason that formally proving the "correctness" of a satellite is infeasible).

Instead of worrying about the fact that they can't use mathematics to "prove" their designs are correct, I'd like to see software engineers act like... well... engineers. By which I mean that I'd like to see more emphasis placed on formal analysis of designs, with the realization that the goal of such analysis isn't a proof that the final program is correct in all respects, but to catch and eliminate errors in the design as early as possible. I think that Ehud is on the right track when he suggests that Lightweight Formal Methods (which to me seems to be another way of saying "pragmatic application of formal methods") get taught in schools. Instead of trying to teach students to use theorem-provers to rigorously prove a sorting algorithm correct, let's teach them to use something like Alloy Analyzer to nail down the essential logic of their program, or Spin to get a better handle on the possible interleavings in their task interaction design and find problems with that design, or to make full use of the type system (and/or additional annotations) to statically catch as many bugs as possible. From what I can tell, this approach is roughly what Praxis uses for "Correct-by-Construction" development, and they seem to have had great success with it.
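To make the "use the type system" suggestion concrete, here is a minimal Java sketch (my own invented example, not something from the article or from Praxis): wrapping a bare long in a dedicated Millis type turns a whole class of unit mix-ups into compile-time errors rather than runtime surprises.

    // Hypothetical example: a dedicated type for durations in milliseconds.
    // Passing a raw number where a Millis is expected no longer compiles,
    // so one class of bugs is ruled out statically, with no proof effort.
    public final class Millis {
        private final long value;

        private Millis(long value) {
            if (value < 0) {
                throw new IllegalArgumentException("duration must be non-negative");
            }
            this.value = value;
        }

        public static Millis of(long value) {
            return new Millis(value);
        }

        public Millis plus(Millis other) {
            return new Millis(this.value + other.value);
        }

        public long toLong() {
            return value;
        }
    }

    class TimeoutDemo {
        static void sleepFor(Millis duration) { /* ... */ }

        public static void main(String[] args) {
            sleepFor(Millis.of(500));  // fine
            // sleepFor(500);          // rejected by the compiler: wrong type
        }
    }

Nothing here proves the program correct; the point is only that a slightly richer vocabulary of types lets the compiler do some of the checking that would otherwise fall to testing or inspection.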

I couldn't agree more

It drives me nuts that I'm called a Software Engineer, when what I do has nothing in common with engineering. But it could!

Definition of engineering?

What is your definition of engineering? I think that what we do (when building software) is engineering. It's just that mathematics has not yet caught up to the practice.

Engineers are paid to build useful things. If you can use science and math to help you, that's great - but the one paying you does not care. Imagine telling your customer: "Your system cannot be built, because the math needed to prove it correct has not been invented yet. Just wait 100 years" :)

In the history of engineering there were many times when the practice got ahead of the science (e.g. bridge building). The engineers just go ahead and "hack" (see the books by Henry Petroski).

Often times, engineers *are* scientists

Especially in CS, where many interesting developments have come from industry rather than academia. And many other engineering disciplines draw from many different branches of science rather than one--in the case of bridge building (civil engineering), there's a combination of physics, materials science, soil science, and what-have-you. Physics classes certainly don't cover topics such as the difference between a standard arch, tied arch, or suspension bridge--yet there's a lot of abstract knowledge (i.e. highly domain-specific science) in bridge building that is only taught in the civil engineering department.

So it often is with CS and SE. Parts of CS are highly mathematical and formal; other parts are highly empirical. In my mind the distinction between the engineering and the science is not the boundary between the formal and the empirical, but the difference between the abstract knowledge, and the use of said knowledge to build things for a specific purpose.

And many (most) engineers *are* trained in scientific methodology and practice, although someone with a BSEE probably won't be as proficient in research methodology as a post-doc in CS or physics.

Getting back to formal methods--one of the big issues with using formal methods to prove something correct is producing an accurate specification of what "correct" means. One can prove something to be consistent (or inconsistent) with itself, in the absence of any external requirements--but proving that a program does what the customer wants is difficult when "what the customer wants" isn't described formally.

Requirements, scientists and engineers


...in the absence of any external requirements--but proving that a program does what the customer wants is difficult when "what the customer wants" isn't described formally.

Never mind describing it formally. In many cases the customer thinks they know what they want, but they cannot see other (possibly better) ways of doing what they need.

For example, I read somewhere that early tractors had reins so they could be steered just like a horse. Or, for that matter, who was the first person to say "I need a web browser"?

There is a saying that I like:

"Scientists discover what is, engineers build what never was."

a people's history of computer science

In comment-39111, Scott Johnson wrote:

Oftentimes, engineers are scientists

Reminds me of A People's History of Science: Miners, Midwives, and "Low Mechanicks" by Clifford D. Conner.

Bridges and liability

The real problem is that if a bridge falls down, even if no one is injured, someone at the engineering company is going to pay. Software, on the other hand, falls over all the time - and the consequences are often financial (think: multiple security bugs in IE allowing keystroke loggers to be installed), occasionally fatal.

There's a curious discrepancy in current law in most of the Western world: If I sell you a sat nav device which causes you to drive into a river and drown, I'm liable. If I sell you some sat nav software which you install on your PDA which causes you to drive into a river, it's your fault.

This must change. The change will be painful.

Rich.

If I sell you a sat nav

If I sell you a sat nav device which causes you to drive into a river and drown, I'm liable. If I sell you some sat nav software which you install on your PDA which causes you to drive into a river, it's your fault.

In the former case, you built the hardware and the software, and thus are liable for a failure of either individually, or of the combination. Since you can't control the hardware on which someone runs your "pure software" (e.g. PDA, desktop, homebuilt PIC, etc.), how can you justify holding the software vendor liable? The vendor cannot control for all of those variables. Can you reliably demonstrate that it was not a hardware bug or quirk that the software exposed?

In a software liability world, all software vendors would become Apple, with a single supported hardware configuration for liability purposes, with all the monopolistic pitfalls that entails. I think liability for software is simply unrealistic, and further, I think we're better off without it. Better to provide liability only for those applications that really need it.

Examine the case

Not really, because when the case came to court, it would be examined by experts on both sides to try to prove whether it was a hardware or software fault.

The real issue is about free software -- particularly someone who gives away some software on their website. Can they be held liable if (a) someone downloads and uses the software and is then injured, or (b) a third party repackages their software and sells it to someone who is injured.

Rich.

Not really, because when the

Not really, because when the case came to court, it would be examined by experts on both sides to try to prove whether it was a hardware or software fault.

Such litigation is costly. The first such suit would drive companies to lock their software implementations to a single "reference system", to which everyone must conform à la Apple, just like I said. If you think the Microsoft "monopoly" is bad now, you ain't seen nothing yet.

The open software ecology we currently enjoy is much better, even crippled as it is by Microsoft's "monopoly". Once free software improves enough to finally and truly break the Microsoft stranglehold, I think we shall truly reap the rewards.

Pay no attention to the elephant

Lies, lies, lies! As Perlis once said, the difficulty with proving programs correct is that they often aren't. Programmers face an imperfect world. They use third-party code. They have time-to-market constraints. Requirements change. I see no point in training programmers for a software industry that doesn't and can't exist.

Although the article lists some uses of formal methods in industry, it doesn't explicitly address how small a fraction of the industry that usage represents. That small fraction means that most software professionals are never going to see the use of formal methods in their careers, if the status quo holds. I agree, that's quite an elephant in the room for them to be ignoring.

At the very least, a better argument is needed for the benefits of teaching (presumably lightweight) formal methods even to people who aren't going into the sorts of mission critical industries they're focused on. And no, "fear of terrorist cyber attacks" doesn't cut it, in fact it makes me want to grab them and shake them and ask them if funding for their pet discipline is really worth stooping that low. (Sorry, that just annoys me.)

Yuck. I didn't notice that.

Yuck. I didn't notice that.

No elephant! Their argument....

They have a simple, economic argument, actually.

Today's students, they opine, are "easily replaceable". They acquire easily learned skills.

How can they differentiate themselves? One of the ways listed is formal methods training. This requires greater training (so, less easily replaced workers). In domains where it has been applied, it has (mostly) repeatably, markedly improved software quality (people say). Therefore, these less easily replaced workers can find increased demand if they find ways to compete on quality, rather than price, in domains where formal methods can be applied.

Some might say that these students won't find demand for their skills: that employers are always trying to lower costs at the expense of quality. Perhaps, but such firms will find it increasingly difficult to compete globally. So if academia produces a greater supply of these additional skills, we have to pretend to assume that industry will learn what to make with this new raw material.

-t

Re: global competition

I do keep waiting for the morning when people in the West wake up to realize that places like India or Romania have surpassed them by actually using computer science to get things done. Too bad I'm in the West.

Yup.

Wait, you mean that the west's ivory towers have no monopoly on wisdom? Now, hang on a minute.... :-)

The existence of smartness in larger populations in labor markets outside the US is kind of orthogonal to the arguments of the paper in the article. That's perfectly fine. There's plenty of room at a (hypothetical) "top" for the US's dinky population so we should, the paper suggests, try to max out on that rather than wasting resources in a vastly more competitive lower-level market.

-t

It's not really a problem.

It's not really a problem. We'll just import it anyway.

Right on

I just wish I could find a good job for me that followed that kind of philosophy. (Guess I need to drink a lot of coffee and burn the midnight oil and go make that job exist myself somehow?)

JH said:As Perlis once

JH said:

As Perlis once said, the difficulty with proving programs correct is that they often aren't. Programmers face an imperfect world. They use third-party code. They have time-to-market constraints. Requirements change. I see no point in training programmers for a software industry that doesn't and can't exist.

...seeming to suggest that the authors had suggested that programming is all about writing code and then proving it correct, using only libraries that conform to their formal specifications.
The article said:

Let us propose the following principle: The irresistible beauty of programming consists in the reduction of complex formal processes to a very small set of primitive operations. Java, instead of exposing this beauty, encourages the programmer to approach problem-solving like a plumber in a hardware store: by rummaging through a multitude of drawers (i.e. packages) we will end up finding some gadget (i.e. class) that does roughly what we want. How it does it is not interesting! The result is a student who knows how to put a simple program together, but does not know how to program. A further pitfall of the early use of Java libraries and frameworks is that it is impossible for the student to develop a sense of the run-time cost of what is written because it is extremely hard to know what any method call will eventually execute. A lucid analysis of the problem is presented in [4].

The point of the article, this quote seems to illustrate, lies not in trying to repeat a push by academia to force industry to accept formal methods, or academically correct PLs. It is about improving the curriculum to raise the proportion of programmers who understand what they are doing from the inadequate levels we have at present.

Another thing that would be useful to this end, I think, would be to distinguish better in CS course offerings between their scientific objectives and their engineering objectives. My impression is that EE courses do a better job in this respect.

Abstraction

The above comment (the comment in the article, not the poster's remarks) seems almost to be a denunciation of abstraction (modularity, composition, etc.).

And indeed, there is a longstanding tension between the desire for abstraction (not needing to understand the underlying machine), and the fact that practical programs are systems which include the machine as a key component. The authors are taking the position that abstraction is a Bad Thing; and they do have some points (not that I agree with them entirely--I don't).

* "Cyber terrorists" and other malefactors, real or imagined :), generally have an excellent understanding of the inner workings of the systems they attack. They know how to force it to consume resources in order to initiate a denial-of-service. They know what out-of-domain inputs to the system (complex systems are all partial functions) will ilicit the desired "undefined" behavior. Hackers are all about studying the inner workings of things to find holes to exploit. Construction of secure systems requires a similar understanding of the guts of the thing (whether by formal or informal analysis), and that includes all components--from the highest level application scripts written in a language that starts with P, to the kernel and drivers of the OS, to the firmware in the peripherals, to the hardware itself. Programmers who assemble systems from pre-packaged components will generally lack that understanding; as time-to-market pressures and such will cause them to elide such analyses, and component vendors will often view the necessary implementation details as proprietary.

* Even absent security concerns, there are other issues such as performance and resource consumption that require a deeper understanding of systems than at the API level.

That said, most things that most programmers write don't need to be subject to rigorous system-level analysis, and software engineering practices which encourage modularity and reuse are highly appropriate for them. And, to reject a canard one occasionally hears, I should note that the builders of systems which are required to be secure and/or mission-critical are those who should assume responsibility for ensuring that they are--people who write text editors for their own amusement should not be required or pressured to worry about the risk of someone else using their tool for development of mission-critical apps.

The dichotomy you draw is, I

The dichotomy you draw is, I think you'll agree, simplified - there is a continuum, from "quick and dirty" to "mission critical", and different systems (at different times) occupy different places on this continuum.
Engineering, I would assume, should involve understanding this continuum, knowing where your systems belongs on it, and understanding what this entails in terms of how the software should be constructed.

Exactly.

Well-said.

The trouble is--much of the current SW infrastructure was developed without this understanding (or despite it). Though I suspect that if there were some requirement (legal or otherwise) that the infrastructure be "done right", we would not be having this conversation on this website, as the necessary infrastructure would be too expensive to exist, or would depend on technologies which we don't have enough of. :)

Having an Internet which is subject to hostile attacks of all sorts is better than having none at all (or one which is restricted to a few players).

Sometimes, the best way to develop systems is to start with quick-and-dirty, and incrementally improve things to the desired level of robustness.

With that in mind, here's a thought experiment for the readers:

Right now, the most common operating systems found in desktop or server environments are written in C or a derivative. Most Unix flavors are written in C (including Linux). MacOS is essentially a BSD derivative (C) with an Objective-C based GUI environment. Windows is largely written in C/C++.

How soon before we see a widely deployed OS written in a "safe" language (including dynamically typed languages)--either executing on hardware or on a VM? There are lots of experimental OSes written in all sorts of languages--some of which are reasonably stable and portable--but none has ever threatened to displace any of the major OS families.

Interestingly enough, this may be the thing that saves Microsoft, <rant>iff the company ever wakes up from its monopoly-induced slumber and starts paying attention to what its customers want (Windows Vista it ain't), rather than what will enable future vendor lock-in and/or benefit certain business partners in Hollyweird</rant>. The Linux community (and the larger Unix community, it seems) has too many C jocks among its members--and a perhaps-false sense of security which comes from being regarded as better than MS when it comes to security issues. (The free software community also has some legitimate concerns about "secure" systems being used to benefit parties other than users.) Apple is an apps software company, first and foremost. MS has, at least, the Singularity project being developed at MS Research; the interesting question: would MS ever release an OS based on Singularity, and when?

Even worse

i guess better tends to cost more, and most nobody is willing to pay for it when things work "well enough"... and then the same folks are wailing and gnashing teeth when things get hacked, cracked, DoS'd, or just plain trip over their own bugs.

tra la.

gee, maybe i wish House would take off. :-)

Construction of secure

Construction of secure systems requires a similar understanding of the guts of the thing (whether by formal or informal analysis), and that includes all components--from the highest level application scripts written in a language that starts with P, to the kernel and drivers of the OS, to the firmware in the peripherals, to the hardware itself.

It sounds like you are implying that an audit requires inspection of all source code, when this is really not the case. For instance, capability-secure systems vastly decrease the amount of code that requires auditing. If we can rely on the platform enforcing capability rules, then modularity and abstraction actually increase the system's auditability.
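To illustrate the "references are permissions" idea, here is a rough Java-flavoured sketch of my own (not E or any particular capability system): the component below receives exactly the resource it is allowed to use, so auditing its effects reduces to auditing what it was handed.

    import java.io.IOException;
    import java.io.Writer;

    // Capability-style design, roughly: no ambient authority. The log cannot
    // open files or sockets on its own; it can only write to the one sink
    // that its creator chose to hand it.
    final class AuditLog {
        private final Writer sink;  // the only authority this object holds

        AuditLog(Writer sink) {     // the capability is passed in explicitly
            this.sink = sink;
        }

        void record(String event) throws IOException {
            sink.write(event);
            sink.write('\n');
        }
    }

Java itself does not enforce this discipline (static state and java.io still provide ambient authority everywhere), so this is only the shape of the reasoning, not a security guarantee; in a genuinely capability-secure language the platform would make the restriction hold.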

One of the desires of formal systems

is to avoid "audits"--by "audit" I assume you mean human experts examining the code, possbily assisted by machine. Audits, of course, don't scale well.

I'm unfortunately not entirely familiar with the capability of capabilities (pardon the pun) to ensure system security. I'll grant that capabilities are good at ensuring that users or processes (or their proxies) don't perform operations on a system that they aren't entitled to perform. But how do capabilities deal with, for example, resource allocation issues? Can quotas be modeled as capabilities? Do capabilities mix well with substructural type systems (the sort of type system which can enforce accounting rules)? I'm not doubting, just asking for more info.

To tie in with another post in this thread--most of our existing computing infrastructure is not capability-secure. How long before it is?

is to avoid "audits"--by

is to avoid "audits"--by "audit" I assume you mean human experts examining the code, possbily assisted by machine. Audits, of course, don't scale well.

Yet. ;-)

Seriously though, a profound audit is not warranted for most software, but developers audit their code all the time while inspecting it, extending it, fixing it, etc. The easier you can reason about effects in a language, the easier it is to reason about its security properties. Capabilities just make that part easy: since references are permissions, auditing is a normal part of reasoning about your program's behaviour.

But how do capabilities deal with, for example, resource allocation issues?

Now that's an excellent question, and one I'm also very interested in. Mark Miller's thesis on E touches on solutions to denial-of-service attacks in capability systems, mostly in the context of distributed systems if I recall correctly. The solutions generally seem to consist of processes, which E calls Vats, possibly with quotas. This adds a level of dynamism to the system in which even well-typed programs can go wrong. Whether resource exhaustion is amenable to a static analysis is one of my ongoing research questions. Perhaps you know of something?

Do capabilities mix well with substructural type systems (the sort of type system which can enforce accounting rules)?

I've wondered the same myself, and I started the thread on substructural type systems for region-based memory management, which uses a type of capability-like security token to track region lifetimes. This is another avenue I've been exploring to solve the resource exhaustion problems alluded to above.

To tie in with another post in this thread--most of our existing computing infrastructure is not capability-secure. How long before it is?

Also a good question, to which I have 2 answers:

  1. With increased expressiveness comes increased ease of reimplementing existing software.
  2. Capability languages are fully "virtualizable", whereby a "system"/TCB object is indistinguishable from a user-provided object. Consider now such a virtualizable capability-secure language with staging. The ambient authorities available in all other languages, such as file_open, can be provided by binding the runtime file_open capability from stage N to the ambient file_open function in stage N+1. In this way, you could host islands of fully isolated, legacy ambient-authority systems within a capability-secure sea.

That's the direction I'm currently heading in.

m' yeah, I'm going to need you to...

I'm going to quote your quote:

...by rummaging through a multitude of drawers (i.e. packages) we will end up finding some gadget (i.e. class) that does roughly what we want. How it does it is not interesting! The result is a student who knows how to put a simple program together, but does not know how to program.

One of the major complaints by software 'practitioners' against nice yet unpopular languages is that they don't have all the drawers. Just as we say that not having the drawers doesn't make a language bad, I'd say that having the drawers doesn't mean that one has to use them all while teaching. Instructors can forbid the use of any package that interferes with teaching, runtime cost analysis, etc. They can find another way to teach pointers.

It's not that Java turns you into a plumber by its mere presence. The institution intends to produce plumbers and doesn't care. I can't blame Java here. We need to find the source who actually downplays the principles we value here. (*cough* Bill Lumbergh *cough*)

But remember....

being a plumber is not a *bad* thing. If my house has a leaky pipe and I call in the plumber, I want him to bring a piece of replacement pipe, size it to fit, and install it.

I don't want him to bring along a portable smithy in his truck along with a supply of metal, and forge a pipe of the exact length.

OTOH, there are applications where a custom-forged pipe might be the best solution.

Far too many people assume all problem domains are like theirs (or should be)--and that the solutions that fit their problem domain should be universal.

<rant>

As far as Bill Lumbergh goes, he's just a symptom of the problem.

Bill Lumbergh, and his pointy-haired middle-management peers, aren't the ones defining university curricula. And even the competent middle managers (let alone the incompetent ones) are generally delivering what they are instructed to deliver--software that's good enough to fool customers at the point of sale, prior to the next trade show|holiday season|customer roll-out, and on a shoestring budget. Maintenance? We can't afford to worry about that--if we don't make the quarter's target, we may not be around to care any more.

Managers who deviate from the prescribed course are generally whacked as soon as something goes wrong. Companies like Tek in the 80s, HP in the 90s, or Google today are the exception and not the rule--and sooner or later have their lunches eaten when they stumble, and some financial wizard decides that their developer-friendly corporate culture is an extravagance no longer worth maintaining. (After all, if attracting the best talent doesn't deliver superior financial results, what's the point, when an industry-average workforce can be had for half the price?)

Everyone says they want quality. Damn few are willing to pay for it. LtU, and similar research-oriented activities/forums/publications in CS are all ultimately about bringing the cost (financial and schedule) of quality down to a level that more and more pointy-hairs are willing to pay for.

But don't be surprised, folks, when our reward for those efforts which bear fruit (speaking for the programming community as a whole) is even smaller budgets, more compressed schedules, and even more byzantine and incomprehensible requirements.

In short, the reason software sucks is because the constant race to the bottom will utterly consume any improvements in methodology or tools developed by researchers in the field. In two decades, some subset of the cool stuff we discuss here will be widely deployed and used--and terabytes of utterly crappy software will be written in Visual Coq++.NET by overworked burnouts in velvet sweatshops 'round the world, who will only wish that they could have the time to do the job properly. Assuming they care at all.

Maybe we were better off in the punchcard era, after all.

Have a nice day. :)

</rant>

This place for rant

I don't think anybody has a problem with plumbers per se except possibly that they do not go by the plumber title.

I don't agree that the Lumberghs didn't decide university curricula; they at least influenced them.

They take up positions that transcend programming while being ill-equipped for them. As Lispers used to say, they know the immediate costs of performance, size, etc., but the value of almost nothing, except non-technical concepts like agile, pair programming, etc.

They're not willing to pay for quality because they can't recognize it, let alone take advantage of it. If PLT improves software quality it's not going to be because it lowered the cost of quality to affordable levels. I hope that it is because PLT made it possible for them to recognize quality, or realize that they don't know what is going on.

Actually, customers are willing to pay for quality.

(by quality, I mean more than reliability or security; but these aspects are a big part).

It's why Toyota can charge more than Chevrolet for a car with otherwise similar specs.

However, quality is often gauged (by customers) by brand, rather than by specific product attributes. Those companies with a reputation for producing quality in the past are generally assumed to be able to produce quality in the future. Conversely, those with a reputation for producing crap are assumed to produce crap in the future. Those without any reputation at all often find themselves in the "crap" bucket.

This isn't an unreasonable position for customers to take, of course.

However, it has the effect that it's insanely difficult to garner a reputation as a quality producer--your efforts are not likely to pay off for years. One good quality product won't significantly influence your rep, especially if the next product falters. It takes sustained effort to build up a quality reputation. And sustained investment and expense.

Many businesspersons find there is more money to be made (and less risk), in being a low-cost, low-price, low-quality producer. Far less capital is placed at risk. And in many industries, a really bad reputation can be shed by strategic liquidation (or even by simply renaming oneself).

Likewise, managers who do begin initiatives to improve quality, or efficiency, or some other metric--frequently have unrealistic schedules placed on them to show improvement. And when such efforts are funded, they are usually given a lower priority than projects which directly affect the top line of the organization.

Another problem I've observed (at least where I work) is a failure to account for the "little things". Major features are planned for, budgeted for, and architected for--and frequently have revenue assigned to them. (How one assigns a specific amount of money to a given feature is a black art of marketing that I fail to grasp, but assume that doing so has some validity.) But oftentimes what makes the difference between something that the customer perceives as "high" or "low" quality are things that by themselves are too inconsequential to manage at this level. In some organizations, they are done anyway under management's noses; in others they are encouraged if there is time. In others, engaging in such "gold-plating" will swiftly get you into trouble. But oftentimes it's the little bits of gold here and there that delight the customer.

Quality is expensive, folks. And we should be proud if and when we make it less so; I suppose I should apologize for the sheer crankiness and cynicism in my prior rant. :)

economics of software verification

In comment-39127, Scott Johnson wrote:

However, quality is often gauged (by customers) by brand, rather than by specific product attributes. Those companies with a reputation for producing quality in the past are generally assumed to be able to produce quality in the future. Conversely, those with a reputation for producing crap are assumed to produce crap in the future. Those without any reputation at all often find themselves in the "crap" bucket.

Gerard J. Holzmann gives a marginally less handwavy argument in support of this claim in his excellent paper Economics of Software Verification [PDF, 61KB] (DOI bookmark).

An interesting quote from that paper:

"A low residual defect density is more likely to be an indicator of a small user population than of high product quality".

What, if anything, does that say about the value judgments we make about programming languages? :)

That we should be using more

That we should be using more strongly typed functional languages to push that defect density even lower! ;-)

residual defects

In #comment-39313, Scott Johnson wrote:

An interesting quote from that paper:

"A low residual defect density is more likely to be an indicator of a small user population than of high product quality."

What, if anything, does that say about the value judgments we make about programming languages? :)

It tells us two things:

  1. There are two types of programming languages: the ones that people bitch about and the ones that no one uses (Stroustrup).
  2. The only way to find residual defects is by inductive rather than deductive means.

I absolutely approve of

I absolutely approve of students having to program their own data structures and components. With that said, it's ironic that the authors criticize the lack of formal methods training and the use of "gadgets" in the same article.

Verifying a program formally is an admission of software engineering failure. It means that the system doesn't divide up responsibilities and reuse existing, trusted components (such as the JDK components) in such a way that it is correct by inspection.

The authors' comment about the "beauty of programming" reminds me of the Linux zealot's claim that Linux is better than ____ because it's harder to use. I've had to maintain code written by people who evidently had this masochistic, primitivist mindset, and "beautiful" is not a word I'd use. The beauty of programming, rather, is that you can make code that solves a problem so generally that you never need to solve it (and prove it) again.

A formal protest

Verifying a program formally is an admission of software engineering failure. It means that the system doesn't divide up responsibilities and reuse existing, trusted components (such as the JDK components) in such a way that it is correct by inspection.

On the contrary, formal verification is a useful tool for engineering better software:
  1. Unless those trusted components are trivial (not the case with the JDK), they should ideally be formally verified before they can truly be called "trusted".
  2. Combining "trusted" components is in itself design. As Aristotle famously said, "the whole is greater than the sum of its parts", which is to say that the way components are connected to form a system is itself a key part of the system design. Even when you're combining "trusted" components, the way in which those components are combined will be unique (otherwise why are you designing at all?). Non-trivial assemblages of such components would undoubtedly benefit from some kind of formal analysis.

If anything, I'd expect to see development methods that rely on assembling of components to make greater use of things like formal specifications: if you didn't write the code, and don't want to take the time to understand how the component actually operates internally, then your best bet for being able to assemble things properly is having a precise definition of the component interface. That's the idea behind Design-by-Contract. And it's taken a step further by SPARK-Ada (of which the authors of the article are proponents).
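As a toy illustration of that contract idea, here is a sketch in plain Java, with assert statements standing in for a real Design-by-Contract or SPARK-style toolchain (the class and its conditions are invented for the example, and the checks only run with assertions enabled, i.e. java -ea):

    // A component whose interface states its contract: callers can compose
    // it without reading the implementation, relying on the stated
    // preconditions and postconditions instead.
    final class SortedBuffer {
        private final int[] data;
        private int size;

        SortedBuffer(int capacity) {
            assert capacity > 0 : "precondition: capacity > 0";
            this.data = new int[capacity];
        }

        /** Precondition: the buffer is not full. Postcondition: contents stay sorted. */
        void insert(int x) {
            assert size < data.length : "precondition: buffer not full";
            int i = size++;
            while (i > 0 && data[i - 1] > x) {  // shift larger elements right
                data[i] = data[i - 1];
                i--;
            }
            data[i] = x;
            assert isSorted() : "postcondition: contents stay sorted";
        }

        private boolean isSorted() {
            for (int i = 1; i < size; i++) {
                if (data[i - 1] > data[i]) return false;
            }
            return true;
        }
    }

A real DbC or SPARK toolchain goes further by checking (or even proving) such conditions statically, but even runtime-checked contracts give the assembler of components a precise interface to program against.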

[Edit: I should probably note here that I'm not claiming that it's absolutely necessary to perform fully formal development from the ground up before formal methods can be useful. I think that a pragmatic application of formal analysis to key aspects of a design can be a helpful tool for understanding the design, and for ironing out conceptual bugs early in the design process. So-called "lightweight" formal methods sound very promising to me, and seem to more closely resemble the way in which mathematical analysis techniques tend to get used in other engineering disciplines.]

Correct by Inspection?

I am (honestly) curious to learn what that means / how it works. Sorry if this sounds like a bit of a rant.

I'm not sure I understand how anything which does something sufficiently interesting in Java can be small enough to be CbI. Especially with things like the really broken inheritance model in the JDK (UnsupportedOperationException). On the face of it, it sounds like CbI would even do away with things like unit tests. Things like unit tests and some formal methods are about making sure to cover your ass since you are human and can make mistakes. If CbI means to keep it so simple that it is obviously correct, then I think that is a rather unrealistic thing to hope for no matter what language, and in particular doesn't have much reality for Java. (I'm a Java dev.)

Also on /.

Also on /.

Slamming Java for the wrong reason

The authors seem to be claiming that you can't teach CS fundamentals in Java because Java is somehow inherently geared towards re-use rather than original thinking.

I don't get it.

What is it about the language that prevents teaching, say, algorithmic analysis?

Now, I'll certainly grant that there are many reasons to see Java as a poor first language: it's overly complicated, has a baroque way of dealing with parametric polymorphism, is half-ass OO, does functional abstraction awkwardly at best, etc. Certainly strong cases can be made for Scheme, Haskell, Smalltalk, and others as being more appropriate learning languages.

But to slam the Java language as the root cause of curricula teaching "plumbing" instead of CS or SE seems strange.

For what it's worth, my

For what it's worth, my analysis of Java versus Ada as vehicles of SE education is here.

For what it's worth, my

For what it's worth, my analysis of Java versus Ada as vehicles of SE education is here.

I was interested, but Springer wants $25 to download the pdf.

Email me and I'll send you a

Email me and I'll send you a copy.

Ada vs Java for teaching software engineering

Thanks, I enjoyed the paper. It's funny what you say about students reacting to Ada's Pascalish syntax. When I went to college we learned Pascal first, and then C, which was a bit jarring. I remember somebody in class asking why C was so ugly :)

What's the issue with Java's reference semantics breaking abstraction? Is it that equality by default is based on references?

Your paper illustrates nicely how hard it is to teach fundamental software engineering principles without getting tangled up in language implementation issues.

The issue is that you can

The main issue I referred to is that you can easily return a reference to a field inside your object (your abstraction), and since it is a reference and not a copy a client can then change your object and break your invariant/abstraction. What you usually want is to return a copy, unless the field is immutable. There are other issues with reference semantics, of course, equality checks being one of them.

Search for "exposing the rep", or see several discussions in Bloch's Effective Java for specific examples.

What you usually want is to

What you usually want is to return a copy, unless the field is immutable.

Indeed, Java probably exposes the wrong default here because it assumes mutation, and thus invariants are actually difficult to enforce as they require effort to avoid mutation or to reason soundly in its presence. Without having read the paper, I'm not sure how you distinguish Java "reference semantics" from the "reference semantics" of functional languages, but from my view the only difference is FP's tightly controlled mutation.

I didn't, since it was not

I didn't, since it was not relevant to my discussion. I compared Java with Ada - which has very strict value semantics (you have to explicitly try to take addresses, and even then there are many constraints on doing so). I highlighted the fact that the Ada approach (value semantics) is much better from a software engineering perspective, given that both languages are imperative, and that mutability is the default.

Booch's Ada vs. Liskov's Java: Two Approaches to Teaching Softwa

In comment-39141, Ehud wrote:

For what it's worth, my analysis of Java versus Ada as vehicles of SE education is [$25 at Springer].

Google Scholar says this paper is currently available here (among other places). (As EFF's John Gilmore would say, The Net interprets [Springer] as damage and routes around it.)

Computer Science Education

Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods.

I rather expect this situation, or at least the perception of it, to continue or get worse. I seem to recall having read recently a number of articles advocating the de-emphasis of programming, for example, in order to attract more students (or minority students, or women, or...) to Computer Science. (No, I cannot find any specific references at the moment; I need to check back issues of CACM and Queue.)

Somehow I thought the

Somehow I thought the discussion here would be on the programming languages angle...

Most people don't understand the programming language they use

In my experience most people don't deeply understand the programming language that they use. For example, I have been told that Fortran does not support generics because that would impose a runtime cost. Also, I have seen this used as a way to determine whether a number was negative in C: convert the number to a string and see if the first character was a '-'. Some of the professors I had did not even really understand what they were doing. For example, I was asked by one why their code was not running, and it turned out they were trying to allocate a 2 GB structure (once the sizes of its elements were multiplied out) on a 128 MB machine. I have done some stupid things in my own time, like passing a megabyte array by value (why is my code running so slowly? :).

education should be split into domains

Trying to put all the eggs in one basket is the problem with education. Education should be split into:

1) business programming: learning all those 'easy to use' languages geared towards commerce.

2) computer science: all the mathematics of computer science.

3) hardware-level programming: operating systems, C/C++, real-time systems, low-level programming.

4) software management: life cycles, requirements, testing etc.

Students should be able to mix and match the above. Each domain should have some mandatory courses where the basics are taught.

The problem is that people do not learn well what they are supposed to do. There is not enough specialization. For example, students do not learn enough Java to be considered useful in the business world.

The above implies a strong separation of software making and computer science. In other fields, this is the reality: you don't see a nurse being a doctor at the same time, for example.

Specialization is for

Specialization is for insects. A decent programmer should be able to learn a given language/task well enough to perform what he needs to do, without taking a mess of business Java classes in college. At a trade school, maybe, but a college is no place to teach specific things to "get you ready for the field". That can be (and is) job specific training once you get hired.

That said, college/uni

That said, college/uni compsci courses often don't cover issues like fault tolerance or distributed computing well enough and these are fundamental conceptual issues. Sometimes we're not teaching the right bits of theory.

People are more interested

People are more interested in having a job than in the latest advancements in PL theory. As a software manager, I'd rather have some very specialized Java people (or people specialized in whatever other programming language is needed) who know their tools well, rather than 'computer scientists' who are equally good in all programming languages. I would be happy because I would have a team of experts that can deliver products on time, and they would be happy because they can deliver the goods and be paid well. And then I would only have to train them in the business logic of my company's domain.

I find it rather telling

I find it rather telling that you use nurses and doctors as an analogy. Doctors aren't researchers!

Programmers are not

Programmers are not researchers either.

On the contrary, some

On the contrary, some doctors are researchers, just as some programmers are researchers.

A very small minority of programmers are researchers.

The rest are common folks who happened to be good at it, or who saw an opportunity for a steady, good-paying job, or who got into it for any reason other than research.

The better programmers are

The better programmers are just about researchers, if not in name. They have to do a lot of research to keep their skill set up and build innovative products. Sure, someone who is just in it for the money won't care, but I don't know many of those kinds of programmers who stay in the field for very long.

Also, even general practitioners do some amount of experimentation just to see what works best on their patients. Some are even involved in research of some kind or another.

Experimenters are not researchers.

For example, I experiment with what works and what does not in the context of programming, because I want to find the best solution and tools for my job. But I am not doing any kind of research: I don't have a methodology, I am not performing controlled experiments, I am not formulating a theory, and I am not going to write a document explaining my thesis.

In short, research != experimentation.

I completely disagree:

I completely disagree: research is performed by experimentation and exploration and leads to innovation. Having a strong methodology or not depends on whether you are a left-brain or right-brain researcher: it's widely accepted that a lot of innovation comes from right-brained people who rely more on random exploration than on a strict methodology. This applies to any field (ya, even physics!).

The verb "to research" applies to any activity that involves investigation via google, reading papers, etc... If you do a lot of research, then you deserve the title researcher, if not in name then in spirit.

There are differences between a formal researcher and someone who just does research. But the only relevant aspect in your list is publishing.

Open your eyes, maybe even try squinting a little

You are missing the wonderful point Sean McDirmid is making!

Research shows that problem domain experts are no better than average at things outside their problem domain. They accumulate their expertise not simply through experimentation, but through learning and benefiting from history. -- Standing tall on the shoulders of giants!

Think about the simple problem of task estimation. The tasks that you've got a strong grasp on are usually the tasks that you can best estimate. The tasks that you've never done before and are not educated in generally take the longest time period. Will David Mitchell says he uses what he calls Risk Factor Analysis to estimate task length. Simply put, it's a powers-of-2 scale. If you think a task will take 4 powers to get done, then the cost of slipping on that task is 2^5 - 2^4. However, if you finish the task early, your time saved is likely only going to be 2^4 - 2^3.
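To put invented numbers on that asymmetry: if the powers are hours, a task estimated at 2^4 = 16 hours that slips one power costs an extra 2^5 - 2^4 = 16 hours, while finishing a power early saves only 2^4 - 2^3 = 8 hours, so an underestimate hurts roughly twice as much as an overestimate helps.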

You'll find most good task estimation schemes in some way follow the principles underlying this technique.

You'll also find that tasks with high powers are either (a) not decomposed into legitimate tasks, meaning you haven't thought out what it is you have to do, or (b) tasks for which you will need to do research in order to complete them. In the case of (b), you might choose to hire a consultant who is a problem-domain expert in the problem that could become a project bottleneck. Even so, making this determination is not experimentation, but a form of research. You wouldn't hire 3 different consultants and be the Goldilocks of project management, finding the consultant who is just right. You're expected to hire the right consultant!

Term dilution

While I understand where you're coming from, I'm not sure it's useful to (effectively) dilute the term "researcher" as much as you have. By your argument, pretty much anyone working in a technical field who is serious about their job is a "researcher". Yet there are qualitative differences between working as a researcher in an academic institution or research lab and working as a practitioner. Sure, there is a certain amount of overlap between the two kinds of jobs. But they are different jobs, with fundamentally different goals.

Fundamentally different goals

Yes, and isn't the well-known gap between the two proof of this claim? Notice, too, the technology-transfer groups that universities and big companies staff up to try to bridge said gap somehow.

Still, it's good to see some places (the Googles of the world) acting as exceptions that prove such rules.

Complete your argument

Yet there are qualitative differences between working as a researcher in an academic institution or research lab and working as a practitioner. Sure, there is a certain amount of overlap between the two kinds of jobs. But they are different jobs, with fundamentally different goals.

s/working/means/

I just need to point out the logical flaw that creates a hole in your argument and keeps it from being complete: you are conflating means and ends! I hope you see how much substance your point is lacking. While it's true that the ends might be different, what argument do you have for the means needing to be different? Complete your argument.

It seems to me as if the authors of the article we are discussing are arguing that practitioners and researchers should use the same means. That would contradict your earlier comments. I don't want to turn this discussion into philology, either. I'd say drop the criticism. We already have certifications -- master's degrees and doctoral degrees -- that confer the meaning you are worried about preserving.

Perhaps I should have used Z

I'm afraid that you have misinterpreted my comment. I was not constructing an argument that different goals imply different means. My contention was that a "researcher" is different than a "practitioner" (and hence that making a terminological distinction is useful). In support of that contention, I observed that there are "...qualitative differences between working..." in the two kinds of jobs -- an empirical observation based on my own experiences in both academia and industry, although backed up by the observations of others (see here, here, here, here, or here for examples). I separately stated that the two kinds of work have different goals. The qualitative differences are probably attributable to the differences in goals, although I did not make such a claim. Nor are those qualitative differences necessarily differences in "means" (depending on what you mean by that word), so much as differences in the emphasis placed on the various "means" available, and the way in which those "means" are used (at least, that's been my experience -- again reinforced by the observations of others). For a more general discussion of the differences between research and practice, you might look at Walter Vincenti's "What Engineers Know, and How They Know It", which examines the differences between engineering research and scientific research, as well as between engineering research and engineering practice.

[Edit: A key point which Vincenti brings out, and which is probably worth bearing in mind in this discussion, is that in the course of their work researchers may do some development, and practitioners may do some research. The difference is one of focus: for the researcher, development of a product is the means to an end (new research results), while for the practitioner, research is the means to an end (development of a new product).]

It seems to me as if the authors of the article we are discussing are arguing that practitioners and researchers should use the same means. That would contradict your earlier comments.
Not really. There's a difference between developing (for example) formal methods (i.e., research) and applying them (i.e., practice). In my opinion, Anthony Hall's articles What does Industry Need from Formal Specification Techniques? and Realising the Benefits of Formal Methods do an excellent job of discussing the problems involved in applying formal methods, from the perspective of someone who has successfully applied them in industrial practice, and of outlining where more research might be needed to resolve some of these problems.

IMO, the only thing that

IMO, the only thing that separates programmers from researchers (just as it separates doctors from researchers) is interest and persistence.

Doctor vs. researcher

Doesn't it simply depend on the "mode" they are currently operating in, if you will? A doctor is not a researcher when tapping on your knee to check your reflexes. The same person is not a doctor when working in a wet lab with a tech on some very green-field, pure-data research project. And yet the roles blur when the doctor is operating with some cutting-edge (ow) just-out-of-research surgical procedure.

Students want to do better, too.

I graduated from college in May. My entire senior year was spent performing a curriculum assessment of the Computer Science department at my school. In June, my friend and I presented our findings at the first-ever department-wide curriculum assessment meeting. The meeting was held precisely because we rocked the boat.

Why did we rock the boat? Because other students were rocking the boat and complaining that things were too obtuse and not practical. Our courses were being watered down as a result of their boat rocking, because the faculty unintentionally addressed their complaints as though they were all requests to be treated like children, and ultimately we weren't getting proper instruction. When you pay thousands of dollars out of pocket for college, you should RAISE HELL over it. My friend and I read 10 years' worth of ACM SIGCSE publications, picking apart the thinking process of those who actually care. We assessed the history of the Java Task Force. We questioned students about their motivations, and about their ability to recognize the repetition of basic concepts throughout the curriculum.

Here is what we found (shortened version, leaving stuff out):

  1. Students do not understand threshold concepts: indirection, recursive definition, transfer semantics
  2. There is a noticeable lack of differentiation between macroanalysis and microanalysis.
  3. Students do not mind frustration, so long as they are engaged in their studies.
  4. The department had no formal metrics for measuring student achievement and learning progress; all metrics were "from the gut" judgment. After this folly was made clear, the department organized the first-ever curriculum assessment meeting, where we presented our results.

One thing I should mention is that the curriculum assessment took a full academic year. When we finished, the professors wondered why it took until we had graduated to point out everything we disliked about the curriculum. I think this is a touchy question, but my frank answer is that professors lose touch with what it is like to be a student, and students rarely get the chance in undergraduate school to teach something, so they are rarely in touch with what it is like to be a professor. Professors explain the same things hundreds of times to people who are learning them for the first time. Ultimately, it's a catch-22: students who do not excel are not well-equipped to criticize, so they don't, even though they would benefit the most from proper reform, and so they never get a proper curriculum.

It took my friend and me four years to put the pieces together and realize our education was being short-changed. Some people laugh at me when I tell them it took four years to figure out, as if I should be ashamed of how long it took to escape this catch-22. These are the most shameful people of all: they see but they don't understand. They're caught in a catch-22 themselves and can't empathize with those escaping one. I tried to find psychological papers describing this phenomenon, but I couldn't.

They say your future is determined by how well you handle the worst crisis of your life. It's not an opportunity handed to you, but a problem you have to face and solve. Edsger Dijkstra made a career out of it. Dijkstra would talk about software products in which their own creators could have no confidence, and about the assumption that programming cannot be held to a higher standard, a process he described as infantilization. How Dijkstra addressed this crisis is his legacy.

Students found it hard to write programs that did not have a graphic interface, had no feeling for the relationship between the source program and what the hardware would actually do, and (most damaging) did not understand the semantics of pointers at all, which made the use of C in systems programming very challenging.

This statement is WAY too subtle!

  1. If students have a hard time with something, like textual interfaces, then put extra effort into teaching it. Sacrifice depth for breadth of applications of textual interfaces, and demonstrate that mainstream programming languages are textual interfaces!
  2. It's not simply pointer semantics that students aren't being taught well enough. Students do not understand indirection in general (see the sketch after this list).
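
To be concrete, the kind of thing I mean by indirection is no more exotic than the following minimal C fragment (my own example, not anything taken from the article or from a particular course):

  #include <stdio.h>

  /* A value, a pointer to it, and a pointer to that pointer. */
  int main(void)
  {
      int x = 42;
      int *p = &x;    /* p holds the address of x  */
      int **pp = &p;  /* pp holds the address of p */

      **pp = 7;       /* write to x through two levels of indirection */
      printf("x = %d, *p = %d, **pp = %d\n", x, *p, **pp);
      return 0;
  }

A student who can predict that all three expressions print 7, and explain why they all name the same storage, has the core of what the article calls pointer semantics.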

The result is a student who knows how to put a simple program together, but does not know how to program.

Even trivial programs can have pretty complicated architectures that are non-obvious, so it's hard to measure "knows how to program" separately from "knows how to put a simple program together". Let's compare the following two implementations of the UNIX command-line utility /bin/true (a rough sketch of the contrast follows the links):

  • http://opengrok.creo.hu/dragonfly/xref/src/usr.bin/true/true.c
  • http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/true.c
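
Very roughly, and from memory rather than from the actual sources (follow the links above for the real code), the contrast looks something like this:

  /* DragonFly-style /bin/true: the entire program is essentially
   *
   *     int main(void) { return 0; }
   *
   * The GNU coreutils version is conceptually closer to the sketch
   * below: even "do nothing, successfully" picks up recognition of
   * --help and --version, help text, and an explicit exit protocol
   * (the real thing also pulls in the coreutils i18n machinery). */
  #include <stdio.h>
  #include <string.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      if (argc == 2 && strcmp(argv[1], "--help") == 0) {
          printf("Usage: true [ignored command line arguments]\n"
                 "  or:  true OPTION\n"
                 "Exit with a status code indicating success.\n");
          return EXIT_SUCCESS;
      }
      if (argc == 2 && strcmp(argv[1], "--version") == 0) {
          printf("true (sketch, not the real GNU coreutils)\n");
          return EXIT_SUCCESS;
      }
      return EXIT_SUCCESS;   /* true always succeeds */
  }

The point isn't that one version is right and the other wrong; it's that "knows how to put a simple program together" already spans that whole range.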

It seems to me that the authors are understating some crucial concepts, but for the sake of brevity, I won't go into detail here.

In addition, I am disappointed that Robert Dewar and Edmond Schonberg did not mention Bertrand Meyer's extremely hard work and devotion toward solving these problems! Who here has read his upcoming book, A Touch of Class, and said, "this is what we need!"? Also, credit Microsoft for helping fund Meyer's passion. Talk is talk, but Bertrand walks the walk. He has experimented in this problem domain and is, in my eyes, a world expert on teaching formal methods to students -- something he sadly does not get any credit for.

Please try to keep the

Please try to keep the discussion focused on programming language issues. This thread is beginning to stray too far afield.

Follow-up article: "Who Killed the Software Engineer?"

A follow-up to the original article that started this thread has now been published. Who Killed the Software Engineer? (Hint: It Happened in College) is a semi-interview with Robert Dewar, written in response to the reaction that Dewar's earlier article received. It's largely a general discussion of what's wrong with modern CS education in the US, rather than something that dwells on language issues in any depth. However, Dewar does clarify that his problem isn't so much with Java as a first language as with how Java is used in teaching.

But without naming names

The interesting thing about Dewar's claims is that there is factual evidence supporting them. I mentioned in my post above that the Java Task Force's thought process is extremely well documented, and, when my friend and I were performing the curriculum assessment, it provided a huge case study supporting our argument.

College administrators are understandably alarmed by smaller student head counts. “Universities tend to be in the raw numbers mode,” Dewar says. “‘Oh my God, the number of computer science majors has dropped by a factor of two, how are we going to reverse that?’”

There are too many academic papers on Computer Science education that mention enrollment as the primary motivation for "revamping the curriculum". Yet, these papers do not provide many metrics for curriculum assessment. For what it's worth, my friend and I had a very difficult time creating a curriculum assessment from scratch. What we ultimately realized was that the department already did curriculum evaluation, but not assessment. We chose to mimic the National Security Agency's INFOSEC Assessment Methodology.

The lack of metrics is a quality-control issue, and college administrators are partly at fault. No college administrator running a tight ship should allow a department's enrollment to drop by a factor of two without demanding statistical figures showing potentially startling correlations. If more departments were run with the quality-control ideas of W. Edwards Deming, then we'd probably get more academic papers with interesting insight into student success and failure factors.

If what the industry wants are technicians?

Then why do they insist on a college degree?

A few questions to consider:

1) What percentage of jobs in the software field (coder-to-spec, tester, domain expert) can reasonably be filled by people with a technical education, a two-year degree, or "education by practical experience"--as opposed to positions needing more formal education consisting of at least a four-year degree? In many other engineering disciplines, there are numerous roles for sub-baccalaureate workers.

2) Many companies I know of (including where I work) prefer to hire MS graduates into positions where neither advanced research nor specific expertise is needed--there seems to be an attitude among such companies that the corps of BS graduates is largely crappy, and that the cream of the crop is to be found coming out of grad schools. How would this be affected by #1?

3) (A bit US-specific, my apologies.) Many "Johnny can't read" or "Johnny can't write code" studies published in the US document a declining level of achievement in US educational institutions, while assuming improvement (or at least a corresponding lack of decline) elsewhere; yet little research is done on the practices of institutions abroad. It would be interesting to see a comparison of CS departments at American universities (across the spectrum--ranging from places like MIT to places like University of [State]-[Small town where branch campus is located]) with comparable instruction in places like Europe, India, China, or Japan--considering factors such as institutional funding, regional academic traditions, admission requirements, and so on. Has anyone published such a study?

In short--if what the industry wants are techs rather than engineers or scientists--maybe things should be arranged so that it can get them. My only concern is that industry would (foolishly) then employ such techs in positions where a higher level of expertise is called for.

Thoughts?

3) [...] It would be

3) [...] It would be interesting to see a comparison of CS departments at American universities (across the spectrum--ranging from places like MIT to places like University of [State]-[Small town where branch campus is located]) with comparable instruction in places like Europe, India, China, or Japan--considering factors such as institutional funding, regional academic traditions, admission requirements, and so on. Has anyone published such a study?

You are absolutely right that this is important. However, without metrics in place, how do you intend to do a comparison? There are a lot of questions Computer Science department faculty involved in curriculum design are simply not prepared to answer (e.g., something that needs to be measured in the US but frequently isn't: students who took and passed the AP CS exam and do poorly in college CS classes vs. students in general; the opposite analysis is equally important). At best, you can talk to the students in these programs.

I spoke online with students who claimed to be from the University of Mexico and from IIIT-Hyderabad in India. For the sake of credibility, I didn't bring this up to faculty members during the oral presentation, and it wasn't included in any written report, either. Documentation is crucial.

The only non-US program that has caught my attention is the one where Bertrand Meyer teaches (ETH Zurich). I can't emphasize enough that the introductory course he has designed around Eiffel seems like an excellent approach.

1)

I think the widely cited statistic is that roughly 40% of software professionals have a degree in Computer Science or a related field.

Also, the statement that "there are numerous roles for sub-baccalaureate workers" can be recapitulated in terms of who owns the means of production.

2) [...] How would this be affected by #1?

There is absolutely nothing wrong with rejecting potentially under-qualified applicants. As a selection process, it is fundamentally different from the "eugenics" practiced in many commission-based agencies: firing the bottom (least productive) agents each performance-evaluation cycle. The flaw in that practice is best shown by a study on chicken breeding (see: Muir, W.M., and D.L. Liggett, 1995. Group selection for adaptation to multiple-hen cages: selection program and responses. Poultry Sci. 74: s1:101). This paper is amazing to those of us rabid about statistics. In a nutshell, it shows that individual selection of chickens from a cage is inferior to group selection of whole cages; it also shows how easy it is to conflate the two models, and how far removed a resource specialist like a breeder or even an HR manager can be from the consequences of their actions.
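
To see why conflating the two models matters, here is a deliberately crude toy in C. The traits, coefficients, and payoff function are entirely my own invented assumptions, not Muir and Liggett's model or data; the only point is that ranking individual hens rewards exactly the trait (aggression) that ranking whole cages penalizes:

  #include <stdio.h>
  #include <stdlib.h>

  #define CAGES 20
  #define HENS   5
  #define GAIN 1.0   /* assumed: aggression boosts a hen's own output        */
  #define HARM 2.0   /* assumed: a cage-mate's aggression costs her even more */

  int main(void)
  {
      double aggr[CAGES][HENS], out[CAGES][HENS], total[CAGES];
      srand(42);

      for (int c = 0; c < CAGES; c++) {
          double sum_a = 0.0;
          for (int h = 0; h < HENS; h++) {
              aggr[c][h] = (double)rand() / RAND_MAX;   /* trait in [0,1] */
              sum_a += aggr[c][h];
          }
          total[c] = 0.0;
          for (int h = 0; h < HENS; h++) {
              double others = sum_a - aggr[c][h];       /* cage-mates' aggression */
              out[c][h] = 1.0 + GAIN * aggr[c][h] - HARM * others / (HENS - 1);
              total[c] += out[c][h];
          }
      }

      /* "Individual selection": the single most productive hen anywhere. */
      int bc = 0, bh = 0;
      for (int c = 0; c < CAGES; c++)
          for (int h = 0; h < HENS; h++)
              if (out[c][h] > out[bc][bh]) { bc = c; bh = h; }

      /* "Group selection": the most productive cage as a whole. */
      int bg = 0;
      for (int c = 1; c < CAGES; c++)
          if (total[c] > total[bg]) bg = c;

      double cage_aggr = 0.0;
      for (int h = 0; h < HENS; h++) cage_aggr += aggr[bg][h] / HENS;

      printf("best individual hen: output %.2f, aggression %.2f (cage %d)\n",
             out[bc][bh], aggr[bc][bh], bc);
      printf("best cage: %d, total output %.2f, mean aggression %.2f\n",
             bg, total[bg], cage_aggr);
      return 0;
  }

With these made-up numbers, the top-ranked individual tends to be an aggressive hen surrounded by docile cage-mates, while the top-ranked cage is a docile one; breed from the former and the next generation's cages do worse, which is the kind of trap the paper documents with real birds.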

Note that Google often rejects seemingly qualified candidates, even from the cream of the crop. This is vastly different from the modeling errors demonstrated in the Poultry Science paper.

My only concern is that industry would (foolishly) then employ such techs in positions where a higher level of expertise is called for.

This is no different from hiring a Harvard MBA with no time-management skills. s/expertise/productivity/ Someone who accidentally doubles their time commitments over and over again will get half as much work done as a Pace MBA. In the real world, subordinates hate being held accountable for productivity while their managers have a policy proposal sitting on their desks unread for a week. In keeping with the chicken-selection example, it seems to me that the best hiring practice is to see whether the candidate can actually solve the problem. Not long ago, Miguel de Icaza wrote about how he hired developers for Mono development, and how the interview consisted of a take-home question that had to be submitted within one week.