Software complexity as means of professional advancement

According to a Management Insights study, software designers have incentives to pursue more complex solutions than necessary.

I guess that the language and language platform are part of that complexity... I did not read the actual paper (it's not a free download).

Ruediger Hanke provided this link, which better details the scope of the study. Notably:

Drawing upon existing theory, we subject participants in our experiments to a context in which they act as agents for a principal, and have to convince this principal of their capability in order to reap financial incentives. [...] Theory suggests that the difficulty of the task influences their expected gain in reputation. The data from the experiment supports that people have a higher propensity to choose a difficult solution as the expected reputation they can gain by choosing this solution increases.



I can't read the paper either, and am unwilling to fork over money to do so. :) Unfortunately, I suspect this is yet another one of those "don't trust programmers" broadsides that seeks to paint the profession as thoroughly unprofessional, and informs managers that deadlines for programmers ought to be tightened (and purchase requests for better tooling and infrastructure should be denied), because, after all, we're all just a bunch of treasure-hunting, sandbagging frauds who can't be trusted. (This meme also seems to be a common one in the interaction design/usability engineering community--many published works there advance the notion that user interfaces are poor because programmers are lazy or incompetent; two notable examples in the literature are Platt's Why Software Sucks and Crawford's The Art of Interactive Design).

And, I suspect, the paper doesn't really say anything new. The abstract linked to above uses the quantifier "many", a word which suggests a significant subset but could mean "more than one". (The paper may be more specific; abstracts for papers appearing in paid journals will often exclude the real meat, so that you have to pay to read the paper to gain any real understanding).
Certainly there are software professionals who act in ways which may be contrary to the interests of the client; but this is true of *all* professions. (And in some professions, being a "professional" implies a responsibility to society at large, which may be contrary to the interests of the immediate client. Accountants, lawyers, and civil engineers, for instance, all have expectations that they won't assist a client in malfeasance or fraud).

But let me ask the group here. How often have you, or your peers, done things like selecting inappropriate technologies for a given project because they look good on the resume? Do you see bad programmers preferring difficult tasks in order to hide the fact that they are bad? (And do they succeed at it?)

I've occasionally seen programmers looking for solutions that were "cool"; I've also seen programmers designing solutions that were more abstract than necessary because "we might need it someday". (I've been guilty of the latter earlier in my career; experience has taught me to have a higher threshold for when to "go abstract"). I have yet to meet the designer who selects an unusual design or technology solely in order to augment his or her resume (most of the career-focused programmers I know want to get the job over with as quickly as possible). And the bad programmers I know generally shy away from learning anything new; it's the too-clever-by-half sort that you have to worry about.

On the other hand, I've been on numerous projects that suffered significant delays and overruns where key technology decisions were made by management fiat--specifically, where managers required the use of a new technology because of glowing efficiency claims concerning the technology in the trade press (claims which were often little more than advertising masquerading as literature), or a desire to jump on the bandwagon of the monopolist du jour. In these cases, the designers wished to stick with the tried and true, but were overruled by managers who ultimately shot themselves in the foot with a silver bullet. (A phrase that really ought to be introduced into the lexicon!).

Not exactly with programmers...

But I have seen teams in IT "architect" solutions with far more complexity than warranted. It's usually done in the name of making a "robust" solution. Application vendors also seem to pander to the desire to put a complex solution in place. I think some of it certainly stems from a desire to have worked on something complex as a career builder, although it's also just good-old-fashioned over-engineering.


But I've also seen teams select an architecture which is attractively simple, but does not scale to production loads. Given that the current level of software engineering doesn't yet admit accurate prediction of design schedules (at least compared to other engineering disciplines), the best defense is a combination of: a) experienced architects who understand the scalability requirements and how to architect an appropriate solution; b) domain experts who can generate and vet requirements, and translate them into software space; c) reuse of existing architectures and components wherever possible; and d) use of tools which are mature and admit higher productivity. PLT only affects d), of course, and the programming language is only one of several tools to be considered.

Many times, projects go wrong despite "best effort" performance from a sufficiently-qualified team. Sometimes, it's because of unrealistic expectations (scope, schedule, or budget) from stakeholders. Sometimes it's because of optimistic predictions from the designers.

And sometimes, the shit just plain hits the fan.

I suspect that the paper doesn't say anything

You may well be right on this. Thank you for asking the right questions.

On the other hand, the author seems to have a solid academic background.

Well, I'd hope...

... that one's management was willing to try different approaches in order to improve things. Granted, the notions of what to try should probably be driven by technologically savvy people in the first place, but the selection of technology is both a technical and business decision and needs input from both sides. Recently, though, I too see more "techno-pressure" come from external vendors than from internal corporate sources. This worries me. Employees are not taking the time to familiarize themselves with new technology, or they are too satisfied with current technology. Granted, most companies do not help with this, keeping people "fully committed" to the work at hand so that any new technology must be absorbed on the employee's own time. But I also think people's curiosity doesn't seem to be what it used to be (but that's probably because new stuff just isn't that exciting - when you start with XML, what's to love?).

Anyhow, to bring this back on-topic for the group: given that you can write bad code in any language, is there anything (beyond conciseness, modularity, separation of concerns, and general hygiene) that a language could do to help prevent overly complex designs?

I'd suggest the Java approach... :)

and be intentionally minimalist and verbose such that any attempt by programmers to be clever and cute is thwarted by the sheer amount of typing that must be done (Java is the opposite of Lisp in that it makes the trivial painful and the difficult impossible), but that might be misunderstood as flamebait. :) There is merit in a language which is, to use a term that is often seen as inflammatory, "idiot-proof", but Java isn't really a good example of such, especially in its approach to concurrent programming.

But I suspect that wasn't the answer you wanted.

But there are a few good lessons to be learned from Java and its brethren:

* Provide a complete set of quality libraries, and ample documentation. No language, no matter how clever and concise its feature set, is production-ready if a programmer has to figure out for himself how to implement things like simple collections, networking, or whatnot. (And documentation is a key deliverable here; a networking API that only a few people can understand is likely to be reimplemented poorly). The more good wheels are provided, the less temptation there is to re-invent them.

* Enable the domain expert. The ability to make good, safe abstractions is key. Many languages include facilities for abstractions which are brittle and difficult to debug. Much poor design is introduced when translating domain requirements to software requirements.

* Be multi-paradigm--or barring that, understand your niche. I have yet to encounter a PL paradigm which is well-suited to all problem domains. PL experts have a longstanding love of functional programming, partly because it's well-suited to our problem domain--but writing PLs is not the problem that most programmers are trying to solve. Some domains are well served by "niche" languages or paradigms.

* Be extensible--and suitable for refactoring and maintenance. In particular, provide the ability to migrate from a concrete design to an abstract one. Sometimes, complexity is driven by the need to "get it right the first time". In the example I mentioned in my original post, many programmers make designs more abstract than they need to be because "we might need it some day". Of course, many of them have been bitten trying to re-work an initial design that wasn't abstract in order to cover a new unexpected case, only to find that the language makes the rework difficult--hence the desire to build the abstraction in from the start.

And of course, the question arises: how do we know when a design is "overly complex", as opposed to merely complex (due to a complex set of requirements)? The mere existence of a simpler (by some metric) solution that meets the requirements? The existence of certain "red flags"?

See Java

Java programmers get paid more than most other programmers. Is this because the work they do is fundamentally more difficult, or because the Java platform is needlessly complex? Ruby on Rails was motivated by the premise that the latter is true, and this is in line with my experience.


I suspect it's because the Java programmers you cite are writing Big Boring Business Software(TM) that is solving particular applications' needs, stuffing their heads full of legislative constraints, day-treatment regimes for bond valuation or other sets of nasty, picayune, boring yet critical business details rather than getting to play with technical problems and their correspondingly more tractable definitions of "better". Most everyone loves iTunes, but relatively few CEOs are going to shell out from their budgets for that or Compiz: they need new payroll and logistics solutions, and are paying for them. COBOL is out; Java is in, and the salaries are reflecting this.

Just my $0.02 ....

You can try...

but somehow bondage-and-discipline languages (Pascal being the best-known example) tend to lose ground to worse-is-better languages.


This should have been a reply to fadrian.

No proof

You imply that bondage-and-discipline languages have lost to worse-is-better languages because of these characteristics, but this is far from obvious:
- Pascal lost to C because there was 'one C' and many incompatible extended Pascals, with 'standard Pascal' being a toy (it didn't even include dynamic allocation of variable-length arrays, if memory serves), and because Unix was a success.

- Ada lost because the compilers and tools were much more expensive than the C++ equivalents, and because C++ was seen as C's successor.

Java is more 'bondage-and-discipline' than C++, yet it has succeeded because C++ has turned out to be not so good.


A further hypothesis would be that B&D languages "lose" to worse-is-better languages because of the assumption that B&D languages lose to worse-is-better languages.

I am pretty sure this is at least part of the reason. This is also why we should be careful when expressing this view as a proven fact.

Of course, "B&D Language" and "worse is better" language...

are both Potteresque terms that need more precise definition in any substantive debate. (Here, "Potteresque" refers to US Supreme Court justice Potter Stewart, who famously wrote "I know it when I see it" in a 1964 court ruling on obscenity).

Given that the archetypes for both language families--Pascal and C--are both rather long in the tooth, it would be interesting to see how more modern programming languages are categorized in these bins, if at all.

Naturally. Which is a


Which is a further argument in favor of my hypothesis that the distinction creates reality, not merely reflects it.

Agreed. Twice.

I also believe that your point holds at another level.

Java resembles C (curly brackets, "statement" orientation, etc.) at least in part because of the belief that such resemblance would be necessary to its acceptance by programmers with a C/C++ background. So the "worse-is-better wins" belief has even shaped the newer alternatives.

A generation of language designers has been taught to ask, "Do you want curly brackets with that?" ;-)

The Java syntax is not much

The Java syntax is not much different from C syntax.

Considering the goals of the Java language, do you think it did not make sense for the Java language designers to rely on programmers' familiarity with C syntax? (see notably the third page)

I believe that "pragmatic" is not "worse"; it is better to go with a non-optimal solution if, in the end, it makes it easier for newcomers to understand the code.

That is different from the premises of the original post, which have more to do with obfuscation than with clarity and maintainability.

not quite

Vis-a-vis Java vs. C++, I think you get caught on your own argument: Java had the Smalltalk zealots pushing hard for it (and rightly so), but the reason it won was the exact same reason that you cite for Pascal losing. I don't know if C++ compilers have all caught up to the spec or not, but code that should be "portable" between compilers even on the same machine/OS often isn't (wasn't?), whereas Sun provided a reference Java implementation on every platform that businesses cared about. Java written on Windows ran on Solaris, AIX, Windows, etc., without the "ok, now which implementation of Java supports these features?" nightmare. Then add a set of libraries that allows the programmer to focus on the business application instead of re-inventing red-black trees....

That kind of practical concern completely trumps the specific characteristics of the language where business adoption of the language is concerned. When a "better" language becomes not only as practical, but more practical, then it will be adopted, but not until. Until then, momentum will favor the COBOL du jour.

I think portability was

I think portability was definitely a huge incentive. Which raises the question: would any language with that degree of portability have succeeded in place of Java? For instance, an ML or Smalltalk variant, with a free compiler and runtime. Was static typing necessary? Was dynamic code loading? Or was the marketing the major driver?

A sweet spot

IMO Java hit a sweet spot between static typing and dynamic code loading. Neither ML nor Smalltalk would fit the bill. Many programmers clamored for static typing as a performance solution; only recently have we started to get past the myth that static typing is the only way to achieve good performance. That rules out the Smalltalk of the day (except for Strongtalk, which would have fit the bill nicely but had the bad luck of coming out a couple of years after the train started moving). OTOH Java was initially marketed as an embeddable solution, usable in applets, devices and such; the marketing hype was focused on dynamic code loading of applets as the way of the future, and ML didn't have any of that (or a clean way to do it). Also, ML is an academic language, favoring clean and well-understood solutions to problems over half-baked hacks with plenty of rough edges. It's hard to see an ML with some form of dynamic loading that resembles classloaders as they were initially specified; it took the ML community more years than industry wanted to come up with the clean solutions, like Alice ML.

If we ignore the C-like syntax issue (which I think is important but not a critical factor) there were a couple of languages (IMHO) that could have succeeded instead of Java: Dylan and Objective C. Both would require some cleaning (e.g. Objective C would have to lose C source compatibility) but they have a good mix of dynamic and static features similar to Java.

It's hard to see a ML with

It's hard to see an ML with some form of dynamic loading that resembles classloaders as they were initially specified

Dynamic type checking can always replace a statically type safe solution, when the latter isn't available. I think an ML with runtime types a la Java would only need what OCaml already provides: a Marshal module. But one that does proper type checking instead of segfaulting like OCaml's. :-)

It's not an ideal solution, but it would have provided most of what Java brought to the table in this department. Heck, SML/NJ already provides this sort of pickling for compiling programs IIRC, and has for some time. They just needed to pickle to a bytecode instead of native code.
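For what it's worth, Java's own serialization illustrates the dynamically checked variant of this: the pickle carries no static type, and the reader recovers the value with an ordinary runtime-checked cast rather than segfaulting. A minimal sketch (the `PickleDemo` class and helper names are mine, for illustration only):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class PickleDemo {
    // The write side is untyped: any Serializable object goes in.
    static byte[] pickle(Object value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(value);
        }
        return bos.toByteArray();
    }

    // The read side returns Object; the caller's cast is a runtime
    // type check, so a wrong guess raises ClassCastException instead
    // of corrupting memory.
    static Object unpickle(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = pickle(Integer.valueOf(42));
        Integer n = (Integer) unpickle(bytes); // checked at runtime
        System.out.println(n); // prints 42
    }
}
```

The safety comes entirely from the dynamic check at the cast, which is exactly the trade-off being discussed: no static guarantee about what comes out of the pickle, but a well-defined failure mode.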

Timing is always the problem

Did SML/NJ and OCaml have this kind of program pickling back then? IIRC it came later. Also, not only pickling: runtime reflection is also a selling point of Java. It's very easy to fall back to reflection when the type system "gets in your way"; such pervasive reflection is against the spirit of ML, but was essential in the creation of tools and libraries in Java.

Marshaling may have come

Marshaling may have come later (though not for SML/NJ, I think). My point was simply that such facilities are relatively easy to add compared to building a full language and its libraries in the first place. Someone just needed to start with SML and add runtime types! :-)

As for reflection, I never found myself using it due to the type system, but only due to the verbosity of Java/C#. O/R mapping is the poster child for reflection, as is serialization. I can't imagine that a DSL via which one could express such relations and have them persisted would be any more verbose than O/R mapping files, and you'd be able to stick with the same language instead of escaping out to XML.
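As a rough illustration of why reflection is the poster child here, this is the core move of a reflective serializer or O/R mapper boiled down to a toy (the `Point` class and `toColumns` helper are hypothetical, not from any real framework):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReflectMap {
    // A stand-in for some persistent class; any fields would do.
    static class Point {
        int x = 3;
        int y = 4;
    }

    // The trick behind reflective O/R mapping and serialization:
    // walk the declared fields at runtime and read their values,
    // with no per-class mapping code written by hand.
    static Map<String, Object> toColumns(Object row) throws IllegalAccessException {
        Map<String, Object> columns = new LinkedHashMap<>();
        for (Field f : row.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            columns.put(f.getName(), f.get(row));
        }
        return columns;
    }

    public static void main(String[] args) throws Exception {
        // Prints the field values, e.g. {x=3, y=4}
        System.out.println(toColumns(new Point()));
    }
}
```

One generic loop replaces a per-class mapping file, which is why reflection became the default tool for this job despite its lack of static guarantees.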

Using reflection for "duck typing" to get around the type system seems to be a more recent fad.

Runtime types

My point was simply that such facilities are relatively easy to add compared to building a full language and its libraries in the first place. Someone just needed to start with SML and add runtime types! :-)

Well, runtime types for a language like ML with its rich, fine-grained structural, polymorphic and higher-order type system are much more difficult to do (and to do efficiently) than for a simple nominal type system like in Java. There also is much more potential for breaking some good properties. Keep in mind that Java's type system basically provides no static guarantees at all as soon as you really start loading stuff dynamically. That would be unacceptable for ML.

(Of course, I still agree that ML should provide it. :-) )

Well, runtime types for a

Well, runtime types for a language like ML with its rich, fine-grained structural, polymorphic and higher-order type system are much more difficult to do (and to do efficiently) than for a simple nominal type system like in Java.

You're probably right, and full runtime types may not even be necessary to get us most of the way there. Your work with Alice ML is identifying a minimal number of runtime checks needed, but even some more conservative estimate probably would have sufficed. I recall reading about such libraries being built for OCaml for instance.

There also is much more potential for breaking some good properties.

Which I think is a great disincentive for using runtime types unless they're absolutely needed. :-)

Keep in mind that Java's type system basically provides no static guarantees at all as soon as you really start loading stuff dynamically.

There are some guarantees, are there not? Loading a class that declares it implements a certain interface but in reality does not would not verify.

Not quite

There are some guarantees, are there not? Loading a class that declares it implements a certain interface but in reality does not would not verify.

Not quite. At runtime, classes and interfaces are represented only by their names. The implements check really only consists of checking that the class was declared to implement an interface of the same name (and that such an interface exists). There is no structural signature check. There is no guarantee that an interface or class A at runtime resembles the A you compiled against in any way! The existence and types of individual methods or fields will not be checked until you actually try to access them. Hence, every single object access potentially leads to a NoSuchMethodError or the like. In other words, Java effectively is a "dynamically typed" language (or a hybrid, if you want).
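This name-based linking is easy to observe through the reflection API, which surfaces the same deferred resolution the JVM performs. A small sketch (using `java.util.ArrayList` only because it is guaranteed to be present):

```java
public class LateLinkDemo {
    public static void main(String[] args) throws Exception {
        // Load a class purely by name, as a classloader would.
        Class<?> c = Class.forName("java.util.ArrayList");

        // "Implements List" is recorded as little more than a name:
        // we can only ask whether the declared interfaces match it.
        boolean declaresList = false;
        for (Class<?> i : c.getInterfaces()) {
            if (i.getName().equals("java.util.List")) {
                declaresList = true;
            }
        }
        System.out.println(declaresList); // true

        // Individual members are resolved only when actually looked
        // up; a missing method surfaces as an exception at that
        // point, not at load time.
        try {
            c.getMethod("methodThatNeverExisted");
        } catch (NoSuchMethodException e) {
            System.out.println("resolved only on access");
        }
    }
}
```

Nothing here verifies the shape of `ArrayList` against what the caller was compiled against, which is exactly the gap described above.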

That's just crazy. I'm more

That's just crazy. I'm more familiar with .NET on the CLR, and I'm pretty sure it verifies all bytecode while loading the assembly, as I've run into verification errors of this sort. Though perhaps I'm confused and they were versioning errors.


I was talking about reflection as a poor man's SYB. Also the JavaBean hype required quite a lot of tooling to create introspectors and such, all built upon reflection. Much of the enterprise Java stuff uses reflection, to add interceptors, decorators, etc., which would require a much better type theory than we had available.

Much of the enterprise Java

Much of the enterprise Java stuff uses reflection, to add interceptors, decorators, etc., which would require a much better type theory than we had available.

Would it? To do it all within the language, I agree. But there are many instances where primitives, such as a proxy, could just be provided by the runtime by a primitive "wrap" function. The runtime would need to be aware of this usage, and no doubt this would slow down the implementation when used, but I think it would be possible.
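Java did eventually ship exactly such a runtime-provided "wrap" primitive: `java.lang.reflect.Proxy`, which conjures an implementation of an interface at runtime and funnels every call through one handler. A minimal interceptor sketch (the `Greeter` interface and `traced` helper are made up for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class WrapDemo {
    interface Greeter {
        String greet(String name);
    }

    static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "hello " + name;
        }
    }

    // Wrap any Greeter in a runtime-generated proxy that intercepts
    // each call -- the kind of decorator/interceptor that enterprise
    // frameworks build with reflection.
    static Greeter traced(Greeter target) {
        InvocationHandler handler = (proxy, method, args) ->
            "[traced] " + method.invoke(target, args);
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            handler);
    }

    public static void main(String[] args) {
        Greeter g = traced(new SimpleGreeter());
        System.out.println(g.greet("world")); // [traced] hello world
    }
}
```

The proxy class is generated by the runtime, so the wrapping happens entirely outside the static type system, just as suggested above.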

Past vs. Present vs. Future

I'm not disagreeing about the state of type theory today: AFAICS we can have a Java-like (i.e. OO, nominal subtyping) language typed to a satisfactory degree. Actually I think this language is hiding itself inside Haskell these days (its type system hides many interesting things). But we didn't have this knowledge when Java came up, and nobody would butcher ML with an unsafe reflective system (which is my main point). Today I'm more than happy with polytypic programming, first-class labels and gradual typing. If we combine these three and add a healthy dose of dependent types, we could have a type-safe reflective system a la Java. There's still much work to do, to cover everything described in types and reflection, but IMO it's a matter of a decade at most (you can see how confident I am ;).

But we didn't have this

But we didn't have this knowledge when Java came up, and nobody would butcher ML with an unsafe reflective system (which is my main point).

Right, and my point was simply that we'd probably have been better off with a butchered ML instead of Java. :-)

It would've been worse

Today people say "Java sucks therefore any kind of static type system sucks, let's go dynamic forever baby" and we can point to Haskell and ML to show that real static type systems don't suck. If it was a butchered ML people would say "AntiML sucks therefore any kind of static type system or functional programming sucks, let's go dynamic and imperative forever baby!". I know which of those I prefer to hear ;)

Not sure that would hold any

Not sure that would hold any water, since ML programs that didn't make use of these dangerous features would be perfectly safe. So people would be saying "reflection sucks!", not "ML sucks!". Then there would have been more of a push for safe reflection, which I heartily agree with. :-)

I suspect so

Bear in mind the huge amount of Visual Basic and other proprietary "4GL"s that were out there at the time. My personal belief is that Java's on top now pretty much because it was the first out of the gate that met the requirements. The consultants I worked with all loved Smalltalk, and a number of us were very excited about Objective C, but licensing and platform issues limited our ability to adopt either. Our client, MCI, wasn't shy about using whatever languages would be effective, but when a Java applet came along that was written on Solaris but worked perfectly on our Slackware and Windows boxes, that was all she wrote. Compared to the whole "PowerBuilder means Windows, but CSet++ means Warp or AIX, but ..." issues, their management was thrilled to avoid each new project's language choice automatically having potential hardware ramifications. I'm not an industry analyst, so I can't speak to the broad trends, but from where I was standing Java became an instant no-brainer not because of ANY of its linguistic characteristics (these guys were happy to have us code in Microsoft Access(!) if it got the job done), but strictly because of its portability. The relatively high productivity of its libraries sweetened the pot, but portability outweighed everything.

From a management perspective, "write once, crash anywhere" was a much better deal than the alternatives. :0)

I don't recall what state

I don't recall what state Python or Perl were in at the time, but I imagine the fact that Java was compiled, or at least partially compiled, made it preferable to scripting languages as well. Since you were working in the industry at the time, were any scripting languages considered? At least Tcl must have had a somewhat usable implementation, for instance.

not really

Bear in mind that my experience was mostly business apps, not other uses. No-one had heard of Python yet, and scripting languages, especially Perl and Tcl were popular with folks hacking things together on Unix machines, but I didn't personally see any penetration with them. I suspect that was largely a factor of managers being unaware of them unless they came from a technical background, which precluded their being in charge of the kinds of business units they were running. The bias was towards things that could run on Windows since the hardware was so much cheaper (You could recycle your older Windows boxes to run NetWare when the big 66MHz 486's came out) and because showing it to managers didn't mean taking a field trip to the "technical offices". This actually wasn't as unfair as it might sound because, culturally speaking, the more technical folks there turned up their noses at the kinds of mere "business" problems (like auditing or billing issue resolution tracking) that we were being paid to do; just getting the time of day from the technical side of the place wasn't easy.

This wasn't that long ago! I'm younger than most of y'all but suddenly I feel like an old man.... :o)

Well, I'm asking only

Well, I'm asking only because I was still in high school at the time, and didn't even own a computer, so I'm curious. ;-)


As if it were news that individuals' interests do not necessarily align with a company's interests!

This isn't specific to software designers either: have you never witnessed several divisions fighting within a company?
It ain't pretty.

You mean to tell me...

that a company's executives might be "managing earnings" in order to best meet individual performance expectations, and/or expectations of the financial markets, even if counterproductive to long-term financial performance?

That a company's salespeople might delay booking a large order until the following month, if they've already met their quota for the month?

That any employee who travels on business might eat at nicer restaurants while on the company's dime than they would normally eat at were they paying the tab?

I'm shocked! (Of course, the truly shocking thing is that we nowadays accept a corporate ethic that considers shareholders and creditors to be the only parties with a legitimate interest in a corporation's performance and/or behavior; but enough lefty ranting for today).

Yes, LtU is quickly becoming

Yes, LtU is quickly becoming a forum for discussing politics...

But I can't resist one further comment on this: both the PL market and the PL innovation discussed in this thread are dominated by the US. How are things in different places, where "the market forces are always right" is not treated as a fact of nature?

By the way, notice that quite a lot of fundamental PL innovations indeed come from places outside the US. This is, of course, a further issue yet.

I'm not sure how national issues matter...

...other than the fact that virtually all major production languages, when they use keywords, use keywords derived from English. Of course, the lingua franca of multi-lingual Internet communities, and academic communities (online or otherwise) in many fields of study, also seems to be English.

This includes postings to LtU, a forum which includes many participants from outside the US. (A question for Ehud: Do you ever publish in Hebrew? If not, why not?)

And I'm not sure whether or not "market forces" are right or wrong, anyway, or how that observation is locale-dependent. Market forces--an observable economic phenomenon--are somewhat like gravity: neither right nor wrong, just there. Suggesting that governments ought not interfere with "the market" is like suggesting that we shouldn't build airplanes or other machines which permit heavier-than-air flight. On the other hand, suggestions that market forces can be dispensed with at a whim are equally foolish. Unfortunately, economic policy is one realm where descriptive analyses and prescriptive policymaking are often confused.

I suspect that if we could somehow control for the current position of the US in the global software industry, we might find that national issues don't matter much. Dijkstra wrote many entertaining rants on the subject, such as this interview, this paper (warning: offensive language, especially to those in the US, and extra especially to African-Americans), and of course, this paper (see also this thread on LTU), but I'm not sure which--if any--of his observations hold water. Probably a bigger effect is generated by the open source movement--a truly global phenomenon--and the resulting "democratization" of programming. (Now that would be a good thesis topic for someone...)

Social science

Scott, let me first make it clear that I did not mean to imply anything bad about the US or any other country; I merely acknowledged that there are cultural differences. You are, of course, correct to point out that the software world is global. However, there are more dominant players (and markets) and less dominant players. Importantly, the success of innovations is not solely the result of the "quality" of the innovation; the place and timing are, of course, also important. If one wants to really study how innovations are made and how their use spreads, these factors need to be considered (even if, after careful study, they are dismissed).

As specific examples, related to the current thread, consider the following. If a language-related innovation does not become popular (and suppose there is money to finance further work, never mind for the moment from where), should research continue or be reassessed? In some cultures, lack of popularity is an important factor. Other cultures in fact think (or at least thought) that being popular is a sign of being low-brow, and that not being popular is a positive indicator (and more resources should be spent on the innovation). So it does matter whether the innovation we are considering is the product of a place with the former type of culture or the latter.

As a second example, consider the generalization which began this discussion. Assuming that the generalization holds (and B&D languages are less popular), what does it tell us? It can mean many things, of course. But is the first conjecture going to be that it tells us something about languages, about programmers, about the software industry, or about something else? Since in-depth research is not usually involved, the gut reaction is going to determine the type of conclusion we draw from the generalization. As I said, I think different types of "gut conclusions" will be more widespread in different cultures.

Once again, lest there be a misunderstanding: I am not arguing which conclusion is right or better, nor am I suggesting that the specific conjectures above are correct, merely that cultural factors are involved.

I didn't think that

you were slandering any particular place. You seem to be far too thoughtful for that, which all of us appreciate.

Cultural influences no doubt affect what research gets financed and what doesn't, and the "success" of a research program, however defined, will certainly influence that outcome. A lot of it, however, is simple politics--who knows whom, the reputation of the researcher and his/her institution, and such.

I'm not sure I've ever encountered an academic culture where success makes something passé. I see such attitudes all the time in the arts (where certain critics will dismiss any commercially successful work as "crap", as a means of pretending to elite status), but that strikes me as inherently unscientific. But my travels might be insufficiently extensive.

(Leading-edge PLT research is seldom "popular", I might add, if popular means "widely deployed outside academia". Generally, it is better engineering practice to select technologies which have been thoroughly vetted by research, rather than something which is still under active study and has many unsolved issues. That doesn't prevent industry from adopting technologies that are immature--and in many cases which are "hacks" rather than the products of formal study; but that's not the fault of the PLT community.)

Regarding whether or not "B&D" languages are popular: Pascal was for a long time the primary teaching language, and had some production usage on Macs and PCs in the late 80s and early 90s; but it never garnered much lasting popularity and was soon displaced by C/C++ and later Java. I suspect that the quality and incompatibility of the numerous implementations had as much to do with that as any B&D-ness. Java is widely regarded as B&D, yet it still enjoys a great deal of production use. Other than its poorly-behaved multithreading (and a failure to enforce any sane type of synchronization), it is a reasonable language for novices. And I think that you are right--it does tell us a lot about certain segments of the programming community; some of whom are intensely proud of their (perceived) intellect, and don't like being yoked to languages which are perceived as overly picky.

(An interesting metric for B&D-ness, though hard to measure, is the syntactic distance between well-formed programs with differing behaviors. Most B&D languages are designed so that the distance is high; whereas with the "sloppy" languages the distance is often a single token. A second metric for B&D-ness might be the ability to write programs which are well-formed, but either inconsistent or invoking undefined behavior; though there are plenty of languages which are viewed as "expressive" but for which every syntactically-valid program is well-defined).
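To make the "single token" distance concrete, here's a hypothetical Python sketch (my own illustration, not from the thread): two well-formed programs that differ by one token (`[:]`) yet have observably different behavior, because one aliases a list while the other copies it.

```python
# Two well-formed programs one token apart, with different behavior:
# aliasing vs. shallow-copying a list.

def alias_version(data):
    copy = data        # binds the *same* list object
    copy.append(99)
    return data

def copy_version(data):
    copy = data[:]     # one extra token: a shallow copy
    copy.append(99)
    return data

assert alias_version([1, 2]) == [1, 2, 99]   # caller's list mutated
assert copy_version([1, 2]) == [1, 2]        # caller's list untouched
```

By the metric proposed above, this tiny syntactic distance between divergent behaviors would count as evidence of "sloppiness"; a B&D language would force the two intents to look very different.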

Hanlon's Razor

Simple solutions are sometimes harder to devise than complex solutions, because simple solutions tend to be more general. DSLs are "simple", but devising one can be very complicated. In every convoluted tower of if statements and twisty maze of dispatches, some small, elegant DSL is no doubt screaming to get out. :-)

So I think Hanlon's Razor applies here, which isn't to say that programmers who can't see the implicit DSL are stupid, just that their intentions are not malicious. Being embroiled in all the nitty-gritty details, which we're sometimes figuring out on the fly, we can't see the forest for the trees. Designing simple solutions is just hard!

wow! appropriate snarkiness!

It's a little more complicated, imo.

In part, it's almost the opposite -- people oversimplify for careerism -- but the outcome is the same: too much needless complexity.

Again and again and again some more, the industry gets into downward spirals of exploiting the "easy happenstance". For example, as the "Web 2.0 bubble" was inflating in the Bay Area, you couldn't throw a rock without hitting someone working on a start-up that was yet another LAMP-stack hack, most often some niche-oriented "social networking" play. Only a tiny percentage get funded, of course, but the theme here is "what can be done with 2 guys and 10 months and some marketing savvy." So the emphasis is on things that are simple, not complex, to get started.

Yet, at the same time, complexity rears its ugly head. Neglect of systems software research means that the LAMP stack doesn't get anywhere near the challenge it deserves. Sure, given that stack there is a wide range of easy plays, but given a sensible approach the space of easy plays would be much larger, and today's start-ups in that space would be tomorrow's high school student projects. Complexity also strikes when one of the "easy plays on a crap stack" takes off--and then people try to extend and evolve it, and therefore start feeling the pain of the precarious stack on which they found their easy play.

I think that when it comes to socially irresponsible careerism the thing I see most often is people who latch on to some "simple trick" and try repeating it for as many times as the bewildered will continue to pay. So the incentive is really to aim for simple and repeatable, not complex. Yet, when you iterate the consequences of that incentive enough times you get "easy and simple in the moment" but "a complex mess in the non-immediate time frames". It's a perfect recipe for building out a ruinous infrastructure.


The article's premise seems inherently dubious

According to the press release about the report, "many" software
designers "intentionally create unnecessarily complex products", or
"choose somewhat more difficult designs", to "further their careers".

How does this further their careers? If they're good, they do it "to
better prove their talents"; if they're bad, they do it "to obfuscate
their lack of talent".

I wonder how one could possibly prove this, or even demonstrate it.
If anyone decides to shell out the $22 for the article, it would be
interesting to know. ("The Hidden Perils of Career Concerns in R&D
Organizations" by Enno Siemsen of the University of Illinois at Urbana
Champaign. MANAGEMENT SCIENCE Vol. 54, No. 5, May 2008, pp. 863-877)

I have certainly seen software that is more complex than it needs to
be and ought to be. I have never seen evidence that it was written
that way intentionally, by someone knowing that he or she was doing a
technically worse job, in order to achieve career advancement. To do
this would be tantamount to sabotage.

Who judges a software developer's code, and might affect that
developer's career based on his or her judgement of the code? Usually
the developer's peers and/or manager, who is in an excellent position
to judge whether the complexity of the software is justified or
whether it detracts from overall quality.

An organization that rewards their software developers for creating
code, designs, or products that are too complex should be blaming not
the developers but their own organizational structure.

Me too post

I have never seen evidence that it was written
that way intentionally, by someone knowing that he or she was doing a
technically worse job, in order to achieve career advancement. To do
this would be tantamount to sabotage.

My eyes popped out when I saw that statement, too. It's nothing but an admission by management that they don't even know the product they're getting when a less capable developer can write himself a minivan or guarantee job security by choosing a solution that is way over his own head, blowing budget, schedule & correctness into oblivion.
When management is sane, the careers of these people are not furthered at all...

Napoleon's Theorem

"Never ascribe to malice that which can adequately be explained by incompetence."

Having dealt with many "minivan" creators, I can testify that, at least per my working sample, they have no idea how awful their code is. This is the goad that keeps me at websites like this one, trying to learn things: just in case I'm not much better. :o)

Perhaps you also read this

Perhaps you also read this session abstract from a conference program with the same title and author. It is similarly brief, but it has some additional information:

We use laboratory experiments to test how players with career concerns choose the difficulty of their organizational tasks. Drawing upon existing theory, we subject participants in our experiments to a context in which they act as agents for a principal, and have to convince this principal of their capability in order reap financial incentives. To do so, they can increase the difficulty of their assigned task, thereby lowering their likelihood of obtaining a successful outcome. Theory suggests that the difficulty of the task influences their expected gain in reputation. The data from the experiment supports that people have a higher propensity to choose a difficult solution as the expected reputation they can gain by choosing this solution increases. We compare the participants' behavior in conditions with and without such career concerns. Further, we show that highly capable and less capable agents have different strategies as long as their choice is unobservable. However, the less capable agents tend to pool on the highly capable agents' strategy as their choice becomes observable by the principal. We also show that performance rewards reduce the less-capable agents' propensity to choose the difficult task. Interestingly, though, the highly capable agents choice pattern is unaffected by performance rewards.


Based on reading the abstract, it sounds like the experiment makes it the rational choice to make things harder, explicitly, as one of the rules of "the game". If my boss told me that I'd get a bigger raise if I produced a more complicated design, even if it increases the risk of failure, I might do the same.

The question, though--to what extent does a "more complex design" impress the boss? Certainly no real-life boss (pointy-haired or otherwise) will ask for over-engineering as a means of assessing the skill of his or her staff. There may be some managers out there who are impressed by cleverness; OTOH my bosses (matrix management, y'all) seem to be more impressed with quality work delivered on schedule, and don't award any brownie points for complexity. (And overly complex designs tend to get abused at design reviews, anyway, which the boss does attend).

Is there research which suggests the "model" boss in the experiment behaves like typical managers? If so, then we don't have a problem with programmers, we have a problem with bad managers (who are selecting a rather poor and thoroughly discredited means of evaluating their employees). Unfortunately, as noted in a previous post, many schemes for determining a professional's future compensation and job status have a tendency to reward behavior which is contrary to the organization's interest; designing schemes that don't is a difficult problem. Programming, again, is not unique.

I think it depends on the organization

In many places the person managing the software is in charge not because he or she knows anything about software, but because the manager knows the business domain. As long as the software hits the functional wickets required for them all to deliver properly, such a manager is prey to those who, even if not malicious, are the most convincing. Thus, a lot of interpersonal factors weigh into decision-making that probably should remain purely technical. This cuts two ways, as, in addition to incompetent and overly complex designs, it also often means time, money and resources are wasted on elegant code that a programmer truly believes "must be done right" when the business context really calls for a small hack instead.

The real nasties, which I've seen a lot, aren't the bosses, but consultants' bosses. Line managers just want good projects done on time and on budget. Consulting developers want to write good code that solves people's problems. But the consulting manager had better be generating billable hours or the shark next door will have his job.

The Brain Can Only Process So Much Information

That's what makes complexity bad, especially while you're working. (aka: "Too Many Notes!")

'So much' is the basis of measurement metrics.

Managers like that, but they don't work with the complexity, they measure it; and since maximizing a metric is what makes them look good, they end up maximizing it.

Me, I like simple, so I can use my brain cycles for more interesting things, like making my code more dense and more effective. It's easier to spend 5 minutes figuring out a line than 20 minutes trying to find the line you need in 20 pages...

But it seems that more people prefer hunting to interpreting and understanding, at least judging from most of the code that I've seen, and "terseness" is not how it would be described.

Space complexity and time complexity, as with algorithms, apply to static information as well.