Ed Felten: Why Understanding Programs is Hard

In contrast to his earlier attempts, which I appreciated, this time Felten does not try to prove his point by explaining the fundamental facts of computer science. Understanding programs is hard, he demonstrates here, because the results of some mathematical functions are hard to predict (in his example, a secure hash function).

While this enables Felten to give a short, easy-to-understand example, I think he misses the chance to explain why understanding (and predicting) program behaviour is hard in general. That would bring him back to the fundamental principles he discussed before, stemming from the halting problem. Going this route would also make it possible to relate the issue under discussion to topics more directly related to LtU: how programming languages can help make analyzing program behaviour easier, how languages can restrict possible behaviours, and so on.

Be that as it may, more people outside the profession should come to appreciate that understanding program behaviour is hard. This is something that takes time to appreciate fully, even when you are a programmer. It is one thing to know that finding bugs is hard in practice, and quite another to appreciate the reasons why it is inherently so.

It has always seemed to me

It has always seemed to me that most people have the impression that soft-ware is easy and, obviously, hard-ware is hard. This is an opinion drawn mostly from attitudes and behavior, not fundamental facts. Software has never been easy, but it has obvious advantages over hardware that make it attractive. These "advantages" can often be the problem with software projects.

For example, in a hardware project it can easily be three to six months before you even have something you can turn on and test. Software gives "instant results" from day one. The quick turnaround is what everyone loves. In hardware you are forced to think through "all" the details, because if you get it wrong there is no going back. Certainly this is hard, but it is necessary.

Is it possible that the "advantages" of software are less real than they seem, and that they could become problems? Software is hard, and the "short-cuts" can turn into big delays. There is no substitute for a careful, meticulous process, and that is what is likely to produce good results.

Worse is Better

(I'm assuming that by "meticulous" you mean slower, better-planned design, as opposed to releasing as early as you can.)

With software, it is possible to quickly produce _something_ that can be used and appreciated by other people. This results in a completely different social perspective on software than on hardware, and it is one of the reasons that Worse is Better. The root cause of WIB is economic: there is a scarcity of users who want any given product. Take the example of a programming language "for the web". If you release something that is mildly good (e.g., Perl under Apache's mod_perl) and there is an absence of anything else good, you win. Change is costly, so there is simply more economic value in quicker turnarounds: it makes early software survive better and live on into later releases. Notice this is an argument about survivability; it says nothing about quality or anything else, which really are irrelevant as far as popularity goes. I'd be willing to bet that at the time mod_perl was released, there were better-designed alternatives 'in development'. I'd also be willing to bet that they are not much used.

I believe this really changes the perspective on where research in correctness (by which I mean research in how to reason statically about programs) should go: if correctness gets in the way of people getting things done fast, it will only be adopted in very specialized scenarios.

Finally, where the "meticulous" process is necessary, or culturally accepted (e.g., NASA's software development practices), I don't think software is seen as easy at all.

HW and SW

There is no clear separation between digital hardware and software development.

- Digital hardware is mainly tested through simulation and, often, the actual level of testing can be higher than with software, particularly for timing issues, races and asynchronous behaviors.

- Digital HW is developed in Verilog and VHDL, which _are_ programming languages, for better or worse. A VHDL bug is pretty much like a bug in any other language, and the write/compile/test cycle is not very different from the write/simulate cycle.
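To make that concrete, here is a sketch of one classic VHDL bug, with illustrative signal names (enable, d, q are placeholders): an IF with a missing ELSE in what was meant to be pure combinational logic silently infers a latch, a "memory" effect the designer never asked for.

-- Intended as combinational logic, but when enable = '0' the
-- signal q keeps its previous value, so the synthesizer infers
-- a latch; a very common bug.
PROCESS(enable, d)
BEGIN
  IF enable = '1' THEN
    q <= d;
  END IF;  -- the missing ELSE branch is the bug
END PROCESS;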

Maybe the difference is that, for a whole system, a Verilog LOC is ten times harder to write than a C LOC, but there are a hundred times more lines of C than of Verilog.

Usually, simulation test benches for digital devices, written in Verilog, VHDL or any other language, are larger than the devices under test and require nearly the same development effort. You spend time testing all the corner cases, and even more handling them. Maybe software development lacks this habit of testing/simulation environments; maybe it is simply not possible to test everything when the project becomes too big. Code coverage tools are also widely used for HW development.

[For example, the GCC source tree contains a big collection of "non-regression" tests that cover known issues from previous versions, and building a whole Linux system from sources is a good exercise, but there is no exhaustive test of the whole compiler.]

VHDL is interesting as a two-in-one language, with one programming style specific to actual hardware and another, more relaxed, for test benches. Software programming languages could also benefit from "testing" facilities in addition to "actual" coding features.

hoare, hints on programming language design

... a programming language is a tool which should assist the programmer in the most difficult aspects of his art, namely program design, documentation, and debugging.

FP languages tend to fall implicitly into this camp; Eiffel does so explicitly.

Any chance of seeing a small sample of the "test bench" VHDL style for comparison?

Test Bench VHDL Style

... looks pretty much like plain Ada.

"Synthesisable" (=hardware translatable) VHDL and Verilog uses idiomatic forms to describe the behavior. Typically everything is described as a set of combinatorial equations ( concurrent execution ) and a set of "processes" based on the typical structure :
PROCESS(reset, clock)
BEGIN
  IF reset = '1' THEN                -- asynchronous reset
    count <= reset_value;
  ELSIF rising_edge(clock) THEN      -- clocked logic: infers flip-flops
    IF count < MAX THEN              -- saturate at MAX
      count <= count + 1;
    END IF;
  END IF;
END PROCESS;

Verilog has similar constructs, like "always @(posedge clock)"...
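The "combinatorial equations" mentioned above are, by contrast, concurrent assignments written outside any process. A one-line sketch (the overflow signal is a placeholder):

overflow <= '1' WHEN count = MAX ELSE '0';  -- re-evaluated whenever count changes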

The test benches, by contrast, can use all of the language's constructs, including things like
"WAIT FOR some_time;" to perform a fixed delay, or
"WAIT UNTIL signal = '1';" to stop at some event. This cannot be written like that for real hardware, as it requires a 'memory' effect which must be modeled either as latches or flip-flops.

More clearly: synthesisable VHDL is hardware-oriented, whereas a test bench uses "software-style" programming, with "higher-level" constructs to check the hardware's behavior. If you want to implement a hardware FFT engine, the actual design can be quite complex, whereas the test bench can reproduce the result with a classical algorithm (you can actually copy/paste an FFT PROCEDURE written in Ada) and benefit from the standard floating-point "real" type. Hardware floating point, on the contrary, must be described as adders, multipliers, shifters, bit fields...
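To make the comparison concrete, here is a minimal, self-contained test-bench sketch for the saturating counter shown earlier. The entity name tb_counter and the constants are placeholders, and a real bench would normally instantiate the design under test as a separate component rather than inline it:

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

ENTITY tb_counter IS
END ENTITY tb_counter;

ARCHITECTURE sim OF tb_counter IS
  CONSTANT MAX         : integer := 15;
  CONSTANT reset_value : integer := 0;
  SIGNAL clock : std_logic := '0';
  SIGNAL reset : std_logic := '1';
  SIGNAL count : integer := 0;
BEGIN
  -- Free-running clock: legal in a test bench, not synthesisable.
  clock <= NOT clock AFTER 10 ns;

  -- Device under test: the counter process from above, inlined for brevity.
  dut : PROCESS(reset, clock)
  BEGIN
    IF reset = '1' THEN
      count <= reset_value;
    ELSIF rising_edge(clock) THEN
      IF count < MAX THEN
        count <= count + 1;
      END IF;
    END IF;
  END PROCESS;

  -- Stimulus and checking: plain sequential "software-style" code.
  stimulus : PROCESS
  BEGIN
    WAIT FOR 25 ns;                -- hold reset for a while
    reset <= '0';
    WAIT UNTIL count = MAX;        -- block until the counter saturates
    WAIT FOR 100 ns;               -- give it a chance to misbehave
    ASSERT count = MAX
      REPORT "counter overran MAX" SEVERITY failure;
    WAIT;                          -- done; suspend forever
  END PROCESS;
END ARCHITECTURE sim;

Note how the stimulus process reads like ordinary sequential software, while the dut process sticks to the synthesisable idiom.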

[There is a wealth of new languages to fill the gap between software and hardware descriptions, like SystemVerilog or SystemC. There is also currently much work on assertions.]

For software development, it could mean that if you develop your software in assembly or C, the test harnesses are best written in Python or Haskell or [fill-with-your-favorite-HLL]. It could also mean that software sits above the hardware, abstraction-wise, but there is nothing but the programmer above the software abstractions.

(And what is above the programmer to automatically check his/her behavior?)

going meta

Using more powerful (less efficient) constructs to test than to implement seems to be part of a general pattern; consider the use of relational specifications to derive functional implementations ... or, in this context, to check them.

Hoare believes (believed?) the opposite, namely that test harnesses should not only be at the same level as the implementation but should be shipped along with it (note that he uses "security" where we might say "safety"):

The objective of security has also been widely ignored; it is believed instead that coding errors should be removed by the programmer with the assistance of a so-called "checkout" compiler. ... You should always pity the fate of the programmer whose task is so difficult that his program will not fit into the computer together with your sophisticated debugging package. Finally, it is absurd to make elaborate security checks on debugging runs, when no trust is put in the results, and then remove them in production runs, when an erroneous result could be expensive or disastrous. What would we think of a sailing enthusiast who wears his lifejacket when training on dry land, but takes it off as soon as he goes to sea?

As a user of hardware, I have the impression that a more pragmatic approach is taken there: as much built-in JTAG in-system testing is shipped as will easily fit, and the remainder must still be covered by testing "on dry land".

Both viewpoints may be reconciled: the testing that can be done at the level of the system should be (and, if possible, shipped along with it), while any meta-level testing naturally must be done with an external harness. (Although, as with breakout chips for ICEs and "verbose" flags for software, one supposes that the more test circuitry is available in the system, the easier the job for the harness.)

(does this trend imply that assemblers should, rather than being extended with restricted macro languages, be extended with a higher-level language for generating assembly-level implementations? I suppose if one built such things, they'd be called "compilers", which, in their current support for testing, tend to build data structures (debug symbols) rather than QuickCheck-like fragments of test code)

ogres are like onions

The Hume language may also be of interest here, as it attempts to provide a coherent, yet layered language:

Full Hume       Full recursion

PR-Hume         Primitive Recursive functions
                Inductive data structures

Template-Hume   Predefined higher-order functions
                Inductive data structures

FSM-Hume        Non-recursive first-order functions
                Non-recursive data structures

HW-Hume         No functions
                Non-recursive data structures

and presumably one could synthesize Template-Hume but verify with PR-Hume.

Programmable Logic

My experience in the embedded world is that the line between hardware and software is quite fuzzy. EEs engage in a lot of ad hoc software design. The availability of programmable logic means that hardware is rapidly prototyped and then fixed up in much the same way that software is designed. The main advantage that I see on the hardware side is the quality of the tools for testing and verification.