Replicated experiments in computer science

Has anyone seen papers that replicate previous computer science experiments?

I ask because someone on a blog was talking about a paper that suggests dynamically typed languages reduce development time compared to statically typed languages. However, I can't find anyone else who has tried to replicate this experiment.

I'm trying to find some semblance of science in computer science ;)


Somewhat related

Somewhat recently, the HCI community has been talking more about replication. Development time kind of falls into the HCI / end-user programming / human aspects of software development category, so you might find the site and discussions related to RepliCHI interesting.

What motivation do you think any research group would have to do a straight replication of the experiment that you referenced? Do many people take the paper's experiment seriously in the first place?

All factors included?

If we're talking about professional programming, the things that _really_ cost money are maintenance and the late discovery of bugs, especially design bugs. I didn't see these factors mentioned.

I think that a language that makes you create good interfaces might be a better choice in the long run than a language where implementors are able to hack a solution in no time.

The kind of study referred to above is interesting in one case: building a prototype as a proof of concept. Unfortunately, my experience is that the same implementation will then be used as the foundation for the product. So if the prototype has a bad design and/or bad interfaces, so will the product.

Science in Computer Science

I don't know of a replicating experiment for dynamic languages, but I couldn't resist responding on the "science in computer science" bit.

In a CACM column a few years ago (April, 2005), Peter J. Denning noted:

The perception of our field seems to be a generational issue. The older members tend to identify with one of the three roots of the field—science, engineering, or mathematics. The science paradigm is largely invisible within the other two groups.

The younger generation, much less awed than the older one once was with new computing technologies, is more open to critical thinking. Computer science has always been part of their world; they do not question its validity. In their research, they are increasingly following the science paradigm.

Relative to Peter, I'm one of the younger generation, and while I'm flattered by his faith in us, Computer Science today still consists in the main of mathematicians (who value refutation and validation) and tinkerers - not to be confused with engineers (who are trained to respect and demand repeatable measurement and analysis). But there is both hope and progress.

What seems to be happening as computing "grows up" is that (a) the connections between mathematical foundations, systems, and applications are growing stronger, and (b) the move to very large scale is forcing some of us to acquire a fairly sophisticated understanding of statistics - initially in areas like NLP, but also in networking. Both of these trends serve to spread rigor into the practical end of the field. At the same time, the desire for relevance and funding pulls the mathematicians out of their shells to look for ways to work on practical problems. Some of the most interesting startups of the last decade began as algorithmic insights.

Several people I can think of have had key roles in that transition. John Hennessy and David Patterson come to mind, but also Danny Lewin, Peter Neumann, John Cocke, Ralph Merkle, and many others. These people bridged some of the gap between theory and practice, and showed by example that there were serious force multipliers to be had for those willing and able to master both theory and practice. What these people did by their example was re-arrange - or more precisely, expose - the incentive structure that operates on the field. All of a sudden there were good reasons for the tinkerers to become good engineers, and in doing so, master the underlying mathematics of their field. As this happens, we see a slow but steady incursion of Bartley's heresy (that is: critical rationalism) into the field.

I also think that Peter (in typical Denning fashion) is understating the contribution of the "older members". Current practitioners focus on how to make things work well (or at least better). The question faced by Denning's generation, and the generation that preceded him, was whether electronic computation could be made to work at all.

The field remains a work in progress. Computer science, as a field, is very young: about 170 years if you count from Ada Lovelace, about 80 if you count from the Church-Rosser theorem, and only 66 if you count from the announcement of ENIAC. The term "Physics", for calibration, was coined by Aristotle in the 4th century BCE.

It is sobering to think that Computer Science is only now making the transition from our natural philosophy phase to the natural science phase of our evolution as a field. It is still true that we stand on each other's toes rather than shoulders, and that we are rather more often pygmies than giants, but there are definite signs of growth and progress to be seen.