"Critical code studies"

I'm interested in hearing what people who study programming languages think of the emerging field of "critical code studies" in the humanities.

Here are a couple of descriptions from a recent CFP and an essay by Mark Marino:

Critical Code Studies names the practice of explicating the extra-functional significance of source code. Rather than one specific approach or theory, CCS names a growing set of methodologies that help unpack the symbols that make up software.

Critical Code Studies (CCS) is an approach that applies critical hermeneutics to the interpretation of computer code, program architecture, and documentation within a socio-historical context. CCS holds that lines of code are not value-neutral and can be analyzed using the theoretical approaches applied to other semiotic systems...

CCS is largely distinct from the more ethnographic work in "software studies" by people like Christopher Kelty, whose book Two Bits has been discussed on LtU. Marino held a CCS working group session this spring, and there's a CCS workshop at ACM Hypertext this year. Some important texts for the field are N. Katherine Hayles's "Traumas of Code" and Rita Raley's "Code.surface || Code.depth".

I'm personally skeptical—not necessarily about the general idea, but about the current direction of the field—for a few reasons:

  1. A lot of CCS work is written in dialects of crit-theory jargon that I don't claim to speak fluently (and I'm a humanities grad student), but the parts I do understand often seem deeply confused or misguided. Here's a quotation from a post by Marino on the CCS blog, for example:

    Yet, somehow, I can't help but wonder if slower is not sometimes better? (Humanities folk can afford to ask such questions.) Could there not be algorithms that do a better job by including more processing cycles? Such a naive question, I know.

    You don't have to dig very far in the links above to find many other examples like this.

  2. The focus is very strongly on imperative programming. Haskell and Scheme score zero mentions on the CCS website, and Lisp appears once (in a bibliography). In my experience this is representative of other work in the field. An imperative-only approach doesn't seem like a very interesting or thoughtful way to tackle a "semiotics of code".
  3. As far as I can tell, not one of the three-dozenish scholars listed as associated with the CCS blog or the recent working group has a degree in CS or math (most are currently in new media or English departments). Maybe this is by design, given the goals stated above, but I'd still like to see more of an indication that this "growing set of methodologies" is of interest to people outside the humanities (if in fact it is).

Is there a place for "a semiotics for interpreting computer code" in the humanities? Do you PLT folks need help "unpacking the symbols that make up software"?

DesignerUnits

One goal of a public release is to influence by example: I'd like future software to sport nice measurement units.

A suggested review sequence, by depth of interest: the overview, the worked examples, the backgrounder, the unit catalogs, QuickStart.nb, and finally DesignerUnits.nb, which houses the code. The core sections are "Unit Algebra - Productions - Main Algebra" and "Quantity Analysis."
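
DesignerUnits itself ships as Mathematica notebooks, but the general idea it advocates, quantities whose units are checked rather than implied, can be sketched in a few lines of Haskell. This is a minimal illustration only; the names Quantity, Meter, and Second below are my inventions and not part of the package:

-- A quantity tagged with a phantom unit type; the unit lives only at
-- the type level, so mixing units is a compile-time error.
newtype Quantity unit = Quantity Double deriving Show

data Meter
data Second

-- Addition is defined only between quantities sharing a unit.
add :: Quantity u -> Quantity u -> Quantity u
add (Quantity x) (Quantity y) = Quantity (x + y)

sprint :: Quantity Meter
sprint = Quantity 100

pause :: Quantity Second
pause = Quantity 9.58

total :: Quantity Meter
total = add sprint sprint   -- fine
-- bad = add sprint pause   -- rejected by the type checker

A full unit algebra of the kind the notebooks describe would also track products and quotients of units (meters per second, and so on), which this sketch deliberately omits.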

Strange function

I was toying with an object system in Haskell and I encountered (created?) the following function:

f :: Contains r r' => (r' -> (a,r')) -> (r -> (a,r))

where any value of type r contains a value of type r'. The function is the best answer I have found so far to the following problem: "cast" a value of type r to its supertype r', do some work on it that yields a value of type a and a new value of type r', and then put the resulting r' back into the original value of type r. This reminds me of some sort of binding, but I was wondering whether anyone here more expert than I am could suggest a better interpretation of this operation.
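
For concreteness, here is one way the signature can be inhabited, assuming a hypothetical Contains class with lens-like accessors. The post doesn't show the class, so the method names extract and replace are guesses at its interface, not the actual definitions:

{-# LANGUAGE MultiParamTypeClasses #-}

-- Assumed interface: how to pull the contained r' out of an r,
-- and how to write an updated r' back in.
class Contains r r' where
  extract :: r -> r'
  replace :: r' -> r -> r

-- The function in question: run a stateful computation on the part,
-- then restore the updated part into the whole.
f :: Contains r r' => (r' -> (a, r')) -> (r -> (a, r))
f g r = let (a, part') = g (extract r)
        in (a, replace part' r)

-- Example: a record whose Int field is the contained value.
data Whole = Whole { counter :: Int, label :: String } deriving Show

instance Contains Whole Int where
  extract     = counter
  replace n w = w { counter = n }

tick :: Int -> ((), Int)
tick n = ((), n + 1)

bump :: Whole -> ((), Whole)
bump = f tick   -- bump (Whole 0 "x") gives ((), Whole 1 "x")

Read this way, r' -> (a, r') is a State r' a computation with the newtype removed, so f lifts a state computation on the part to a state computation on the whole; this is essentially the operation that the lens library exposes under the name zoom.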