Should Computer Science Get Rid of Protocols?

Two interesting articles by Jaron Lanier on how protocols influence programming

Sun Interview

If you look at the way we write software, the metaphor of the telegraph wire sending pulses like Morse code has profoundly influenced everything we do. For instance, a variable passed to a function is a simulation of a wire. If you send a message to an object, that's a simulation of a wire.
If you model information theory on signals going down a wire, you simplify your task in that you only have one point being measured or modified at a time at each end. It's easier to talk about a single point in some ways, and in particular it's easier to come up with mathematical techniques to perform analytic tricks. At the same time, though, you pay by adding complexity at another level, since the only way to give meaning to a single point value in space is time. You end up with information structures spread out over time, which leads to a particular set of ideas about coding schemes in which the sender and receiver have agreed on a temporal syntactical layer in advance.
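Lanier's point about an agreed temporal syntactical layer can be made concrete with a toy framing protocol (a sketch for illustration, not anything from the interview): a length-prefixed byte stream is meaningless unless sender and receiver share the framing convention in advance.

```python
import struct

# The "agreed temporal syntax" in miniature: length-prefixed framing.
# Sender and receiver must share this convention ahead of time; the raw
# byte stream carries no structure without it.
def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte little-endian length."""
    return struct.pack("<I", len(payload)) + payload

def unframe(stream: bytes):
    """Split a concatenated byte stream back into the original messages."""
    messages, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("<I", stream, i)
        messages.append(stream[i + 4:i + 4 + n])
        i += 4 + n
    return messages
```

A receiver using a different convention (say, newline-delimited text) would read the same bytes and recover garbage, which is exactly the fragility being described.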


Perhaps not completely related but...

I often find that it is useful to change a "serial" relationship between modules (objects, classes, or whatever) to a "parallel" relationship.

E.g. if object A has a relationship with object B that involves A making repeated calls to the same methods of B, it's often better to change the relationship so that A makes calls to different methods of B.

I find this makes code easier to understand, easier to test and easier to change (all the same thing really).

Another way of describing this is to say that one changes the relationship from being only temporal to being more spatial, and if possible only spatial.
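A minimal sketch of the idea, using hypothetical classes: in the "serial" version the meaning of each call depends on its position in time, while in the "parallel" version each piece of state gets its own named method and call order stops mattering.

```python
# "Serial" style: A calls the same method of B repeatedly, and the
# meaning depends on call order (a temporal protocol): title first,
# then body, then footer.
class ReportSerial:
    def __init__(self):
        self._lines = []

    def write(self, text):
        self._lines.append(text)

    def render(self):
        return "\n".join(self._lines)

# "Parallel" style: distinct methods make the relationship spatial;
# the calls can happen in any order.
class ReportParallel:
    def __init__(self):
        self.title = self.body = self.footer = ""

    def set_title(self, text):
        self.title = text

    def set_body(self, text):
        self.body = text

    def set_footer(self, text):
        self.footer = text

    def render(self):
        return "\n".join([self.title, self.body, self.footer])
```

Both produce the same report, but only the first requires the caller to know the correct temporal sequence.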

I've written something about this here:

jaron lanier

I think Jaron Lanier should stop trying to be a philosopher; he is not very good at it, judging from the video from the second link.

not that i understood this article

but it seems to me that Lanier thinks he is making a fundamental distinction when in fact he is not. I think he might be on to something, but I think that it is not fundamental.
I don't understand why he rails against protocols because they are temporal, then describes a procedure dependent on the serializing of a matrix, or a two-dimensional conglomeration if it isn't a matrix. If time is but a dimension, are temporal/spatial comparisons splitting hairs?
I think his reliance on physics models to justify his computer modeling is a fallacy. They are all but human constructs (at the level he is discussing), i.e. features of the consciousness landscape he not so eloquently grapples with.
What does a retelling of "If a tree falls in the forest" have to do with his approach to more scalable programs?
I just don't get it and wish I did.

I think what we should take away from this article is that the cognitive concepts we bring to modeling will affect the code we write. Better constructs will evolve, and we are just at the beginning.

Strange Philosophy

Q: What's wrong with the way we create software today? A: I think the whole way we write and think about software is wrong.

Q: What do you want to say directly to developers? A: What's most important is to keep an optimistic...

They're a bit out of context, but how are you supposed to be optimistic if the whole approach is not suboptimal, but wrong?

If you look at other things that people build, like oil refineries, or commercial aircraft, we can deal with complexity much more effectively than we can with software.

If you can't cite a reference that measures the comparative complexity of projects in different disciplines, you shouldn't make such a bold claim. Oh, by the way, aircrafts are designed using software.

If you think about it, if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time.

The universe does work that way: chaos theory, the Butterfly Effect, virus outbreaks, etc. Small changes in the aforementioned commercial aircrafts can also cause enormous changes in the outcome. It is a well-studied human misconception that big changes must have big causes.

And for some software we want small changes to cause huge differences (e.g. MD5 hashing).
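A quick sketch of that avalanche property, using Python's standard hashlib: flipping a single input bit changes roughly half of MD5's 128 output bits.

```python
import hashlib

# One flipped input bit ("hello world" vs "hello worle": the bytes for
# 'd' and 'e' differ in exactly one bit) scrambles about half the digest.
a = hashlib.md5(b"hello world").digest()
b = hashlib.md5(b"hello worle").digest()

flipped = sum(bin(x ^ y).count("1") for x, y in zip(a, b))
print(f"{flipped} of 128 output bits differ")
```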

I would recommend that developers read the history of computer science very skeptically. Read some of the earlier writings by people like Turing, Shannon, and von Neumann and try to think through how these guys were wrong. What would they think about differently if they were starting out today?

That would be pretty much the worst use of resources. It just doesn't pay for someone just starting out to question the old stuff seriously, since it's been looked at for decades. What they should be doing is to question the new and be skeptical.

So, now, when you learn about computer science, you learn about the file as if it were an element of nature, like a photon. That's a dangerous...

I kind of agree with this one. I was recently going through my old textbooks, and I saw that things like files, sockets, processor architectures, etc. were presented a bit too much like scientific entities as opposed to one man's/company's way of doing things. This attitude has the potential to stall truly groundbreaking development, imho.

Small Changes?

The universe does work that way: chaos theory, Butterfly Effect, virus outbreaks, etc.

For some reason it seems like people either take one extreme or another. Either they believe that small changes in initial conditions always result in small changes in the system's behavior, or small changes in initial conditions always result in large changes in the system's behavior. A system has to have certain properties before it becomes a chaotic system. There are plenty of systems in nature that are not chaotic, and plenty of systems that are chaotic, but only in certain locations. However, I'm getting off topic: you are right. The Universe does indeed behave this way: small changes can result in large changes in behavior.
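The point that the same system can be chaotic in one regime and stable in another is easy to demonstrate with the logistic map, a standard toy example (not from the thread):

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# The same map is chaotic at r = 4.0 but settles to a fixed point at
# r = 2.5, so "small causes, big effects" holds only in some regimes.
def max_divergence(x, y, r, n=50):
    """Largest separation between two trajectories over n steps."""
    worst = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        worst = max(worst, abs(x - y))
    return worst

chaotic = max_divergence(0.2, 0.2 + 1e-7, r=4.0)  # grows to order 1
stable = max_divergence(0.2, 0.2 + 1e-7, r=2.5)   # stays near 1e-7
```

The two runs start from points that differ by one part in ten million; only the chaotic regime amplifies that difference.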

Small changes in the aforementioned commercial aircrafts can also cause enormous changes in the outcome.

Well, I'm not so sure I would want to fly in a plane that was engineered this way. The Tacoma Narrows Bridge comes to mind. Engineers have learned from past mistakes to design systems so that they don't behave chaotically, if possible. We can't be perfect in our measurements, and if the design of the plane renders it a chaotic system, that is just asking for failure.

To relate this back to software engineering, maybe we need to develop some ways of mathematically analyzing our programs to determine how they behave and whether or not they are stable. I don't know if this is possible, but it might be something to think about.

Small changes in design

Small changes in the aforementioned commercial aircrafts can also cause enormous changes in the outcome.

Well, I'm not so sure I would want to fly in a plane that was engineered this way.

As I understand it, Lanier was talking about small changes in the design of a piece of software resulting in large changes in the behavior of the software. If Lanier honestly believes that the same thing doesn't happen in other system design efforts, he needs to get out more. My own experience of this type of problem has been in spacecraft design: a minor change in the mass of a scientific instrument can result in changes to the required amount of propellant and the design of the attitude control system, which in turn creates cascading impacts on the design of the support structure and the electrical power system... creating further cascading changes. As a result of these "ripple effects", iterative design techniques are very popular at the conceptual design level as a way to converge on a new design point when requirements are changed.

Ideally, once you move beyond the conceptual level you'd like to minimize the ripple effects. In the spacecraft world this is achieved by developing a good conceptual design, nailing down the interfaces based on that design, and tacking on a prudent amount of margin (in terms of mass, power, number of I/O lines, etc.) at each interface. So long as the changes later in the design effort are "minor" (i.e. within the allocated margins), the effects of the change don't propagate outside of the interface in question. Presumably similar techniques can be (and have been) applied to software design. IMHO a key part of the problem is that providing "margin" (i.e. anticipating potential changes to systems on the other side of the interface, and designing for those changes ahead of time) doesn't happen as frequently in software as it does elsewhere - which is why the ripple effects Lanier complains about happen so often.
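A sketch of that margin idea in code terms, using a hypothetical fixed-size telemetry packet (all names invented for illustration): reserving spare bytes at the interface lets later versions add fields without changing the packet layout that other components depend on.

```python
import struct

# Hypothetical telemetry packet: the v1 format reserves 8 spare bytes
# of "margin" so later fields can be added without changing the packet
# size or breaking existing parsers.
HEADER = struct.Struct("<HHf8s")  # sensor id, version, reading, reserved

def pack_v1(sensor_id, reading):
    return HEADER.pack(sensor_id, 1, reading, b"\x00" * 8)

def pack_v2(sensor_id, reading, battery_mv):
    # v2 uses 2 of the reserved bytes; the layout is unchanged.
    spare = struct.pack("<H", battery_mv) + b"\x00" * 6
    return HEADER.pack(sensor_id, 2, reading, spare)

def parse_common(packet):
    # A v1 parser still works on v2 packets: it ignores the spare field,
    # so the v2 change does not propagate past this interface.
    sensor_id, version, reading, _ = HEADER.unpack(packet)
    return sensor_id, reading
```

Had v1 packed only the fields it needed, adding the battery field would have changed the packet length and rippled into every consumer.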

Actually, we have other problems

Actually, we have other problems, such as ill-defined interfaces, and highly-coupled, non-interfaced code units, such as any set of functions that share variables: changes in any of the functions that set the variables may affect the correct operation of any function that reads them. Worse, there may actually be time-dependent issues, even in non-concurrent code, which give us still more complexity to manage. Getting to the state where the only remaining problem is that interfaces do not accommodate change in other components would be a very high level of software quality.
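A minimal sketch of that shared-variable coupling, with hypothetical functions: in the first style the functions communicate through hidden state, so correctness depends on call order; in the second the dependency is explicit in the signatures and the time-dependence disappears.

```python
# Coupled style: two functions communicate through a shared variable.
# Calling average_coupled before the add_coupled calls silently gives
# the wrong answer, and nothing in the signatures warns you.
_total = 0

def add_coupled(x):
    global _total
    _total += x

def average_coupled(n):
    return _total / n

# Interfaced style: the running total flows through the parameters,
# so the data dependency is visible and order mistakes won't type-check
# as "working" code that reads stale state.
def add(total, x):
    return total + x

def average(total, n):
    return total / n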

Of course there are other problems...

Please note that I said "a key part of the problem", not "the only problem". However:

  1. Ill-defined interfaces are a problem in many system design efforts. This is addressed partly through iterative design, and partly through experience (e.g. the interactions between the different elements of a spacecraft are reasonably well understood). Obviously, software can take advantage of the former, but the latter might be problematic. Or it might be a good area for research (most likely in domain-specific areas).
  2. Shared variables have been a known problem for years. There are also multiple well-known ways of dealing with those problems. Poor design practices do not equate to fundamental problems in software (perhaps in software education).
I'm not quite sure what you mean when you say "time-dependent issues", so I really can't comment on that.

Poor design practices are a fundamental problem

If it were cheaper to hire people who knew what they were doing, rather than people who had three or four years of Java training ("a degree", as it is also known), then employers might well do that. Instead, today's economics dictate that a bunch of people who can't be bothered to learn anything about software are the cheapest way to build software. Maybe in a hundred years, worse won't be better. Given the current state of programming technology, cheap, ill-educated programmers find it far too easy to shoot themselves in the foot with bad design of all sorts. Bad design is the fundamental problem in software construction, and everything we do to improve programming systems goes towards making bad designs more easily avoidable.

And languages?

To make this more on-topic for LtU (which is dedicated to programming languages) why not discuss how programming languages can help with these issues (without implying other techniques aren't also important).

I think the issues you mentioned are related to the level of abstraction programming languages mandate (cf. the discussions of interface vs. signature, shared state concurrency versus other approaches, and even the fact that timing is almost always outside the scope of programming languages).

All planes work like that

The basic contribution of the Wright Brothers (who were bicycle mechanics by trade) was in seeing that airplanes can't be statically stable and still fly: their stability is inherently dynamic. Many had flown heavier-than-air craft before them, but none of those flights had been controlled ones.

So relax, sit back, and enjoy the chaotic system!

Ahead of his time, or out on a limb?

It would be easy to be unforgiving to Mr. Lanier's proclamations, but since others have already taken the pleasure, I will merely comment on what little insight he might have actually shared. The idea that software is hard is something that is not lost on anyone who has had to work on a large project. The idea that we can fix it by thinking happy thoughts is a bit naive and silly. Is pattern recognition the answer? Well, that's where things start to get interesting.

I mean, FPers would clearly say that pattern matching is an important process, but not exactly in the same way that Jaron intended it. However, there might not be such a distance between the two meanings as might first appear. After all, the whole point of a compiler is to recognize patterns in the language we speak, and translate them into patterns that a CPU speaks. And progress in language design generally takes the form of recognizing more and more powerful patterns, leaving less tedium to the speaker. So it might not be an inexcusable leap to conclude that an advanced form of software engineering would involve the recognition of what we consider today very high level patterns.

It is interesting that Jaron is led to the process of pattern matching via neuroscience. I think that his focus is too narrow, and that it is AI in general that will lead to a software engineering revolution. Koray notes that: "the way, aircrafts [sic] are designed using software". I think that's a very important observation. Today, "Computer Aided Software Engineering" is more of a buzzword than an actual technology, and few people would claim that such tools help you build software in the way that a good CAD/CAM/CAE tool helps you design an airliner. And yet, we use software to help us build airliners and oil refineries exactly because the software helps us *deal with complexity*. It is complexity that is the bane of large-scale software development, not entrenched idioms and paradigms.

Complexity in software is necessarily irreducible past some point, because the cost of reducing it further would entail building specialized hardware or compilers, which is a losing proposition in the general case. However, I believe that building specialized compilers will be partially achieved by making it easier for programmers to define and use DSELs. But despite that stepping stone, it will ultimately not lead to the breakthroughs in SE that can only be achieved by a functioning and competent AI. The limitation is not in our paradigms and tools. It is in us. The barriers to large-scale software development are due to the cost of communication within a development team, and the fact that the software-development "program" is not running on identically configured "nodes". We build bigger software the way we solve bigger problems...throw resources at it. If a program is too big for one programmer to write, we put 10 or 100 on the team. But that doesn't scale well because of the necessary coupling among the team members, and the exponential increase in potential communication links.

Ultimately, we, us, programmers, are the weakest link. Even now it is becoming apparent. Our tools have not only reached an impressive level of development, they have already passed us by. Who now can hand-optimize assembly code for a multi-core, superscalar architecture better than a good optimizing compiler for a program of realistic size? Anyone claiming supremacy would have to be a veritable John Henry of assembly hacking, and would doubtless meet the same fate given the relentless march of time.

We will only see a revolution in software engineering when we perform that final act that we have always performed when faced with a task that we have mastered with our own hands: turn the process over to machines. Some day, we will not tell the computer how to build a program. We will tell it what the program ought to do, and we will let an AI decide the best way to go about doing it. Then the AI will build a custom-designed software machine that will perform our desired task. Software engineering will meet its assembly line, and we will be remembered as one of the last generations of programmers that actually built software by hand. We will doubtless be looked upon with a certain mixture of admiration and condescending derision. But at the end of the day, we will have made ourselves obsolete. Let's just hope we don't make all of humanity obsolete.

The AI winter

Some day, we will not tell the computer how to build a program. We will tell it what the program ought to do, and we will let an AI decide the best way to go about doing it. Then the AI will build a custom-designed software machine that will perform our desired task.

Color me a skeptic, but I don't hold out much hope for AI solving the fundamental nature of communication between programmer and machine. Yes, it might help for those that are end users, but that raises the question of who writes these ultra-sophisticated algorithms and user interfaces. It also raises the question of what programming language is used to take loosely formulated requirements and express them in a language that is understood by the thinking machine. I consider AI to be more helpful if we consider it to be algorithmic in nature, not a substitute for a programming language to help in constructing software. AI can help in producing better libraries, but those libraries still have to be applied to individual situations.

Personally, I think the problem with complexity in programming languages has less to do with expressivity and more to do with problems of guarantees and reliability. We might take comfort in the machine making more decisions for us and eliminating a whole class of errors that arise from our own short-term memory failures. But AI also brings with it a whole new problem of expressing guarantees when we communicate via fuzzy logic. In effect we trade off errors made by humans for errors introduced by the machine. Right now the problem is that AI doesn't significantly reduce programming errors, but at the same time does introduce machine logic errors. Perhaps the balance in the tradeoff will some day favor the machine, but we are still a ways from that day.

In the meantime, methods to improve reliability and improve communication (via PLs) are likely to have a higher payoff.

Down with buzzwords.

I hate the word "AI".

Please replace all occurrences of "AI" with "heuristic breadth-first search" and you'll realize instantly why the "AI movement" failed.

Putting faith in "AI" is almost as dumb as putting faith in XML.

Depth-First Search

You may be surprised to learn that the field of AI includes many more techniques than just heuristic breadth-first search. Depth-first search, for example. Bayesian learning for another.

No, I wouldn't be surprised.

"AI" is a marketing buzzword that obfuscates the situation and leads to braindeath.

When spelled out, it usually turns out that "AI" boils down to nothing more than comparing vector dot-products (or whatever), and we have been doing that sort of thing for the last three hundred years non-stop.

Somehow, being able to compare vector dot-products hasn't led to "the Singularity" or being able to write software without putting in minimal effort.

Not to mention Self-Learning

Not to mention Self-Learning Simplified Memory-Bounded A* search. I don't really see why anyone would want breadth-first search if they have a choice.


A* search is not only heuristic, but also breadth-first.

Software complexity.

Software complexity is irreducible, unlike the complexity in building aircraft. This is because whatever can be formalized and reduced in software development, already has been -- this is why we have things like compilers.

(True, we can imagine better compilers, but I doubt there can be any qualitative breakthrough in that area, more like minor productivity tweaks.)

"We will tell it what the program ought to do"

"We will tell it what the program ought to do, and we will let an AI decide the best way to go about doing it."

How is that different from a very good relational/logic language? I don't see any real difference between this form of AI and abstraction in general. It's only a difference in scale.

Information != Data

The problem is that for information in the sense of Shannon's theory of communication, rather than the hand-wavy version used in the article and by 'information realists' in general, the wire-communication metaphor is fundamental to the definition. It's a probabilistic/statistical notion, so the very idea of storing 'information' in memory is fallacious.
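To make the probabilistic point concrete: Shannon entropy is a property of a source's probability distribution, not of any particular stored string. A minimal sketch of the standard formula:

```python
import math

# Shannon entropy H(X) = -sum p * log2(p), in bits per symbol.
# It is defined over a probability distribution, not over stored data.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin far less,
# even though both sources are "stored" as one character per toss.
print(entropy([0.5, 0.5]))    # 1.0
print(entropy([0.99, 0.01]))  # about 0.081
```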

He may be using the term in Knuth's sense, i.e. information is structured data, but that raises the question of why he's attacking the communication-wire metaphor.

Also his statement about software not obeying the 'Pauli Exclusion Principle' is a bit weird, inasmuch as it is totally false.

If programming languages are not part of the answer

If programming languages are not part of the answer, I'm not interested ;-)

This seems to be literal nonsense

This seems to be literal nonsense. Such things as "passing an argument to a function" being like a simulated wire is clearly false, not least because functions as a concept predate telegraphy, and there is no suggestion that mathematicians thinking of functional application were influenced by semaphore.

As regards temporal protocols being fragile, that rather depends on the protocol - in fact, as I recall, some of the most important work in information theory has gone to create error-correcting codes.
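As a toy illustration of how an error-correcting code lets a temporal protocol survive corruption, here is a (3,1) repetition code with majority-vote decoding (far weaker than the Hamming or Reed-Solomon codes used in practice, but the principle is the same):

```python
# Triple-repetition code: each bit is sent three times, and the decoder
# takes a majority vote. Any single bit-flip per triple is corrected.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
coded = encode(msg)
coded[4] ^= 1                # flip one bit "on the wire"
assert decode(coded) == msg  # the corruption is silently repaired
```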

As to what Jaron Lanier is actually proposing, it sounds like having well-defined interfaces across which we pass structured data, and writing error-handling code that can classify input into a number of equivalence classes and attempt to do the right thing. My fear is that rather than making software systems more robust, this would make them vastly more fragile: when a component "tries to do the right thing", it may well do the wrong thing, and the error propagate throughout the computation, as each subsequent module behaves as it was intended, with the error accumulating until it is noticeable very far from its source.

The kind of fragility he was describing - small changes having unintended consequence - seems to be the problem that badly structured software does not employ the concept of a well-defined protocol with interfaces ("wires") between parts, but instead depends on separate pieces of code diddling shared state in the right order and the right way. This kind of shared-state programming puts me in mind of components communicating through ill-defined (or "fuzzy" if one is feeling unduly generous) patterns.

Even questioning things like the use of files for organising data storage seems to come from his total ignorance of history. Files didn't become standard just because "UNIX had them". They existed before UNIX, because we often deal with discrete lumps of data, and with early storage technologies, there was little practical alternative for storing data. While questioning their continued usage is a good idea, it is not an accident that we have them, and it is likely that we will want to use files as part of our abstraction of data storage for a very long time to come.

What juvenile nonsense.

passing an argument to a function

Passing an argument to a function is like sending a message over a wire in that the sender and receiver follow a protocol. I think that was his point.

I doubt that he is correct that one follows from the other.

I agree with most of the rest of your post.

How old is Lanier?

In that sense, though, anything is like sending a message over a wire

In that sense, though, anything is like sending a message over a wire - when I put mouthwash in my mouth, I put in mouthwash which "signals" bacteria to get out. While anything can be modelled with functions, doing so benefits us with analytic power; by contrast, this analogy with wires seems to be largely tenuous, and not especially helpful. Until such time as Lanier produces either a conceptual schematic for a new machine, or a new programming system, I will doubt that he is actually saying anything.

Postmodern perspective

To all those who are baffled by Jaron Lanier's writing:

It is helpful to think like a postmodernist when reading Jaron Lanier's work, as he is probably one of the most effective at the refined critical-theory art of polishing turds, especially for a geeky or technical audience who normally filter such stuff out. In particular, this essay tends to follow the deconstruction pattern quite closely: come up with a "dominant" metaphor (telegraph signals on a wire) which represents the status quo and introduce an "oppositional" metaphor ("phenotropics" and pattern recognition) with which to overthrow and bury the status quo like some paradigmatic Che Guevara. Never mind the facts; where possible, fit the facts to the dominant metaphor even if it's a considerable strain (as Lanier tries to do with function application). If you handwave enough you'll sound smart even though you seem to express a complete lack of deep knowledge of your subject material. In the humanities, attitude is everything, and deep knowledge of something is usually secondary (and can actually work against you if what you know is in terms of the hegemonic paradigm).

I wish I could give some "whys" but it's like furry art: you can't explain it, just accept it and stare, as if at a trainwreck.

Thing is, Mr. Lanier worked in the software field, so deep knowledge would be a given for him. This tells me he's either snowing everyone to get humanities street cred, or he's been toking the bong a little too much.

If only he could have formed a dialectic synthesis

If only he could have formed a dialectic synthesis of wires and phenotropics, then that really would have been something we could all have gotten behind.


In point of fact, deconstruction isn't oppositional as you describe (nor is it particularly "postmodern", at least not in the sense given to that term by Lyotard).

Given a binary structure containing a pair of opposed terms in which one term was hierarchically privileged over another, it would be a "deconstructive" move to question the way that structure was assembled and maintained. The point of such questioning would not necessarily be to champion the opposing term, but to ask, for instance, whether the phenomenon designated by the privileged term could be seen as a specific and limited case of the phenomenon designated by the opposing term, and thus subject to a general law covering both cases.

That does not mean that all deconstructive argument consists in shoehorning phenomena into binary structures, and then demonstrating that the structures are bogus. If that's what Lanier's done here, and done unconvincingly, then that's a problem with his analysis ("wire protocols are opposed in a binary and hierarchical fashion to phenotropics and pattern recognition" - oh, really?) rather than with deconstruction per se.

As for "complete lack of deep knowledge of your subject material", well, it isn't just the humanities that are beset by ill-informed attitudinizing...

Speculation and daydreaming are valid

I do have some issues with what Lanier is saying, but some of the criticisms leveled against him here are unwarranted.

In fairness to Lanier, he did explicitly label his rant as speculation (although some might call it daydreaming). Thus one should not expect it to be entirely coherent or to spell out the details. It seems to me that speculation/daydreaming is an important part of research. It is really a form of abstraction that allows us to catch a glimpse of the big picture long before the details have been worked out. It can guide us in our search for the details, developing into a sort of "heuristic" (here I use the term loosely, not so much in its technical sense).

Lanier has put forth a vague idea and vision. He doesn't claim to have the details. A vague idea and vision is a first step, not a finished product. It is like the very first stages of a traditional development process (ah the irony). I think what he's trying to do is to pass this work off to the rest of the "team" for review and the next stage of the process (yay for development process wires).

Therefore what we should be discussing is not whether these ideas are immediately useful, but whether and how they can be refined into something useful.

He's failed to articulate anything definite

He's failed to articulate anything definite enough to need refinement, largely because his criticisms are bogus, and his solutions incapable of comprehension, either by the readers here or the luminaries who commented on the edge article.

I think we're likely to see greater benefits from very well defined interfaces which can be verified (or at least falsified) automatically, with robustness in the face of change from greater modularity. Probably the best way to do that is a little thing I like to call "functional programming". All it needs is to become popular. I suspect that continuation-based rails-type systems will contribute to this, as well as systems that have novel takes on controlling concurrency.

Dimensionality, surfaces (manifolds), holography

Perhaps I'm misunderstanding Lanier's point, but he seems to be implying that surfaces (2-manifolds) have some special status. There are certain properties that are more or less unique to 2-manifolds, but these properties don't appear to be the issue and are rather trivial anyway.

Is he saying that we should focus on surfaces because they happen to be a natural fit for all the problems we work on? There are many domains that are most naturally expressed in terms of surfaces. Our experience of the everyday world is primarily in terms of surfaces. But there are also many domains where surfaces are not the primary structure of interest. Most software we write is not actually dealing with the type of everyday interactions we have with objects. It doesn't care about the shape of an item in inventory, or how the network conforms to the surface of the Earth (more or less).

Furthermore, in the same way that the holographic principle considers a three-dimensional region of space in terms of its boundary surface, many problems on surfaces can be recast in terms of one-dimensional boundary curves. The holographic principle is actually just a special case of a much older and more general idea in mathematics, which is illustrated by Stokes' Theorem, Cauchy's Integral Formula, boundary-value problems in differential equations, and so on. The idea is that if a function on a region of a manifold is sufficiently constrained (by smoothness, differential equations representing physical constraints, etc) then a great deal of information (perhaps even complete information) about the interior can be obtained from information about the boundary alone. All these cases, as with the holographic principle, protocols, patterns, and so on, rely on prior knowledge (constraints, syntax, etc) of the objects in question.
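The boundary-determines-interior idea can be stated compactly; two standard instances, in the usual notation:

```latex
% Stokes' theorem: integrating d\omega over a manifold M reduces to
% integrating \omega over the boundary \partial M alone.
\int_{M} \mathrm{d}\omega = \int_{\partial M} \omega

% Cauchy's integral formula: a function f holomorphic on a disc D is
% determined at every interior point z_0 by its boundary values.
f(z_0) = \frac{1}{2\pi i} \oint_{\partial D} \frac{f(z)}{z - z_0} \, \mathrm{d}z
```

In both cases the constraint on the interior (closedness up to exact forms, holomorphy) is the "prior knowledge" that makes boundary data sufficient.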