Parallel bars

From the programming-languages-for-bosses department, an article in the June 4 Technology Quarterly of The Economist, Parallel bars:

Surely this problem [(finding ways to make it easy to write software that can take full advantage of the power of parallel processing)] will be solved by some bright young entrepreneur who will devise a new parallel-programming language and make a fortune in the process? Alas, designing languages does not seem to provide a path to fame and riches.

Too bad! The article goes on to describe the problem in more detail, and briefly mentions Chapel and X10 before finishing with:

Meanwhile, a group of obscure programming languages used in academia seems to be making slow but steady progress, crunching large amounts of data in industrial applications and behind the scenes at large websites. Two examples are Erlang and Haskell, both of which are “functional programming” languages.

Such languages are based on a highly mathematical programming style (based on the evaluation of functions) that is very different from traditional, “imperative” languages (based on a series of commands). This puts many programmers off. But functional languages turn out to be very well suited to parallel programming. Erlang was originally developed by Ericsson for use in telecoms equipment, and the language has since been adopted elsewhere: it powers Facebook’s chat feature, for example. Another novel language is Scala, which aims to combine the best of both functional and traditional languages. It is used to run the Twitter, LinkedIn and Foursquare websites, among others.

Do People think Sequentially?

Another obstacle to parallel programming is cultural. “Our conscious minds tend to think in terms of serial steps,” says Steve Scott, chief technology officer at Cray.

This idea surprises me. Of course, Steve Scott is not a domain expert on the human mind.

My understanding (from a few half-forgotten undergraduate courses in psychology) is that our brains are massively parallel, associative memory, pattern-matching machines. We are able to match patterns in both space (table, bookcase) and time (raining, running). We associate patterns with other patterns, which allows us to deduce the past, predict or plan the future, and even make other statements about the present based on limited observations of it. In that sense, our brains will subconsciously process many ad-hoc 'domain models' in parallel.

For a moderately complex task, such as making dinner, I would expect that we imagine many required patterns in parallel: a kitchen cleaned, groceries purchased, oven preheated, multiple pots juggled across a stove.

Those tasks must be coordinated in time based on preconditions (e.g. we can't cook before obtaining the groceries) and resource constraints (e.g. limited space on stove or in oven, limited number of hands, limited amount of time). Ultimately, we might execute a plan as a series of steps, like a CPU switching between tasks. But I don't believe we tend to think in terms of serial steps.

Achieving a goal vs giving instructions.

I'm not exactly a domain expert on the human mind, and don't mean to start too much of an argument (or make unsupported assertions), but it doesn't much matter how people think when they're achieving a goal.

Programmers aren't doing the work; they're giving instructions. If you ask pretty much anyone how to do something, you'll get a sequential set of instructions: 'Do A, then B, if X then Y...'. Maybe that's because instructions are always sent/received between people serially (even if we do think in a more parallel way), and people need to adapt to giving instructions to a computer. I don't know.

Giving goals

I've played the Peanut Butter Sandwich and Teach Me to Smoke games. I don't believe humans, even programmers, normally instruct one another in terms of sequential steps. If they do, they're really awful at it, and a lot of careful refinement is necessary to get it right.

When I'm verbally providing instructions, and I haven't given them often enough for the explanation to have become rote, I'll tend to jump around in time ("oh, and before you do Y, you need to do X") or between tasks. It will take me sitting down and really thinking about it, perhaps with pen and paper, before the ordering of goals as I express them will match the order in which they must be accomplished.

I hypothesize that our natural tendency is to instruct using subgoals, constraints, and contingencies that will effectively achieve a greater goal - i.e. we discuss 'strategies'. You say I'll get a sequential set of instructions, but I posit I will receive a loosely ordered declarative mishmash. This is something we could test, quite surreptitiously.

If that is how we communicate, then we would be served well by some variation of concurrent constraint programming (perhaps a fuzzy, heuristic one with probabilities and priorities) rather than by imperative programming. This way we could eschew the undesirable communication step of organizing our code in the temporal dimension.
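
As a rough sketch of what that could look like (Haskell, with invented task names; this is a toy, not a real constraint solver), the dinner example above can be stated as an unordered pile of subgoals and preconditions, leaving the ordering to be recovered mechanically:

    import Data.List (partition)

    -- A subgoal plus the subgoals it depends on (names are made up).
    data Task = Task { name :: String, needs :: [String] }

    -- The dinner example, stated as an unordered collection of constraints.
    dinner :: [Task]
    dinner =
      [ Task "cook dinner"   ["buy groceries", "preheat oven", "clean kitchen"]
      , Task "preheat oven"  []
      , Task "buy groceries" []
      , Task "clean kitchen" []
      ]

    -- A toy solver recovers *some* valid ordering from the constraints,
    -- so the author never has to write the sequence down.
    schedule :: [Task] -> [String]
    schedule [] = []
    schedule ts
      | null ready = error "cyclic constraints"
      | otherwise  = map name ready ++ schedule rest'
      where
        (ready, rest) = partition (null . needs) ts
        done          = map name ready
        rest'         = [ t { needs = filter (`notElem` done) (needs t) } | t <- rest ]

    main :: IO ()
    main = mapM_ putStrLn (schedule dinner)

The point isn't the toy scheduler; it's that the temporal ordering never appears in what the author writes.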

A lot of things happen in

A lot of things happen in parallel (hierarchically). You get some strange phenomena if the corpus callosum connecting your hemispheres gets cut: one side may process something and cause a reaction (like laughing) and the other won't understand why.

However... this is all pretty low level. We're (typically) interested in the cognitive side of things. Storytelling is an important ability and a good explanation for a lot of thought processes. If you've lost your keys, you'll replay what you did before in order to find them. Why do you stay away from a ledge? Because you imagine yourself walking up to it and falling off.

Sequential thinking clearly isn't the only thing we do (e.g., association is pervasive). I think both are important: I've been wanting to get into PLs for BCIs, and supporting both parallel/subconscious and sequential/directed/conscious processes seems important. I haven't looked at this for a while, but it's fascinating :)

Edit: I had meant to add this before, but, perhaps of most interest to us, telling a story is similar to a good proof -- you try not to jump around (even though you can).

when we envision how

Seems to me, when we envision how something happens, we naturally focus in on what's going on in one place at a time, as if we were an observer who could only be looking at one place at a time — that being how we ourselves experience reality.

Agreed somewhat

I get the point about the peanut butter sandwich game, but taking the steps literally/pedantically is closer to programming without the benefit of existing libraries. People assume/imply quite a bit while giving instructions because they expect a common basis for that refinement. People are really awful at spelling out the elemental steps for doing things. I don't particularly see that awfulness as a failing of imperative instruction-giving so much as an example of how humans will abstract away steps that are seen as common knowledge.

Still, you are probably right as I think about it; the declarative mishmash is likely what you'll get for most non-trivial tasks.

Constraint programming is more natural

Agreed that constraint programming is more natural than imperative programming. That is what I loved so much about Lustre at my engineering school. But I note that this flavour of constraint programming has a temporal element (pre), which is what prevents Lustre programs from degrading into an impenetrable mess. In my opinion, any generally useful and usable programming language needs to express sequentiality.
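
For readers who haven't met Lustre, here is a rough Haskell analogue (my sketch, not Lustre itself) of that temporal element: streams as lazy lists, with an initialised one-tick delay standing in for `x -> pre s`:

    -- Streams as infinite lists; `fby` ("followed by") is an initialised
    -- one-tick delay, roughly Lustre's `x -> pre s`.
    fby :: a -> [a] -> [a]
    fby x s = x : s

    -- counter = 0 -> pre counter + 1: a stream constrained against its own past.
    counter :: [Int]
    counter = 0 `fby` map (+ 1) counter

    main :: IO ()
    main = print (take 5 counter)   -- [0,1,2,3,4]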

Express Sequentiality

I agree. We need the ability to express sequencing and other ad-hoc workflows (e.g. do I before E except after C). But we don't need sequential semantics implicitly inflicted upon us by our syntax (at least, not without our choosing to make it so).

I call a language 'declarative' to the extent it achieves commutativity, associativity, and idempotence in its syntax. I would say I want a declarative language. Constraint programming is a powerful model, but not very compositional. I would not want it for general purpose use. But there are many declarative languages that aren't about expressing and solving a system of constraints.
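
A small Haskell illustration of that distinction (my example, not anything specific the parent comment proposes): pure declarations commute, while sequencing is written down only where it is actually meant:

    -- The two `where` bindings can be reordered freely: declaration order
    -- carries no meaning here.
    area :: Double
    area = width * height
      where
        height = 2.0
        width  = 3.0

    -- Sequencing is expressed explicitly, and only where we choose it.
    main :: IO ()
    main = do
      putStrLn "first"    -- here the order of the lines *is* part of the meaning
      putStrLn "second"
      print area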

There are several problems. And several solutions.

First of all, thinking is a technique. Whatever paradigm we discover for taking advantage of parallelism, it's my opinion that people will, with practice, learn how to think that way and get better at it.

Second, I don't believe that people naturally think in terms of sequential instructions. It's certainly what we learnt studying programming in the imperative paradigm, but you'll notice that a majority of human beings don't seem able to successfully build abstractions and manage complexity in that paradigm. We sometimes say people "don't think like programmers" when we mean that they can't seem to think in sequential instructions and build useful patterns out of them. Well, how is that majority of human beings thinking? If we can figure that out, we'll have a decent shot at saying how people "naturally" think.

It's entirely possible that people think in terms of rules. And we've seen rule-based languages (CLIPS, OPS5, PROLOG) and they've mostly failed to get any traction because they don't scale very efficiently. In the general case where assertions can change due to external influences, the basic rule-matching function is very expensive.

I think the resistance to functional languages is not so much about sequential instructions -- because, IMO, people mostly don't think that way anyway -- but about the awkwardness of using the current generation of functional languages to manage, model, or respond to changing state, especially state that changes unpredictably in response to things external to the program (an input stream). I'm pretty sure that humans do think in terms of state, unpredictably changing state, and responses to unpredicted changes in state.
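
For concreteness, a minimal sketch of how that tends to look in Haskell today (assuming the stm library; the "input stream" is simulated by a second thread): the externally changing state has to become an explicitly managed cell rather than something the language lets you talk about directly:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM (atomically, newTVarIO, readTVarIO, writeTVar)

    main :: IO ()
    main = do
      latest <- newTVarIO (0 :: Int)   -- state the program does not control
      _ <- forkIO $                    -- stand-in for an unpredictable input stream
        mapM_ (\x -> atomically (writeTVar latest x) >> threadDelay 10000) [1 .. 5]
      threadDelay 60000
      v <- readTVarIO latest           -- respond to whatever the state is *now*
      putStrLn ("latest input: " ++ show v)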

Anyway, there are a bunch of paradigms, each of which some people seem good at. I mean, pick one: flowcharts are a good model for machine code, or for any language without built-in stack discipline. Chomsky grammars are a good model for pattern-matching languages. Function evaluation is a good model for (duh) functional languages. Petri nets are a good model for sequential allocation of finite resources of different types. Regexps are a good model for constrained (efficient) pattern matching. Neural networks are a good paradigm for some types of self-adaptive systems. Genetic algorithms are a good paradigm for searching some very complex solution spaces.

Some of these paradigms are entirely okay with parallel processing. Some aren't. Some will require creativity to figure out how to build a language around them and how to optimize that language. Some lend themselves only to little languages applicable to limited domains. But... the field is still wide open. Much of this stuff hasn't even been investigated in terms of applicability to language design.