Variables as Channels

Method mixins - written by Erik Ernst

It is quite common to describe languages with mutable state as machine-oriented, and to describe functional, logical, and other kinds of `declarative' languages as more abstract, liberated from the old-fashioned attachment to bits and memory cells. This opinion was brought to prominence with the 1977 Turing Award speech by John Backus [...] We must recognize that the functional and other paradigms have produced deep and useful results. However, it is our opinion that imperative languages, especially object-oriented ones, are being widely used because of their inherent power and not because programmers are nostalgic about writing programs in assembly language. It is not a question of abstraction or hardware concreteness; it is about safety and freedom. Restrictive communication topologies bring safety, and flexible topologies bring freedom. To substantiate this, we need to consider variables and similar concepts as communication channels, thereby making them comparable.

This paper provides an interesting perspective on the role of variables in programming. It is about a construct called method mixins, but the discussion about the role of variables in Sec. 2 is relatively independent of the specific construct proposed in the paper:
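The paper's framing of variables as communication channels can be made concrete with a small Concurrent Haskell sketch (not taken from the paper; the `Msg`, `newCell`, `getCell`, and `putCell` names are illustrative). A mutable cell becomes a process that owns the state, and reads and writes become messages on a channel to that process — so who may "use the variable" is exactly who holds the channel:

```haskell
import Control.Concurrent
import Control.Concurrent.Chan

-- Messages a cell understands: read the value (with a reply slot) or write it.
data Msg a = Get (MVar a) | Put a

-- A "variable" as a server process holding the state behind a channel.
newCell :: a -> IO (Chan (Msg a))
newCell initial = do
  ch <- newChan
  let loop v = do
        msg <- readChan ch
        case msg of
          Get reply -> putMVar reply v >> loop v
          Put v'    -> loop v'
  _ <- forkIO (loop initial)
  return ch

-- A read is a Get message plus waiting for the reply.
getCell :: Chan (Msg a) -> IO a
getCell ch = do
  reply <- newEmptyMVar
  writeChan ch (Get reply)
  takeMVar reply

-- A write is a fire-and-forget Put message.
putCell :: Chan (Msg a) -> a -> IO ()
putCell ch = writeChan ch . Put

main :: IO ()
main = do
  cell <- newCell (0 :: Int)
  putCell cell 42
  v <- getCell cell
  print v
```

Restricting which parts of a program receive the channel restricts the communication topology — the safety/freedom trade-off the excerpt describes.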


Power the excuse, not the reason

My experience as an OCaml advocate is that the real reason most programmers don't learn functional languages is unfamiliarity with the paradigm, not some comparison of the "power" of the differing approaches. Learning a new programming paradigm is much harder than learning a new language in an already familiar paradigm. Any comparison of the power of the differing approaches is at best an excuse or rationalization.

Can programming be liberated from the Backus style? :-)

We must recognize that the functional and other paradigms have produced deep and useful results.
Absolutely.
However, it is our opinion that imperative languages, especially object-oriented ones, are being widely used because of their inherent power and not because programmers are nostalgic about writing programs in assembly language.
My opinion is that this is not the main reason for their wide adoption. Nevertheless, I agree that the stateful paradigm has its merits. The computation model I would like to explore would be a stateful reactive system in which atomic reactions are pure and referentially transparent [1].

I just don't see how e.g. monads can scale up to business transactions spanning weeks, involving multiple legal bodies, and running on loosely connected hosts.

[on edit: congratulations, if this is your first story :-)]


[1] Update: just seen in Monad.Reader: JoinHs looks interesting, I always liked Join Calculus.

Monad Scaling

I just don't see how e.g. monads can scale up to business transactions spanning weeks, involving multiple legal bodies, and running on loosely connected hosts.

This comment seems somewhat bizarre to me. Could you elaborate? Monads are (at least from one viewpoint) an abstraction of computational models. Is it that the systems of computation you think are suitable for these tasks happen not to be monads? How about Hughes arrows?

What is it that the monadic abstraction (essentially a mathematical concept) is missing that would prevent systems which are instances of it to be used for such tasks?

- Cale

Oops

This comment seems somewhat bizarre to me.
Yes, now it sounds silly to me as well.

I find the Haskell solution to the problem of connecting the stateful body in the stateful world to a referentially transparent soul not very satisfactory (some arguments). The IO monad is not a particularly good pineal gland (if only for cognitive reasons). Arrows are definitely more general, but my experience with them is even more limited than with monads, so I will refrain from judgement.

My issue with monads is that they are not in general composable (yes, I know about transformers), which IMO leads to the IO monad becoming a kitchen sink.

The IO monad itself pretty mu

The IO monad itself pretty much has to be a kitchen sink - the trick's in wrapping it to get a restricted subset and then layering appropriate transformers on top.
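The wrap-and-layer approach described above can be sketched roughly like this (a minimal, hypothetical example; `Teletype`, `ttPutLine`, `App`, and `step` are names invented here, and the transformers come from the mtl library). A newtype around IO exposes only a restricted set of effects, and a `StateT` layer adds pure state on top of that restricted monad:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.State

-- Wrap IO so that only the operations we export are available.
newtype Teletype a = Teletype { runTeletype :: IO a }
  deriving (Functor, Applicative, Monad)

-- The one effect this restricted monad chooses to expose.
ttPutLine :: String -> Teletype ()
ttPutLine = Teletype . putStrLn

-- A counter layered over the restricted monad: the state handling is pure,
-- and the only side effects reachable are those Teletype permits.
type App = StateT Int Teletype

step :: String -> App ()
step msg = do
  n <- get
  put (n + 1)
  lift (ttPutLine (show n ++ ": " ++ msg))

main :: IO ()
main = runTeletype (evalStateT (step "hello" >> step "world") 0)
```

Client code written against `App` cannot perform arbitrary IO, which is exactly the "restricted subset" the comment is after.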

Historical context

Discussed before, I know, but...

Backus vs Dijkstra.

"it is our opinion that..."

"it is our opinion that..."

Because opinions make great science! I love how dogma is a perfect basis for publishing papers on language design. In fact, it seems to be the only basis for publishing papers on language design; where you can get away with formulating your whole research around a usability argument while providing no usability study at all.

Which raises the question:

What usability studies have been conducted, under what conditions and limitations, and using what methodology?

Many of us are knowledgeable about the mathematical parts of PLT--various calculi, type theory, category theory, etc. All stuff that can be done on a computer or a whiteboard by a single scientist. Most of us (including myself) have NO idea whatsoever how to conduct a usability study, or how to interpret one that has been conducted.

Now such studies are performed, obviously--and it seems that every boutique programming language's advocates will cite at least one study proving their language better than Brand X (or is that Brand J and Brand C++?). However, many such studies are ignored; they don't address issues of why (as in why one language might be better than another at some chore), and they are frequently sufficiently limited in scope that it's questionable how valid they are in the general case.

Of course, the problem is that good usability studies are hard. And expensive. And they try to measure something which is difficult to quantify. As a result, we get informed conjecture (but still conjecture) masquerading as established scientific theory, with the strongest conjectures often coming from the loudest mouths.

Perhaps, Ehud, we need a "usability" or "human factors" category, for dealing with research concerning the usability of various programming languages (and other methods for instructing computers).

Human Factors

Perhaps, Ehud, we need a "usability" or "human factors" category, for dealing with research concerning the usability of various programming languages (and other methods for instructing computers).

Actually we discuss these issues quite a lot. They usually go in the departments that reflect the nature of the paper being discussed, but perhaps there's room for a separate category.

By the way, many LtU-ers are professional programmers, so they have a clue about these issues, even if not backed by empirical studies (which are almost impossible to get right anyway). Two relevant discussions.

I am not a number... oh wait, maybe I am...

What usability studies have been conducted, under what conditions and limitations, and using what methodology?

I spent a lot of time looking for such beasts (and they do exist) but I didn't find any of them very convincing or informative.

My general position is that usability studies are useful, but in specific cases rather than for general principles. If you have a concrete implementation, and you want to refine it, a usability study can help you make the little decisions that improve its usability.
But as with all software design issues, there are unique tradeoffs that must be made in every instance that make it much harder to find universal principles that always work.

Having said that, I think that many of the things we consider "solid principles" of PL design, such as modularity and referential transparency, are in fact "human factors". Though many here seem to associate this expression with the fuzzy and introspective, I think that many of these factors are entirely studiable using formal methods. (Though my last formulation of this met with a response only from the crickets. ;-) )

The fundamental idea is that the "human experience" of programming is basically forming semantic models for source code, and that, since we already know how to study semantic models for PLs, the exact same techniques can be used to study how a programmer understands a PL.

There are likely some slight differences from a semantics that would be devised for use by a computer, but I think there will be more in common than different. Basically, I think mathematical methods are on the right track already, and that they are where fruitful veins for the study of PL "human factors" lie.

Re: oh, wait...

Ah, yes, the infamous Hamann number, discussed greatly, as of late! :-)

...if you look at the paper a

...if you look at the paper, and not just what I cited, then you will find a more substantiated argument.

Concerning usability studies: the CTM book has an interesting study on how (the absence of) mutable state influences the modularity of programs.