Salon des Refusés -- Dialectics for new computer science

In the first decade of the twenty-first century, the Feyerabend Project organized several workshops to discuss and develop new ways to think about programming languages and computing in general. The latest event in this direction is a new workshop that will take place in Brussels in April, co-located with the new <Programming> conference -- itself also worth a look.

Salon des Refusés -- Dialectics for new computer science
April 2017, Brussels

Salon des Refusés (“exhibition of rejects”) was an 1863 exhibition of artworks rejected from the official Paris Salon. The jury of the Paris Salon required near-photographic realism and classified works according to a strict genre hierarchy. Paintings by many later-famous modernists, such as Édouard Manet, were rejected and appeared in what became known as the Salon des Refusés. This workshop aims to be the programming language research equivalent of the Salon des Refusés: we provide a venue for exploring new ideas and new ways of doing computer science.

Many interesting ideas about programming might struggle to find space in the modern programming language research community, often because they are difficult to evaluate using established evaluation methods (be it proofs, measurements or controlled user studies). As a result, new ideas are often seen as “unscientific”.

This workshop provides a venue where such interesting and thought-provoking ideas can be exposed to critical evaluation. Submissions that provoke interesting discussion among the program committee members will be published together with an attributed review that presents an alternative position, develops additional context or summarizes discussion from the workshop. This way of engaging with papers not only enables the exploration of novel programming ideas, but also encourages new ways of doing computer science.

The workshop's webpage also contains descriptions of some formats that could "make it possible to think about programming in a new way", including: Thought experiments, Experimentation, Paradigms, Metaphors, myths and analogies, and From jokes to science fiction.

For writings on similar questions about formalism, paradigms or method in programming language research, see Richard Gabriel's work, especially The Structure of a Programming Language Revolution (2012) and Writers’ Workshops As Scientific Methodology (?); Tomas Petricek's work, especially Against a Universal Definition of 'Type' (2015) and Programming language theory: Thinking the unthinkable (2016); and Jonathan Edwards' blog, Alarming Development.

For programs of past events with a similar inspiration, you may be interested in the Future of Programming workshops: program of 2014, program of September 2015, program of October 2015. Other events that are somewhat similar in spirit -- but maybe less radical in content -- are Onward!, NOOL and OBT.


Some cool links. A few years

Some cool links. A few years ago I read about a new paradigm intended for more resilient and infinitely scalable computing based on very small computing cells on a highly interconnected/parallel fabric, thereby somewhat mimicking properties we find in biology. Something similar to this IIRC (see later work here). Certainly opened my mind to alternate ways of constructing programs.

Unconventional Programming Paradigms - UPP'04

UPP '04 - Unconventional Programming Paradigms. They have some interesting work here that's related to this thread:

Unconventional approaches of programming have long been developed in various niches and constitute a reservoir of alternative avenues to face the programming crisis. These new models of programming are also currently experiencing a renewed period of growth to face specific needs and new application domains. Examples are given by artificial chemistry, declarative flow programming, L-systems, P-systems, amorphous computing, visual programming systems, musical programming, multi-media interaction, etc. These approaches provide new abstractions and new notations or develop new ways of interacting with programs. They are implemented by embedding new and sophisticated data structures in a classical programming model (API), by extending an existing language with new constructs (to handle concurrency, exceptions, open environment, ...), by conceiving new software life cycles and program execution (aspect weaving, run-time compilation) or by relying on an entire new paradigm to specify a computation.

The practical applications of these new programming paradigms prompt research into the expressivity, semantics and implementation of programming languages and systems architectures, as well as into the algorithmic complexity and optimization of programs. The purpose of this workshop is to bring together researchers from the various communities working on wild and crazy ideas in programming languages to present their results, to foster cross-fertilization between theory and practice, and to favor the dissemination and growth of new programming paradigms.

Apart from two invited talks, the contributions are organized into five tracks:

– Bio-inspired Computing
– Chemical Computing
– Amorphous Computing
– Autonomic Computing
– Generative Programming
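One of the paradigms the call mentions, L-systems, gives a flavor of how different these models can be: a program is just an axiom plus parallel rewrite rules. A minimal illustrative sketch (the function and names are mine, not from any of the cited work):

```python
# Minimal L-system interpreter: start from an axiom string and, at each
# step, rewrite every symbol in parallel according to the production rules.
def lsystem(axiom, rules, steps):
    for _ in range(steps):
        # Symbols without a rule are left unchanged.
        axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
    return axiom

# Lindenmayer's original algae model: A -> AB, B -> A.
algae = {"A": "AB", "B": "A"}
print(lsystem("A", algae, 4))  # prints "ABAABABA"
```

The string lengths after each step follow the Fibonacci sequence, one reason L-systems turned out to model plant growth so naturally.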

Space-time Programming

A nice review of space-time programming, which introduces computing paradigms inspired by physics, among others, with the goal of avoiding the over-specification of spatio-temporal properties that is rife in modern programming.

Thought provoking

Thanks for the links - some very interesting reading in there. The density of original ideas is very high. "Thinking the unthinkable" does a good job of laying out the case for heading towards uncharted parts of the map. I found the following section quite exciting - it hints at some of the issues that lie between researching and teaching programming languages. A taxonomy of programming ideas has been mentioned in discussions here over the years; it would certainly simplify the teaching of programming to students. Although, looking at Stack Overflow as an attempt to build something of that kind, the ratio of effort to results, and of signal to noise within those results, suggests that new approaches are needed.

Theoretical computer scientists attempt to extract the mathematical essence of programming languages and study its formal properties. Now consider an episteme that instead aims to explore the design space and build a taxonomy of the objects that occupy that space. It considers the entities as they are, rather than trying to extract their mathematical essence. What would be the consequence of such a way of thinking, which attempts to relate and organize programming ideas in taxonomies rather than abstracting?

Blame the French

An interesting quote by the mathematician Dror Bar-Natan:

What a disaster it was that the French (Cauchy and his generation, and then Bourbaki) found that practically all of mathematics can be formalized! This formalization procedure seemed so powerful that we have substituted "formal mathematics" for "mathematics", and for many of us math ain't math until it is reduced to a sequence of theorems and proofs. Hence, for example, as a group we were, and largely still are, blind to the discovery of path integrals, and we left this whole arena to the physicists, whose motivations are entirely different. Who knows how much more we have missed by requiring that the whole process, from imagination to formalization, not only be fully carried out within each mathematical context, not only be carried out in full by each generation of thinkers on each mathematical context, not only be fully carried out by each individual throughout her lifetime, but even be carried out in full within each individual submission to a journal!

Physicists' use of Dirac

Physicists' use of Dirac delta functions and their use of infinitesimals also haven't been entirely satisfactorily formalised, as far as I know. Some approaches such as distributions, differential forms, and hyperreals come close but do not capture all the manipulations that physicists do.
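To make the difficulty concrete, the delta "function" is pinned down operationally by its sifting property (a standard identity, sketched here for illustration):

```latex
\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\,dx = f(a)
```

No classical function can satisfy this: anything that vanishes for every $x \neq a$ integrates to zero, not to $f(a)$. Schwartz's distribution theory recovers the identity by treating $\delta$ as the linear functional $f \mapsto f(a)$, but it still assigns no meaning to some manipulations physicists routinely write down, such as the product $\delta(x)^2$.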

Simplicity comes later

This is yet another manifestation of the argument that I know and occasionally use, "formalization has a cost, sometimes it can come only later and we should let that happen instead of silencing informal work".

I find the general argument rather convincing, despite the fact that it does not at all match the way I do my own research. I usually would not feel my work complete enough before some form of formalization, just like I don't feel that a patch is complete enough before it has a clear commit message, whose writing often reveals shortcomings and prompts me to fix them first.

A safe and obvious position is then not to take sides: it is good that there is a diversity of methodologies, and we should preserve that.

But then it is extremely difficult to find evaluation techniques that do not, by their very design, embed a bias coming from one approach or another. I don't think you could say, for example, "you don't have to formalize, but it must be rigorous". This is a very difficult problem and I am happy to see that people keep attacking it.

I put it more simply.

People are trying very hard to develop formal justification for effects in Artificial Neural Networks which we see arise (and exploit) as emergent phenomena. Someone discovers something that works, and someone else is furious because his favorite mathematical formalism doesn't explain why it works, and throws out a year or more of work and has to start over.

In an argument with such a person just this week, I said

"It is necessary to observe reality before attempting to explain it."

My point is that producing formalisms when the facts are still being discovered is premature and almost certain to be wrong.

In PL, we have broad areas of formal knowledge about type theory, but very little rigorous observation of how human beings actually interact with constructs that embody these theories.

I believe that the type theories we have so far developed are beautiful formalisms, but only of marginal relevance to the value of programming languages. Programming language studies are likely to become useful to actual programmers only when the observations on which they're founded include the factors of psychology and 'mental ergonomics' relevant to programmers.

But a fair number of people doing PL work today are just horrified by that, because that wouldn't be math. So these are aspects of reality not yet systematically observed.

Believable

This is a story that one can believe in. I'll do my "PL person today" by pointing out that the "marginal relevance" idea cannot alone explain why, with a rather startling regularity, ideas that are important for math-based studies of programming languages (lambda-abstractions, static types, monads....) are, after some delay, found to be also important and useful for the human practice of programming.

Belief and doubt

I might allow the point on λ-calculus, with some reservations; but I'm skeptical on static types, and even more so on monads.

Heh

By "found to be important and useful" I mean a kind of crowd opinion: "there is a large enough share of practitioners who seem to agree that X is important and useful to their practice". Independently of what we believe deep down, it is uncontroversial that a large share of programmers agree that static types help them ("for maintenance of large programs" is the least controversial proposal, I think). In this regard "monad" would be stretching it a bit, but "andThen" or "join" are gaining ground every day as nice patterns to use on containers/futures/options/etc.
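For readers who haven't met the pattern, here is a sketch of what "andThen" buys you, using a toy Maybe container (the class and helper names are mine, not taken from any particular library):

```python
# A minimal optional-value container demonstrating the "andThen" pattern:
# chain computations that may each fail, without explicit None checks.
class Maybe:
    def __init__(self, value, present=True):
        self.value, self.present = value, present

    @staticmethod
    def nothing():
        return Maybe(None, present=False)

    def and_then(self, f):
        # Apply f (which itself returns a Maybe) only if a value is
        # present; an absent value short-circuits the whole chain.
        return f(self.value) if self.present else self

def lookup(d, key):
    return Maybe(d[key]) if key in d else Maybe.nothing()

config = {"db": {"host": "localhost"}}
host = lookup(config, "db").and_then(lambda db: lookup(db, "host"))
print(host.value)      # prints "localhost"
missing = lookup(config, "db").and_then(lambda db: lookup(db, "port"))
print(missing.present)  # prints "False"
```

The same chaining shape shows up on lists (flatMap), promises (then) and option types across mainstream languages, which is the "gaining ground" described above.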

(One could argue that the observers, namely math-PL enthusiasts, actively engage with the programmers to convince them of these facts, so that mainstream acceptance does not have the value of an independent discovery. Or that a lot of other widely successful ideas, such as macros or object-oriented programming, were not anticipated by formalists, which suggests that having other ways to make predictions would also be valuable -- I completely agree.)

language groupies

I'm a bit wary of the reasons programmers actually use one language or another; external factors like compatibility and management get into it. I'm interested in the phenomenon of languages programmers develop abiding love for, whether they end up using them or not. Lisp has that. C. Vintage Turbo Pascal, no later than version 3. Classical BASIC.

I have this theory that for any language to develop a fan base like that, it must have something elegant deep within its structure that resonates with its fans -- and that this deep elegance is usually not passed on to its descendants. Descendant languages tend to be based on something like "oh, a bunch of people love that old language, so let's make our syntax similar to it". We've got a host of languages whose syntax looks similar to C; but none of them are C.

Part of the deep elegance of C is its elegant notion of data as a sequence of binary bytes accessed by pointer arithmetic. Pointer arithmetic should be thought of not as a "feature" with pros and cons, but as an aspect of the core elegance of the language; and either backing away from it or adding complicating/clashing stuff to it results in a language that just isn't C. Lisp has its notion of building trees from cons cells (the elegance is more than that, of course, but that's the core data structure). BASIC is an interesting case because it's not as obvious just what the elegance is (modern ridicule may get in the way, there), but I do believe there has to be deep elegance to it or it would never have achieved its beloved status; I'd like someday to undertake a serious sunken-treasure-search for the elegance of BASIC.

One of my favorite puns is the punchline of a verse from Bob Kanefsky's The Eternal Flame (aka "God wrote in Lisp code") — "Now, some folks on the Internet put their faith in C++. // They swear that it's so powerful, it's what God used for us. // And maybe it lets mortals dredge their objects from the C. // But I think that explains why only God can make a tree."

I don't think your theory

I don't think your theory properly accounts for nostalgia.

Nostalgia ain't what it used to be

Nostalgia needs to be taken into account when considering how an old language is perceived when it's old. It doesn't figure in how a language was perceived before it was old.

Explains more than the love.

Not all languages develop a devoted sect over time. Sometimes languages inspire true hatred in those who are forced to use them. Not the kind of mild, weak dislike that can occur quickly when a programmer is learning a language, but rather the hot, deep fury that comes when they really understand all the pitfalls in the design. People have some really bad things to say about the long-term use of Java or Perl.

Fire and ice

This suggests to me a couple of questions.

  • Does the first impression a language makes on a programmer, for good or ill, only deepen over time, or are there languages one can initially dislike and over time come to love, or initially like and over time come to hate?
  • How far is it possible for the same language to inspire abiding love in some programmers and hot, deep fury in others — and is there some discernible pattern in what sort of languages are likely to do this?

Definitely perceptions can change.

I think it depends on the particular programmer. I've certainly met many who never get over their first impressions. On the other hand, I passionately hated Ada the first time I used it, and now I prefer it to C. Likewise I had a dislike of C when I first came to it (from Pascal) but now enjoy using it.

My only remaining gripe with C is that there is too much behavior left unspecified - I have sought out the compiler that makes a policy of implementing the specification in the most useless way it permits, to force me to write good-quality code. I also miss array bounds checking, but that would only work with resizable arrays, which isn't the C way.

Not all languages are suited

Not all languages are suited equally well for all tasks. If you use the wrong language for the wrong task, I guess you can't be thrilled; but if you pick the right combination, you might become a fan. That's what makes the .NET platform interesting: the ability to choose among a variety of languages, and even to mix them in a single application.

Programmers

It seems inevitable to me that some kinds of programming languages would particularly appeal to programmers whose minds work in particular ways. I haven't gotten a handle on how that works, though.

I do believe I have a handle on the most important overall inventory of how human minds work, based on the Gordon and Morse "cognitive style" model (about which I see almost nothing on the internet; the only thing I noticed was one paragraph from a 1969 book in Google Books), augmented with a third dimension I've found to be essentially of the same primordial nature as the two of Gordon and Morse's model. GM has two dimensions, called "remote association" and "differentiation" (RAT and Diff). As a not-nearly-subtle-enough explanation, RAT is a talent for seeing structural patterns, Diff a talent for seeing key distinctions. RAT is related to finding answers, Diff to finding questions; in programming, RAT is related to being good at structuring programs, Diff to being good at debugging (where asking the right question is all the magic). Broadly, the four combinations of high/low on these two dimensions make for different types of minds (GM call them "styles") that develop different ways of doing things because they have different abilities to bring to bear, and the different types often find the methodologies of other types distressing. Diametrically opposite types (HH vs LL, HL vs LH) tend to find each other especially puzzling/distressing.

My addition to all this is that there's a third dimension that interacts with these in much the same way that RAT and Diff interact with each other, which has to do with the ability to retain unstructured data; I've considered the term "rote memory" for it, which maybe isn't ideal, but for now I'll call it Rote. Within the HH type, I've observed there are definitely two very different kinds, depending on whether they can pull in lots of raw data before figuring out what to do with it. The classic extreme case for HHL (high RAT, high Diff, low Rote), I've always felt, is Albert Einstein: virtuoso with questions and answers but with no tolerance for complexity, and therefore obsessed with simplicity.

Challenged to come up with an analogous case for HHH, I thought of Leonhard Euler. The thing is, HHL and HHH have different perceptions of what constitutes simplicity. I vividly recall a PL conversation (many years ago) between an HHL and an HHH, in which the HHL advocated simplicity and the HHH agreed wholeheartedly, but then it later emerged that they had very different views of whether or not C++ is "simple".

So from a programming perspective, some programmers will bring to the table a native talent for structure while others will not, some will bring to the table a native talent for debugging while others will not, and there will be differing thresholds for simplicity. This suggests to me there should be rich variation in what sorts of programming languages different cognitive types of programmers favor... if one could get a handle on it.

I believe that the type

I believe that the type theories we have so far developed are beautiful formalisms, but only of marginal relevance to the value of programming languages. Programming language studies are likely to become useful to actual programmers only when the observations on which they're founded include the factors of psychology and 'mental ergonomics' relevant to programmers.

I find this claim unconvincing. When I was studying electrical engineering, there was no focus on the psychology or mental ergonomics of the formalisms relevant to engineers. We were expected to learn and understand the appropriate physics and mathematical formalisms that describe a system so we could solve that system's problems.

Lambdas seem to describe computation quite well as a formalism, and types seem to ensure "correct" operation on data quite well too. So why insist these formalisms are insufficient, instead of our education or our professional standards being insufficient? Or do you also think engineering is deficient?

Edit: which isn't to say programming languages couldn't be made better with HCI studies, but to claim there's something wrong with lambdas and types, and that PLT will be useful only when HCI is applied, seems totally wrong to me.

A body of formal knowledge is needed of course.

I never claimed that programming, or engineering of any other kind, would benefit by being done in ignorance of theory. What I'm saying is that we haven't paid enough attention to putting those theories into forms that are easy to think about and use.

EE has plenty of tools which are now standard that were, when conceived, in forms clunky, limited, bizarre, and hard to use. I mean, start with calculus. Would you rather use Leibniz's "theory of infinitesimals" and linear algebra, or would you rather use the power rule and the chain rule and matrix mathematics, and the giant books beloved of engineers, full of integral and derivative forms where you can just look it up? Or even the information from those same books, now embedded for greater convenience in tools like Mathematica?
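The contrast here is between deriving each result from first principles and applying mechanical rules; the convenient forms in question are the familiar ones:

```latex
\frac{d}{dx}\,x^n = n\,x^{n-1},
\qquad
\frac{d}{dx}\,f\bigl(g(x)\bigr) = f'\bigl(g(x)\bigr)\,g'(x)
```

The underlying theory (limits of difference quotients) is unchanged; what the rules add is a form that is easy to think with and to apply.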

When you're designing things for human use, it can take a bunch of tries and different ways of expressing things before you find one that most of the humans involved care to use. And a lot of that refinement of form and expression doesn't change the underlying theory a single jot.

I disagree with how I read

I disagree with how I read both of your claims:

we haven't paid enough attention to putting those theories into forms that are easy to think about and use

I think the theories even 40 years old (like ML) are plenty easy to use for the vast majority of programming, without even going into more modern, expressive ones. Which isn't to say these theories couldn't eventually be even easier or more expressive, but that's not the claim you're making.

EE has plenty of tools which are now standard that were, when conceived, in forms clunky, limited, bizarre, and hard to use.

Engineering is already hard, any applicable formalism makes the hard problems more tractable. I expect engineers to learn the tools available, regardless of how clunky, limited, bizarre and hard they are to use, as long as they solve the problems at hand.

You seem to mean that you want engineers to be productive, which is fine, but that's not what you're actually saying, which is that the available formalisms are insufficient or lacking for most purposes. ML is not comparable to Leibniz's calculus in difficulty to use, or unsuitability for purpose, for example.