PL vs. PX

I'm beginning to wonder if I'm in the wrong field. Many seem to be fixated on language itself rather than on the resulting programming experience (e.g. does this code look good vs. how was this code written?). The general sentiment of the field is clearly language-focused; e.g. take this post from pl-enthusiast:

From this vantage point, PL researchers tend to focus on developing general abstractions, or building blocks, for solving problems, or classes of problems. PL research also considers software behavior in a rigorous and general way, e.g., to prove that (classes of) programs enjoy properties we want, and/or eschew properties we don’t. ...

The ethos of PL research is to not just find solutions to important problems, but to find the best expression of those solutions, typically in the form of a kind of language, language extension, library, program analysis, or transformation.

So a focus on abstractions in the abstract, which is completely reasonable. But does it really represent programming? Not really; PL doesn't seem to be about programming. It has applications to programming, but...

I’ve picked the three examples in the above discussion for a reason: They are an approach to solving general problems using PL-minded techniques in combination with techniques from other communities, like machine learning inference algorithms or cryptography.

So...PL is not about programming; rather, it is a specific kind of theory field oriented around abstraction, which has applications to many other activities as well. In that case, my disillusionment with PL is just a matter of misguided expectation.

But that raises the question: what is a good academic home to talk about programming experiences, where PL is just a minor component of those activities? HCI? SE? None of those feel right.

re: what is a good academic home to talk about programming ex...

what is a good academic home to talk about programming experiences,

A union organizing meeting; an anarchist long hall; that kind of thing.

Academia is capitalized. It's done. It's dead. It's past tense. Welcome to the new dark ages.

Academia serves a purpose,

Academia serves a purpose, even if we often can't tell what it is. I agree that the system is broken, but there are still smart people there that I wished I had more in common with.

re smart people there that I wished I had more in common with.

smart people there that I wished I had more in common with.

Organize.

Academia serves a purpose,

Only in an authoritarian, anti-democratic society.

In a free society, "academia" is a free association and nothing more.

Organize. I'm not sure the

Organize.

I'm not sure the world needs another forked academic community. And if it does, I'm not social enough to do it.

In a free society, "academia" is a free association and nothing more.

There are plenty of things wrong with academia, but whatever community you jump to will be the same. People are just flawed, and I'm much more interested in the non-personal tyranny of PL vs. PX.

Reach Out

There's plenty of smart folks who share similar vision to yours. Have you reached out to them? Start a Google Hangout or host a forum. Email some folks, invite them to come participate. There's a handful of initial folks who make enough noise on the web that you could find them, and then others will appear once there's somewhere to congregate.

Sure, that's what I'm doing.

Sure, that's what I'm doing. It just came to me that PL isn't really about PX...that it isn't even supposed to be.

where can i subscribe to your newsletter?

i say that completely seriously. i need to know there are saner people out there somewhere, who Get It. :-)

Team up with a PLT

Team up with a PLT researcher and design your IDEs for a PLT-approved language like Idris ;-) Then you can fully focus on your core strength (the IDE) and it would probably make it a lot easier to publish...

experience/language designs are not separable

If the PL design starts and ends with consideration of an abstract syntax, rather than with the point of view of a programmer building their program at a computer, there isn't much you can actually do. The experiences are already designed (just heavily biased toward language); all you can do is hack something on top of that, and it will just suck compared to what is possible.

To an extent IDE and

To an extent IDE and language design are not separable, yes, but most PLT results translate readily. You don't need a totally different language semantics because you're working with an IDE rather than Emacs.

Most PLT translates readily

Most PLT translates readily only if your standards are very low. You could always put something like Eclipse/JDT on top of Java, but it will never be very good, and definitely not in the realm of what we can do.

I don't think that's

I don't think that's specifically related to the interaction of Java the language with a Java IDE. The combined experience of Java language + Java IDE will never be very good because Java the language isn't very good and thus sets an upper bound on quality, not because Java the language is particularly unsuitable to make a good IDE for.

Look at it this way: there have been many lifetimes of work around PL type systems and semantics by very smart people. Is it really the case that the experience is so intertwined with the semantics that we should throw all that in the dustbin and start over? Even with the advantage of designing for the experience from the ground up, that's setting the bar extremely high.

Is it really the case that

Is it really the case that the experience is so intertwined with the semantics that we should throw all that in the dustbin and start over?

Yep. Well, at least as researchers, we should look for better solutions even if that doesn't involve existing dogma. It is quite amazing that type systems research has languished in correctness land for 30-odd years when it really needed to be focusing more on assisting programmers in real time. Very lopsided.

And I guess all I can do here is provide real examples (by doing and communicating) of what a programmer-oriented type system really is. It doesn't mean that the programming environments we eventually use are designed from the ground up, but the research artifacts we use to explore and communicate these ideas definitely can be if necessary (there is not a high overhead in going ground-up if you never aim for permanent production, while the "let's modify language X slightly" approach has limits).

Take a look at Agda...

You'll need to be willing to push past the Emacs interface, but it does a lot of assisting programmers in real time. (And you'll approve of the fact that the language is designed with the assumption that all programs are written using an IDE.)

There's a fair amount of stuff that's standard in IDEs (such as on-the-fly typechecking and hyperlinking everything), some stuff that's somewhat unusual (using types to generate the skeleton of a program for you; REPLs that let you evaluate any term, including ones with free variables), and some stuff that's pretty wild (running theorem provers to straight-up guess the program you meant to write).

I've seen Agda's demos

I've seen Agda's demos before (someone, you?, pointed them out in the past). It would be interesting to hear the story of its design from an experience perspective, rather than a PL perspective.

Rustan Leino's Dafny also counts. Hmm, that's interesting, there was a workshop on Formal IDEs in 2014 (see The Dafny Integrated Development Environment).

Thanks

I didn't know about that workshop -- the slides are all online, so thanks a lot!

or Epigram

for an IDE that is entirely inside Emacs :-)

Can still team up

You'd need to find some PL theoretician who's willing to give a theoretical basis to the experience you design.* Maybe you have few people to choose from, but as long as you can make the number non-zero, it could work.

Or do you claim that the experience you want can't be backed by a theory? Martin Rinard's work is extremely odd yet (I heard from people I trust) theoretically very rigorous. And in fact, there is already some theoretical work motivated by usability problems — a colleague just told me about "Practical SMT-Based Type Error Localization" from ICFP, which tries to address the bad type errors from Hindley-Milner.

Otherwise, the SE community is more sensitive to the kind of topics you like. The problem there might be that they favor overly rigorous experimental work, and doing something innovative so rigorously is too hard—IIUC, some cognitive scientists have to do extremely boring work for the same reason.

*The problem there is that PLT guys aren't beginners, but more likely advanced programmers (to the extent they program), and lots of beginner-oriented shortcuts make life worse for those advanced programmers who expect consistency. Subjectively, and after a very steep learning curve, I've had my best programming experiences in Haskell, while I use Scala for crazy type system hackery that Haskell/Agda/Idris don't support well.

The design vs. science

The design vs. science debate is pervasive in all fields, including PL as practiced today (heck, even HCI has this debate) - it doesn't go away by forking a new field :) The question is: what does rigor buy us? Does it actually move the world forward, or is it just ceremony? I've been tricked by way too many papers with interesting design ideas, only to find that the main content of the paper is a formalization with no interesting twists or elaboration on the design. Sometimes the papers acknowledge that their real contribution is design, and that the rigor section is there to appease the PC gods and can easily be skipped. Rigor can of course be useful, but rigor alone does not make knowledge useful, and knowledge can be useful but not rigorous.

If you are going to be a successful academic, definitely pick a field where rigor comes easily; e.g. experimental cognitive psychology rather than HCI. If the path to rigor isn't clear cut, then you are stuck faking it, and/or people will question the scientific merit of your work (because, well, it might not really be science, and if you aren't wearing a white lab coat, who are you?).

There is plenty of PL in my current work (typeless, virtual time machine), I just don't focus on it: it is a detail to an end (the experience of live programming), and not the end in itself. The problem then becomes, do I spend my time communicating details, or do I keep working toward the end goal? The answer is yes, of course, which isn't very satisfying. Teaming up is all fine and well, but everyone has their own research topic, and you are mostly responsible for developing and communicating your own ideas. You are welcome to build off of someone else's ideas or systems, but cooperation only comes with aligned goals.

delegate

you need a student / intern to grok and publish for you. :-)

More acceptance in SE & HCI & DB than traditional PL

Looking at our group's work in the area, we found a lot of acceptance -- and understanding -- in SE & HCI communities due to their background in tricky human topics. E.g., OOPSLA & UIST, and heck, if you write well & demo on-target, systems conferences love applications. (I know less about the European conferences, e.g., ECOOP.)

The POPL/PLDI world isn't really equipped to deal with this stuff. Basically, when compared to great papers with proofs or optimization numbers, reviewers are unsure of the corresponding bar for a paper without them. That's changing at the level of individual community leaders getting involved in the messier stuff, but unless "you're in" enough to get such champions, I'm skeptical at the review level.

A useful question is.. what's the point? Non-mainstream papers at POPL/PLDI seem more about prodding than heightened discussion. Ex: a POPL crowdsourcing paper may be fun, but to get good feedback, the people who truly understand the literature on stats, crowdsourcing, surveys, practicalities, etc. aren't there.

Edit: An alternative perspective is that, as computing has reached billions of people, programming has entered almost every speciality. Consequently, a lot of the interesting stuff in PL... is outside of the traditional PL community. For example, I'm thinking a lot about interactive programming nowadays, e.g., notebooks & visual abstractions, but these end up being more in the statistics & DB & infoviz communities.

Who cares about

Who cares about publishing...unless you are playing the game of chairs (tenure track), winter (paper submission season) is not coming. But you want other like minded people to talk to, to share ideas, present your work. You want to go to a conference and not be bored to death by 90% of the talks. You don't want to be the only one cringing when someone's PL demo is suffering from 5 second turn around times.

Consequently, a lot of the interesting stuff in PL... is outside of the traditional PL community. For example, I'm thinking a lot about interactive programming nowadays, e.g., notebooks & visual abstractions, but these end up being more in the statistics & DB & infoviz communities.

Right! But that has been true for a few years now, especially as PL has become less about programming (or was it ever?).

I'm heading off to strangeloop in a couple of weeks, so we'll see how that goes!

dominoes

Not so much about getting published, but being with the right people for peer-review and then in-person.

Strangeloop is fun! I always liked seeing the variety of "real" problems. Probably for the same reason, I'm not always as thrilled by the solutions ;-)

One oddity of being a PLer in non-PL lands is that it becomes more of a community of practice, with depth being in the problem at hand. So, still not necessarily the feedback you're looking for..

Shipping

It's more important to ship a working system now, but there is always more to do. Having actual users (or at least something like-minded people can use and evaluate directly) would go a long way toward creating virtuous feedback loops.

Who works in PX?

By which I'm guessing there are no relatively formal lists: research groups proclaiming their PX output, journals, conferences etc. So then I would think about ways to build a more informal list from other starting points:

Which five papers best express current research directions in PX?

There is nothing seminal, it

There is nothing seminal, it isn't a formal field. But I think Bret Victor is a great start (although it needs a bunch of clarifications).

seminal

Really? The Smalltalkers, Brad Myers, Alan Cypher (or other PBDers)... and more recently, I'd add the empiricists, even if they are more text focused :)

If we are talking about

If we are talking about seminal publications, there isn't one that starts a field of research, though there are plenty of tries. The empiricists are busy enough trying to start their own thing :)

Perhaps the closest might be the Self "Programming as an Experience" paper (ECOOP 95); abstract:

The Self system attempts to integrate intellectual and non-intellectual aspects of programming to create an overall experience. The language semantics, user interface, and implementation each help create this integrated experience. The language semantics embed the programmer in a uniform world of simple objects that can be modified without appealing to definitions of abstractions. In a similar way, the graphical interface puts the user into a uniform world of tangible objects that can be directly manipulated and changed without switching modes. The implementation strives to support the world-of-objects illusion by minimizing perceptible pauses and by providing true source-level semantics without sacrificing performance. As a side benefit, it encourages factoring. Although we see areas that fall short of the vision, on the whole, the language, interface, and implementation conspire so that the Self programmer lives and acts in a consistent and malleable world of objects.

So nothing is new under the sun of course. I think I would have been much happier in the 90s :)

Or earlier...

What about the work by Seymour Papert? I know that it is focused on a more specific problem domain: the experience of children learning to program. But does it add anything toward defining a body of work relating to PX? Many other branches have sprouted from it over the years...

Both Bret Victor and Chris

Both Bret Victor and Chris Hancock are heavily influenced by Seymour Papert (as are the live coding crowd; it's our common lineage). But Mindstorms is a bit too philosophical to get much out of it directly (Bret Victor's essays and Chris Hancock's dissertation, on the other hand, have more meat, but they also come in 20+ years later).

I wasn't aware of this

I wasn't aware of this history. Thanks!

Engelbart

as well, I should think / hope!

If there was a portable self system

for windows/android/ios, not just linux and osx, then I'd use it.

I guess self is an improvement over Smalltalk.

Replacing objects with slot based systems seems like a reasonable way to simplify composing user interfaces. The horribly over-engineered model-view-controller model was smalltalk's biggest mistake. It's an amazing failure to take a language as immune from boilerplate as smalltalk and to create a user interface library that makes all design an exercise in repetitive boilerplate - and to do it for the sake of functionality that people almost never want.

The things I would have liked to see added to self are homoiconicity and macros.

...
Though self isn't all that different a language from lua, which IS available on everything.

What's missing from lua systems is the experience, the embedded windowing system, programming system, code browser, debugger etc.

If someone built that on lua, it would look a lot like self. I might do that myself one day.

That is the difference. A PL

That is the difference. A PL person might see Self and think homoiconicity and macros...a PX person might see Self and think Morphic. Here we have both :)

Sadly

a PL person from the 60s or 70s might think "homoiconicity and macros"

A PL person these days would think "get rid of useful things like variables that can iterate, loops that aren't recursion, and being able to specify what order things happen in without monad boilerplate, and add a type system that could choke an ox (but which is capable of formalizing 'sometimes data formats depend on the data preceding them') that dovetails into a proof system that only a handful of graduate students and researchers could ever understand - which makes programs some 1000 times longer and unreadable, that no one would ever use for anything, and that couldn't prove much more than 'adding positive numbers together makes them larger' and 'some programs finally end.'"

PX?

I have to say, this "X" thing, be it "user experience" (UX as an excuse for lame GUIs that do fancy animations) or "programmer experience", which sounds like a chick-flick "feel good" requirement for programming, is, well, grossly overrated...

Certainly, academia and academics can and should be criticised for missing the core point that programming is a commercial activity, not a game of theory. IMHO academics can often get so involved in a highly refined esoteric area that they can miss the fact there are alternate modelling techniques. The application of type systems that grew out of lambda calculus and were designed for functional programming to a more general context is one such example.

But having said that, PL theorists are doing vital work. Some years back, it was said that only 100 people in the world understood Quantum Mechanics, and even fewer Category Theory. But CT is the basis of all modern PL theory and we, ordinary programmers, really need the esoteric studies to continue.

The core problem in software development is the ability to compose things, to avoid continuous re-invention of the wheel and the disgusting labeling of the creative art of programming as engineering. Sure, I want a good programming "X"perience. I want to write any given pattern three times and after that, automate it away.

We, ordinary programmers, need the academics. (Or, since I tend to agree with Lord, specialists in associated mathematics).

My wife is a UX designer and

My wife is a UX designer, and they don't do fancy animations, at least not primarily (animation design is often considered a more specialized non-UX field). UX is real, and it means the difference between a crappy UI and one that works (and sells lots of...say, iPhones). Consumers, and developers, are demanding better experiences, or they simply won't buy your product (or use your PL). But ya, there are people out there who consider technology alone...they don't really grok why that Android phone with more RAM and a faster processor than an iPhone still sucks. (Note, Google is getting better at design. Someday it will catch up.)

I say there is enough room for PL and PX. I'm not denying the value of PL, just that PL has disavowed much of the task of improving programming in exchange for "branching" out to other activities while focusing on language abstractions. A PX academic can also be just as academic as a PL academic*. It is not about missing the point of programming as a commercial activity, but rather focusing on one narrow aspect of programming (languages) rather than other aspects that make up the whole programming experience.

* unless we reject that design-oriented fields can be academic at all, which might be true. But there is still a need for high risk exploration (in both PL and PX), which has traditionally been the scope of academics who have the luxury of pushing envelopes.

UX design

I have no doubt people who do UX do real work, but it isn't programming. It's more psycho-artistic: what gels with the consumer mindset. It's why I use a Mac in preference to a Linux box, despite the lack of a good package manager. I have to choose between a clean GUI with posix support and a huge mish-mash of bits and pieces with multiple (but mostly good) package managers; well, I've gone for the daily UX as a priority over the occasional need to grab software.

However, IMHO it's an almost entirely unrelated art form. PL is mathematics and is largely isolated from psychology. Not entirely, since programmers have to USE languages, but most of the usability comes down to a rather abstract concept: composition.

User interfaces, on the other hand, could surely have a solid conceptual basis, but there is basically no theoretical foundation for them, and isn't likely to be for quite some time. There are some highly specialised algorithms for things like automatic layout: the iPhone actually uses some kind of linear constraint optimisation that you have to fiddle with to get the result you want. TeX does something similar to calculate line breaks.

So again: I'm not saying UX isn't important if you want to sell a product; people buy cars because they look good, not because of their engineering qualities. But the workings are based on mathematics, and the commercial side of that is all about composition. It's a different field entirely from getting the appeal of the body shape to fit the human eye. (Why are all Ferraris red??)

A UX academic is like a psychology academic. Their field is so immature they need to be in the Arts Faculty, not the Science Faculty (except perhaps neuro). PL design is hard core maths. UX design is all about fashion.

in a word,

no.

> UX design is all about fashion.

HCI studies UX academically

HCI studies UX academically (e.g. empirical human behavior). They might not be able to study design very well, but they definitely practice it. Design is not art; it is actually a peer to art and science, with its own methodology (design thinking) that is directed at solving problems (unlike art, which is a form of expression). Obviously, there is some overlap, and some design fields include a bit of art (form) as well (industrial design and fashion). But interaction design (American definition) is pretty logical; you spend all your time wireframing and doing user research. Design doesn't just say that the car is red, but where are the doors, do they open out or up, is the steering wheel in front of the driver or behind her? Aesthetics are a very small part of design.

There is more to a PL than clever engineering. Knowing what to build is as important, or maybe even more important, than knowing how to build it! (Design is the declarative and engineering is the imperative....yikes!). So even PLDI has a D in there that used to mean something, and it didn't mean hard core math.

UX design is an art.

Because there is no clear "better" with UX, it seems to be an art. Algorithms can be studied mathematically (for time, space efficiency etc.), but programming and PL design also seem to be artistic like UX, as better depends on opinion. Knuth called his book TAoCP for a reason.

Scientists and engineers

Scientists and engineers tend to poorly understand both design and art, which are quite different from each other! Just because a different problem-solving process is used than the one you are used to doesn't make design an undirected, expression-oriented process that can be called art.

Art and Software

I look for aesthetic beauty in code. A good program is elegant and well structured. However, not everyone sees the same things as beautiful. Art can be found anywhere, and it is the act of calling something art that makes it artistic. Simply putting code in an art gallery makes it art, and choosing what to display defines the aesthetic. Knuth's TAoCP is a book about Art (with Knuth as curator) just as much as a book of Picasso paintings with commentary is a book about Art. Just because something is useful does not mean it is not art.

However I think Knuth meant an art in the same sense that a bachelor of art (BA) degree is awarded, and it was in that sense I meant it was an art. It can also be art though, if we choose to display it as such.

About goals

Art in its truest sense isn't meant to solve problems but to make statements. Design, on the other hand, is problem-oriented. You can add artistic aspects to a useful artifact, but the art is separable. Your tidy code is not art because you aren't trying to make a statement with it; it is just tidy.

Of course, the terms art, design, math, and science have been overloaded beyond recognition.

Marcel Duchamp

I'm afraid you cannot tell me what is and is not art (that's part of what makes it art). As Marcel Duchamp presented an upended urinal ("Fountain") as art, even though it is a totally utilitarian thing, so I display an implementation of the GCD algorithm as art. It's not adding "artistic aspects" like a pretty font that makes the GCD algorithm art, but the way you look at it. Open your eyes, and say "This is art" when you look at the algorithm, and you might get it. Art is in the eye of the beholder; the creator does not have to be making a statement. Artists making a statement are often more likely to be involved in artifice than art. Many classical painters were simply painting a portrait, in the way we take a family photograph, as utilitarian as the urinal.

By that def, then anything

By that definition, anything is art, which is not very useful. So if you want to, call anything you want art; art is in the eye of the beholder, of course. But if you ever get to work with a professional designer, the quickest way to piss them off is to call their output art. It's not art, it's design. It is as if I called your program a magical spell because I didn't understand what went into creating it.

Here is the header on art in Wikipedia:

Art is a diverse range of human activities and the products of those activities, usually involving imaginative or technical skill. In their most general form these activities include the production of works of art, the criticism of art, the study of the history of art, and the aesthetic dissemination of art. This article focuses primarily on the visual arts, which includes the creation of images or objects in fields including painting, sculpture, printmaking, photography, and other visual media. Architecture is often included as one of the visual arts; however, like the decorative arts, it involves the creation of objects where the practical considerations of use are essential—in a way that they usually are not in a painting, for example.

...

Until the 17th century, art referred to any skill or mastery and was not differentiated from crafts or sciences. In modern usage after the 17th century, where aesthetic considerations are paramount, the fine arts are separated and distinguished from acquired skills in general, such as the decorative or applied arts.

So perhaps we are bumping up against the definitions of fine, decorative, and applied arts?

Professional Designers

I work with professional designers every day, and some (but not all) of their output I would consider art. I think they would be proud to have produced something considered art and have it displayed in a gallery.

As to the first point, yes anything can be art, but not everything is art. Art is in the appreciation not the process by which the art is produced.

Edit: I disagree with the Wikipedia definition of art above. That final statement about practical considerations of use not occurring in a painting is clearly rubbish. A painting is there to allow the observer to see something (a person, location, or imagined concept). A painting with invisible paint would obviously be pointless. What we consider high art now may have been something completely practical prior to the invention of photography. In fact photography forced a re-evaluation of art, which eventually led to modern art.

You should try it and see

You should try it and see what happens. I dare you :)

I made some edits; art is a broad term that includes the applied arts, which are often (but not always) what we mean by visual design. The problem is that art to most people means fine art.

Edit: it is ok to disagree with wiki, dictionaries, college departments, and so on. But it does make having conversations difficult when what you call an apple I call a banana. Another one:

In Western European academic traditions, fine art is art developed primarily for aesthetics, distinguishing it from applied art that also has to serve some practical function.

Note that word meaning is attached to tradition. You can still disagree with it, but maybe it would avoid arguments with those who follow Western European academic traditions :)

Imagine if PL were like that: what is a programming language.....or what is a program, or what is a computer? Damn, we could argue about what science is all day, too.

Science is easy

Science is easy to define: it is the application of the scientific method. If something cannot be disproved by experiment, it is not science.

Some UX therefore seems like science, prototypes are tested with users. Good UX can focus a lot on metrics (time to find and complete operation X). We test UX with a group of users, and a list of tasks for them to accomplish.

Programming is art and UX design is a science :-) not what I thought I was going to say.

Empirical

If you add "empirical" to the beginning of most fields they become more scientific. I'm not sure exactly what empirical programming would be although I have a fuzzy warm feeling that it would seem quite scientific. Maybe it has something to do with working programs that people sell for money or something... :P

Science is defined by

Science is defined by application of the scientific method. Design generally relies on design methodologies (e.g. design thinking). Attempts to apply the scientific method to design haven't been very successful, though every 20 years or so there are attempts to sciencefy design.

User testing and A/B testing have been applied to design as well, along with other heuristic approaches for domains where empirical methods don't work so well (e.g. expert walkthroughs for professional tools). But this isn't necessarily science, especially when done from the perspective of building a product vs. a scientific study of existing designs (the former takes shortcuts because they aren't interested in doing good science, just in producing a successful product).

i dare you

> A painting with invisible paint would obviously be pointless.

i bet you could google up a few examples of things along those lines. Gosh, they should christen it something like, i dunno, "Conceptual Art"? :-)

New Clothes

Well if you're into paintings made with invisible paint, I've got this wonderful tailor I should introduce you to. The most amazing clothes that only intelligent people can see. I'm sure you would agree they are the most wonderful clothes you have ever seen.

if you don't understand art, stay out of the auction house

All this thread does is tell me that some people here really do not get Art in the widest, historical, modern sense of the term. I have not been saying anything radical or wrong or weird by those standards.

learn you good up some art.

It is more tradition than

It is more tradition than standards, but I otherwise agree. I'm totally staying out of the auction house.

Are you buying?

Sure, i have got a couple of blank canvases I can put on eBay if you are interested in bidding for them? I am not quite sure which one of us should be staying out of auctions.

i apologize for still not getting the point across

and for this getting so OT. i guess we should kill the thread. :-)

Art for nerds, finally explained!

Take the oldworld as an input, manipulate the oldworld through some act like creating a new object within it, thereby creating a newworld. Hand over the newworld to a receiver primarily for aesthetic purposes. The receiver should be enabled to have an altered experience of the oldworld together with the meta-knowledge of the intentional production of such an experience. From now on the newworld receiver is able to see the world through the filter of the artwork, which changed the oldworld.

This doesn't preclude that the GCD algorithm or a natural object like a tree causes responses very similar to artworks on the side of a receiver who experiences them. Actually an artist could seek the GCD-algorithm experience and either try to reproduce it or reflect on it through another object. A lesser artwork would be inferior to the proper experience of the GCD algo. Also, an artist who creates the original source of a new experience is usually perceived as of higher rank than an artist who acts within the paradigm created by another or who just tries to reproduce their work. However there are exceptions, in particular with work which sinks down to a predecessor status, one which didn't yet live up to the full possibilities realized by a later artist.

Be careful what you wish for

it should properly be seen as a subtle blend of psychology and extreme violence.

er, i mean: nuance. https://www.google.com/search?q=google+color+blue+resigns

There are many kinds of art

It seems that implicitly you have created a dichotomy: fields in which "better" is clear, and fields in which it is not. I would suggest that there are three options:
* Fields in which metrics are universal (maths, CS in the sense that smaller and faster can be measured)
* Fields in which there are no metrics ("fashion-driven"), and this is how much of the arts looks to an engineer, although their perspective is not entirely objective.
* Fields in which there are stable local metrics - families of problems with better solutions. I would call these context-sensitive metrics and ruminate wildly that "designers" have been trained to juggle these contradictory and non-universal approaches.

To an untrained observer the 2nd and 3rd cases would look the same. Whether or not they are is a much harder question. Of course I think the issue is much more interesting because, if you take a "real" / "hardcore math" / "important" subject, such as, say, our own, then you can see a mixture of all three areas within it.

The only fields where the

The only fields where the metrics are clear are those based around specific problems (e.g. the error rate for a speech recognition benchmark). PL is really not such a field, where differences in opinions are mostly glossed over for the sake of academic harmony.

Most of us are actually working on problems with no clear success criteria. Aka wicked problems.

Really, the only thing that differs is methodology (science, math, engineering, design?) and how we pat ourselves on the back (I sort of won, yeh for me!).

Art, Science and Engineering

How would you describe the difference between a bachelor of science (BSc), a bachelor of arts (BA), and a bachelor of engineering (BEng) degree?

At a higher level everything is the same. Hence they can all go on to become doctors of philosophy (PhD). So there is both a di-(tri?)-chotomy, and at a higher level there is not.

Philosophy, I forget about

Philosophy, I forget about that one. We are doctors of philosophy, but I don't feel very philosophical.

Well...

To be honest I would not stir that particular mess. I've got the definitions of a BSc and a BEng pinned up on my wall as it does influence setting coursework and grading decisions from time to time. I've not come across BAs in the wild so I know nothing about them.

Tacking wildly back towards the original point: is it true that there is no "better" in UX, or is it more accurate to say that there is no global "best" within UX, and that each solution can only be evaluated within the context of the particular problem that spawned it?

Partial orders are very different from either total orders, or a complete lack of order. It's not clear to me where UX lies within that landscape, and I've seen some really interesting, and some really terrible work, both labelled as UX.

Galloping back wildly towards the retreating horizon behind us: if a tool to increase programmer X (productivity? happiness?) cannot be measured using some form of global, independent measurement, then it is easy to argue that it is not science. It is much harder to argue that it is not interesting, or that design does not have a role to play in appeasing programmer perceptions of X.

objective subjectivity

There is no universal best, of course. This should, I hope, be patently obvious to everybody here with respect to simple ASCII programming languages at least! Do you want to use wildcards in your import statements, or forbid them utterly? What color do you want the bike shed to be? Is correctness more important than performance?

For "Design" in general, there are various scientific data which can be used to make informed decisions. There are always contextual issues and nuances. There are also too many different frames of mind in the world. There is also data that can ruin all the fun.

So you can point out that in [these situations] with [these subjects] trying to do [this task] then Design A is [bad in these ways] whereas Design B tests out as [better in these ways]. But of course that's just a sample of all possible variables.

experience in what scope and for which programmers?

It's normal to be disappointed and disillusioned over time, after you delve and find things don't quite work as advertised, even broad things like vocations. (This is basically a cliché, going back at least to Voltaire's Candide, which parodies Pangloss's philosophy that this is the best of all possible worlds. "Pangloss deceived me cruelly when he said that all is for the best in the world.")

When you use PL as more than just an acronym for programming language, do you mean an industry zeitgeist or something more specific?

Does experience include subjective things going on in the mind of a programmer? Or just objective things we can see about how user interface software behaves? (Clarity is not an objective quality.) If subjective mental models are included, does it only apply to a first author? Or to each subsequent programmer who comes along and reads code afterward? How much is coding like a writer communicating with a reading audience later? Does experience design apply to all phases?

People carve up the world different ways when deciding what needs a problem-solving fix, often assuming certain givens as context are beyond question (even if perceived). A model goes something like: we always do X, then problem Y happens, so we work out solution Z. That X is always done is beyond question. What if you just eliminated X? That might fix Y very elegantly. But you're a fish and don't consider the possibility this pool of water didn't need to be here. No water? Heresy. Anyway, often a larger scope is ignored, about how you get to that situation.

Everyone considers their role necessary; proximity makes immediate problems loom large, while things other folks worry about are dim and small. Unless you develop IDE user interfaces, maybe ways to optimize creation never occur to you. Instead reasoning might be: now that I have this big pile of code text to process, which came from somewhere irrelevant, how can I optimize structure based on assuming this given and what I want done to it?

Similar myopia comes up in lots of places, including how often processes start and stop. It's easy to solve problems assuming your process has been running since the beginning of time, and that termination is a weird eschatology problem of theoretical interest only. So you think: what kind of object interfaces are best, assuming my process is the only one that matters? Since my language is the only one that matters, what is the best way to address this odd situation? That the world will try to route around you doesn't occur to a lot of people.

PL = the community of PL

PL = the community of PL academics/researchers, as defined by pl-enthusiast, which I think is actually quite representative.

"Experience" here means the "experience" of programming; i.e. what the programmer does to create a functioning program or component. That includes (a) designing their programs, (b) inputting them into the computer, and (c) debugging them where (a), (b), and (c) don't occur in any particular order and are often repeated in a loop. But it also involves reading code, reusing code in your programs, writing code to be read and reused, and all the other tasks that you might care about while programming. It might not include more broad software engineering problems, but there is obviously a lot of overlap (programming is a part of SE, but the venn diagram of PX and SE are just overlapping, like PL).

Really, "programming experience" is just short for "programming", but that term is a bit too vague (I study programming and design tools/languages to make programming better).

Mental models are obviously involved but are difficult to describe in a rigorous manner. I don't believe we can easily use these directly, and I'm definitely sure that I cannot use them directly.

Problem solving where constraints are added or removed "to see what happens" is part of the design thinking process. It is exactly how designers are trained to approach wicked problems that lack clean solutions.

Is PX macro to PL's micro?

I seem to think solely about PX as opposed to PL lately. (The usual stuff, with more focus on answering what instead of how.) I don't post about it since it seems off topic, being mostly environment design for PX, dwelling on concurrent lightweight process tools.

"Experience" here means the "experience" of programming; i.e. what the programmer does to create a functioning program or component.

Hmm, so that includes contextual stuff perhaps outside the scope of what a language addresses explicitly? That would include emergent distributed relationships you can build out of whatever language primitives are at hand. Concerns seem more macro than micro in scale. (Economics distinguishes between micro and macro, but we don't in programming.)

Might have mentioned this

Might have mentioned this before, but it's relevant and confirms part of your thought.

Richard Gabriel [1] explains that the PL community stopped studying programming systems (like, I'd say, Lisp, Smalltalk) and switched to studying programming languages. Even Scheme fits in the PL tradition, and Racket even more so.

For another example (not by Gabriel): Go has been criticized as boring by PL people (and I agree), but what's cool about it is the programming system (even though not necessarily for what you care about, not sure). I realized this when watching Gabriel's presentation, and a couple of days later, Pike agreed on Go being about the system.

[1] "The structure of a programming language revolution", https://dl.acm.org/citation.cfm?id=2384611

Ah, that is a great find! I

Ah, that is a great find! I guess I didn't read that paper closely enough (and I was at the presentation too...doh!).

A lot of interesting PL systems come from people who aren't into PL, and that is not an "in spite of" :)

"Engineers build things;

"Engineers build things; scientists describe reality; philosophers get lost in broad daylight." Definitely should go in my quotes file, right next to Richard Cavendish's observation (A History of Magic, 1977), "The religious impulse is to worship, the scientific to explain, the magical to dominate and command."

Engineers build things

Rephrase to match reality

"Engineers build things, scientists describe models that they hope has some semblance to reality, philosophers sit back and laugh at the futile hope of scientists"

Designers gather

Designers gather requirements (functional, etc.) needed to satisfy the customer and other relevant external parties. Artists express themselves.

The reason designers often take offense to their output being called fine art is that they are solving problems, they are not expressing themselves. Insinuating that their output is artistic is to accuse the designer of not focusing on solving problems. Form must follow function.

The Cavendish quote

The Cavendish quote admirably focuses on what practitioners are trying to do without pushing opinions about how well they succeed. The Gabriel quote ridicules the effectiveness of philosophers, a criticism that has its place but is not in the same class as Cavendish's observation. An above comment ridicules the effectiveness of scientists. I'm also cynical about this description of designers. Ideally, perhaps, that may be what designers do, but then, ideally scientists explain the world and philosophers explicate Truth. Under various circumstances, gathering requirements supposedly needed to satisfy external parties degenerates into devising excuses to justify what one wants to believe (the "statistics" element of the triad "lies, damn lies, and statistics").

Anything consummately well done — including engineering and science — involves an element of artistry.

These labels are just

These labels are just ideals; we will often fall short of them, and we might mix and match them as we see fit (the researcher-designer-engineer exists). Few of us are purely X, and we often do Y better than X.

But if you are doing X and are judged by someone else on the basis of Y, somebody is seriously wrong.

As stated before, everything seems to be art; the word unqualified is not meaningful. The art of science, the art of math, the art of philosophy, the art of formal verification, and the art of genocide. Likewise, design is a similarly broad word: the design of a proof, the design of an argument, the design of a massacre. But then a UX designer and a fine art painter are much more specific than those meanings.

Good vs Great

Good design gathers requirements and satisfies them. Great design innovates and exceeds expectations (which means it to some extent does not rely on requirements). To quote Henry Ford:
If I had asked people what they wanted, they would have said faster horses.
To return to art, it is not the creator who decides something is art, but the observer (or collector). I might be the only person in the world that considers my own scribblings as art even if I create them with intent, but many people agree Marcel Duchamp's "Fountain" is art even though it was mass produced in a factory. It is the displaying of something as art that makes it art, and therefore not everything is art. To put it another way, everything has the potential to be art, but it only becomes art if a significant number of people consider it to be art. New art movements can mean that what is not art today, may be considered art in the future, and vice-versa. Great art is only great because of a significant consensus. Even a fine art painter is only so because of consensus. I might consider myself the greatest fine-art painter in the world, but if nobody else recognises me as such it is clearly not the case. Therefore fine art is not an individual act of expression, but a collective act of observation and appreciation.

What is art?

A tricky question. Should we be discussing this here? An easier question.

Is PL Design Art?

Is PL design an art, or is a PL art? I think both of those are sufficiently in scope.

good PL done well is true

design.

I want to smash the notion

I want to smash the notion that PL design is somehow an endeavour of personal expression. It is just problem solving from a different perspective.

Personal expression? Seems

Personal expression? Seems rather doubtful imho that art is personal expression generally, either. There's a profound sense for the artist that they're channeling something outside themselves, call it "muse" or whatever other name you like. It seems reasonable to suppose the artist is (being coldly abstract about it) perceiving a pattern and expressing it, but if it's only in their head the result won't impress others as art.

People are formed by their

People are formed by their environment. Much art goes unappreciated since what is in the artist's head cannot be appreciated by anyone else. It is no less art even if it isn't popular. There are plenty of unsuccessful artists for every successful one we hear about.

This is why words are dangerous. One could say "that is great art" and have one meaning in mind, while other people around them mis-interpret what they say in ways they did not intend.

So...OOP's name-oriented meaning of abstractions is dangerous and we should all be doing functional programming where meaning is defined solely by value.

?!

> functional programming where meaning is defined solely by value.

I can't tell if that is tongue in cheek. I don't think that statement is really true at all. :-)

Yes. I love objects because

Yes.

I love objects because of their ability to attach vague, poorly understood meanings to names (just like natural language). We can talk about problems without fully understanding them, allowing parts of the problem to be encapsulated or uncovered during the development process.

Functional programming in its purest sense, however, is all about truth. You can't lie or be vague about meaning...it is math after all!

Being functionally vague


a:*
x:a

or

function add( a:Int, b:Int ):Int {
return a * b;
}

QED

I don't get it.

I don't get it.

i lied

it was an example of How To Lie in Functional Code. :-)

I wasn't complaining about

I wasn't complaining about your initial post but rather that we seemed to be getting rather far afield. I agree with you about PX being important and design not being a matter of personal expression. I don't see the division between PL and PX as strongly as you do. To me, the language is the most important part of the PX. I find your trait inference stuff much more exciting than the real-time scrubbing. Good PL papers solve a PX problem that exists in the language.

Trait inference is just a

Trait inference is just a detail, and beyond its use in type inference, it is not something that I have connected very well to programming experience yet (hey, these objects can auto extend traits, but why would I do that beyond a few examples?).

The real-time scrubbing, on the other hand, represents a positive programming experience that PL abstractions (in addition to environments) should aim to support. It is a problem that needs to be solved vs. a solution that needs a problem.

hey, these objects can auto

hey, these objects can auto extend traits, but why would I do that beyond a few examples?

Heh, well, yeah. I kind of assumed that you had started from the PX you wanted and worked backwards to typeless before you started building it. But what I mean is that's the kind of thing that I find interesting.

Scrubbing variables seems shallow to me. Magic constants are terrible even if you can scrub them. You don't want to add an incentive to use this anti-pattern.

You never start from the

You never actually start from the beginning or the end; you start from the middle and go back and forth. Some aspects of typeless were meant for the experience, and some were designed for other experiences (code completion working in the maze of twisty classes), but my ideas on experience have evolved and I'm rethinking some of my past decisions. Mainly...see the last paragraph.

You can abstract those constants once you have set them to create relative relationships. See the later youtube videos in the APX thread.
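
For instance (a made-up sketch in generic TypeScript-ish pseudocode, not APX syntax or anything from the videos): scrub the raw constants first, then capture the relationship you noticed as a relative expression, so later scrubbing preserves it.

// Before: two independent magic constants, each scrubbed separately while
// watching the live output until the layout looks right.
//   let margin  = 12;
//   let padding = 6;

// After: the relationship noticed while scrubbing is captured explicitly.
// Scrubbing `margin` now moves both values, so the layout stays coherent.
let margin  = 12;
let padding = margin / 2;

console.log(margin, padding); // 12 6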

And I think you've hit the nail on the head: scrubbing is only useful in certain contexts (simple abstractions like colors and shapes, constants with continuous meaning). How do we design PL abstractions that can be organized into a continuum so scrubbing would be more widely applicable? I don't think live programming will be useful until we can answer that question, and only by focusing on experience could we even discover that question!

Anyways, here was some feedback on Bret Victor's 2012 strangeloop talk:

Like many programmers, though, as I watched this talk, I occasionally wondered, "Sure, this works great if you're creating art in Processing. What about when I'm writing a compiler? What should my editor do then?" Victor anticipated this question and pre-emptively answered it. Rather than asking, How does this scale to what I do?, we should turn the question inside out and ask, These are the design requirements for a good environment. How do we change programming to fit?

That is not how the typical PL-focused researcher and practitioner thinks.

How do we design PL

How do we design PL abstractions that can be organized into a continuum so scrubbing would be more widely applicable?

This is exactly what interests me. I do a lot of exploratory programming, so I wish I could immediately see the ramifications of my choices as I make them.

Amen, but we still need some

Amen, but we still need some good ideas :)

For what it's worth, there

For what it's worth, there is a notion in intuitionistic type theory that generalizes continuity from functions of real numbers to functions over any data type: totality. It turns out that a total real function is always continuous.

Ceiling and floor

I guess that I've misunderstood what you've just written, but expressions built from ceiling and floor functions are total when defined over the reals, but discontinuous.

They aren't total in a

They aren't total in a constructive setting. What happens close to the discontinuity? You need to know an unbounded number of bits of the input number to determine a single bit of the output number.
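
A rough sketch of why (my own toy illustration in TypeScript, using a naive "approximation function" encoding of constructive reals; none of the names here come from any standard library):

// A toy constructive real: given n, produce a rational q with |x - q| <= 2^-n.
type Real = (n: number) => number;

// Attempt at floor: refine until the approximation interval [q-eps, q+eps]
// no longer straddles an integer, so the floor is determined.
function tryFloor(x: Real): number {
  for (let n = 1; ; n++) {
    const q = x(n);
    const eps = Math.pow(2, -n);
    if (Math.floor(q - eps) === Math.floor(q + eps)) {
      return Math.floor(q - eps); // the whole interval sits inside one integer cell
    }
    // Otherwise refine further. If x is *exactly* an integer, no finite
    // precision ever settles it, so this loop never terminates: floor is
    // not a total function on constructive reals.
  }
}

const between: Real = (_n) => 2.25; // comfortably between integers
console.log(tryFloor(between));     // 2, after a couple of refinements

const two: Real = (_n) => 2;        // exactly an integer
// tryFloor(two) would loop forever: every interval straddles 2.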

Ah ha

Of course - so the functions are surjective, but anything that requires equality is partial, as it may take an unbounded number of steps to decide. That is interesting to learn, although I start to wonder if "intuitionistic" was chosen to be a deliberately ironic name...

Definition of intuitionistic

"In the end, everybody must understand for himself." -- Per Martin Löf

I'm not sure if that is

I'm not sure if that is related. The property I'm really interested in is "small change in input -> small change in output", where the input is code and the output is program execution. What that unfortunately means is that the code must be the input of an interpreter that happens to be a continuous function...which seems quite unlikely for any code written in a (non-esoteric) PL that supports abstraction (since using an abstraction is a small change in code that leads to a large change in behavior).

So we have to look for something else.

It is precisely a small

It is precisely a small change in the input -> small change in the output, but it only has value for infinite data (like real numbers), because small might still be arbitrarily big, just not infinitely big ;-)

Some things just are continuous in the sense that you want, like varying the position of a widget on the screen, and some things just aren't and there isn't anything you can do about it. So indeed, searching for a continuous language seems like a fruitless endeavour to me. We need something other than scrubbing to understand non-continuous functions.

An interesting related problem in this area is how you debug a user interaction. Currently we change our program a bit, do the interaction manually (like filling a form, clicking a button), change the program a bit, and test the interaction all over again. This is tedious and slow. You currently have a nice way to get immediate feedback on code changes if the output of the program is purely programmatic; do you see a way to extend it to programs that have user interaction? Simply recording the mouse and keyboard events and replaying them doesn't work so well, since if a button is moved then the mouse clicks in the wrong position.

So you want to record actions at a higher level than that, or possibly at multiple levels. Maybe a programming model with event streams where you transform a low level event stream into progressively higher level event streams would work well for this. I'm thinking of a test suite for an entire application with many interaction tests, and when you edit your code you want to see which interactions may have been broken. Maybe with forks in the interactions, so that you can have a test of actions 1,2,3 and then you continue with two alternatives 4a,5a,6a and 4b,5b,6b and you can simultaneously see the output of both 1,2,3,4a,5a,6a and 1,2,3,4b,5b,6b.
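
To make the "higher level recording" idea concrete, here is a hedged Haskell sketch (hypothetical event and widget types, no real UI toolkit): raw input events are interpreted against the current layout to produce higher-level events, so an interaction recorded at the higher level still replays correctly after a button moves.

  -- Hypothetical event types: what the device reports vs. what the test means.
  data LowEvent  = MouseClick (Int, Int) | KeyPress Char
  data HighEvent = Clicked String        | Typed Char
    deriving (Show)

  -- A toy widget layout; just enough for hit testing.
  data Widget = Widget { widgetName  :: String
                       , topLeft     :: (Int, Int)
                       , bottomRight :: (Int, Int) }

  hitTest :: [Widget] -> (Int, Int) -> Maybe String
  hitTest widgets (x, y) =
    case [ widgetName w | w <- widgets, inside w ] of
      (n : _) -> Just n
      []      -> Nothing
    where
      inside (Widget _ (x0, y0) (x1, y1)) =
        x0 <= x && x <= x1 && y0 <= y && y <= y1

  -- Lift the raw stream against the *current* layout. A recording stored as
  -- [HighEvent] names the button rather than its coordinates, so replaying it
  -- later means looking the widget up by name and synthesizing a click at its
  -- current position.
  liftEvents :: [Widget] -> [LowEvent] -> [HighEvent]
  liftEvents widgets = concatMap step
    where
      step (MouseClick p) = maybe [] (\n -> [Clicked n]) (hitTest widgets p)
      step (KeyPress c)   = [Typed c]

The forked test suites would then just be shared [HighEvent] prefixes with alternative suffixes, and each branch can be re-run against the edited program.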

Right. Continuous functions

Right. Continuous functions aren't a perfect analogy. At the end of the day, we just need the feedback to be comprehensible and ideally hint at whether the programmer is getting closer to or farther away from a desired outcome. Though it is interesting to look at how deep learning trains neurons that really are small-change based, organized into layers for greater effect.

Event streams take control flow and masquerade it as data flow. Really, that just makes debugging harder: you have this stream that looks like data but is really not...ugh! There is a very good reason Elm and FRP don't have good debuggers.

Just bite the bullet and treat events like events. It is less elegant but more pragmatic; once you know you are dealing with control flow, it is much easier to observe and manipulate than if it is pretending to be data (at the cost of composability, of course). APX already records events at the lowest level (e.g. mouse clicks), but since the higher level ones derive from lower levels, the moving button won't be clicked.

But I'm not really sure what higher level recording would give you...do folks really move their buttons so often while debugging?

Personal Expression

All sounds very platonic to me. There is some ideal programming language somewhere handed down, and we are just trying to discover it. I disagree with this idea.

Perfection is unobtainable, and there are conflicting desirable properties for programming languages (static or dynamic types, universal or parameterised types, functional or imperative). In none of these cases is one clearly better than the other. PL design is the art of compromise, and a personal selection (which is a form of personal expression) of these choices defines a programming language. The success of a programming language depends on how the choices made in the creation resonate with the wider programming community, exactly as it is with art.

The mutually supporting

The mutually supporting doctrines of unattainability of perfection and inevitability of compromise, though they may seem unassailable in theory, in practice can foster pernicious complacency. In information providing, for example, one may encounter the notion "objectivity is impossible", a plausible philosophical position, but easily abused as an excuse to not even try to be objective, leading to grossly biased reportage. As for compromise, I've found it more productive to study seemingly unavoidable trade-offs with an eye toward how those trade-offs can be avoided, often by changing the rules of the game. Often the things being traded off against each other aren't even a necessary dichotomy. The ones you name — static or dynamic types, universal or parameterized types, functional or imperative — all seem to me to be the result of assuming a conceptual framework that forces the choice, when a different way of thinking about things might lead to something that defies the distinction.

The Perfect Language

If the perfect programming language were created, the universe would probably self-destruct and be replaced by a more complex and unpredictable one where it is even harder to define the perfect language. Given that this has probably happened several times already, I wish you luck on your Quixotic quest, you're going to need it :-)

Sounds like somebody's been

Sounds like somebody's been reading Douglas Adams.

A quest for the perfect language seems a bit of a caricature of my position. It's not clear to me what "the perfect language" is supposed to mean, which seems a prerequisite to taking a position on whether or not such a thing exists; nor, I think, would it be necessary to take a position at this early date even if one were clear on what it means. Gratuitous over-commitment. One of the questions that naturally occurred to me, for example, as I set about constructing a formal framework for abstractive power, was whether or not there exists a maximally abstractively powerful language. And that's barely scratching the surface of the vast realms to be explored. (For one thing, as I note in a blog post in-development, it's inherent in my current definition of "abstractive power" that it isn't always strictly a desirable property: abstractive power implies the ability to choose how powerful a successor language will be, and if it were always desirable to have more power, why would it be desirable to have the power to choose to have less?)

Perfection vs Compromise.

Your point that things could be looked at in a way that removes the compromise suggests that all choices in PL design could be removed by adopting an alternative view. Therefore if we adopt the view that all choices disappear we have one perfect language suitable for all applications. The only other possibility is that some choices cannot be removed, and then which way you go depends on personal preference. If you are disagreeing with the existence of one perfect language, then you must be agreeing with me that PL design is the act of choosing different compromises, and what makes each PL different is those choices. Whatever is not perfection is therefore compromise.

that's not what I said

I warned against premature commitment to oversimplified doctrines, and I said I found it productive to look for ways to eliminate compromises. I don't know how those things could be portrayed as advocating premature commitment to an oversimplified doctrine that all compromises can be eliminated.

Fragmentation and interop costs

Keep in mind there are costs associated with integrating or even just switching languages. While I think the characterization "perfect language" isn't ever going to fit, I do hope we'll reach a point where it makes more sense to use a specialized library than switching to a specialized language.

I agree, but I don't see

I agree, but I don't see what this has to do with personal expression. A PL designer solves problems by adding and removing constraints.

You are still using a very broad definition of art, so yes, but it is not fine art that artists practice.

Today the term design is widely associated with the Applied arts as initiated by Raymond Loewy and teachings at the Bauhaus and Ulm School of Design (HfG Ulm) in Germany during the 20th Century.

The boundaries between art and design are blurred, largely due to a range of applications both for the term 'art' and the term 'design'. Applied arts has been used as an umbrella term to define fields of industrial design, graphic design, fashion design, etc. The term 'decorative arts' is a traditional term used in historical discourses to describe craft objects, and also sits within the umbrella of Applied arts. In graphic arts (2D image making that ranges from photography to illustration) the distinction is often made between fine art and commercial art, based on the context within which the work is produced and how it is traded.

To a degree, some methods for creating work, such as employing intuition, are shared across the disciplines within the Applied arts and Fine art. Mark Getlein suggests the principles of design are "almost instinctive", "built-in", "natural", and part of "our sense of 'rightness'."[33] However, the intended application and context of the resulting works will vary greatly.

Art is a distraction

These aren't mutually exclusive things, so whether or not parts of what Sean talks about can also classify as an art seems like a red herring. There are scientific, design, and engineering sides to what (I believe) he is describing as PX.

Personal Expression vs Design by Committee

Certainly some PX design might be worthy of being called art, but not all, so I agree this is not the whole story.

I do think a lot of it is personal expression. I don't like some of the things you want to include in your system. That doesn't make you wrong, you are expressing your personal design choices. The language I create might be very different, and you might not agree with my design choices, but that does not make me wrong. We can only be true to our own personal design aesthetic. Success will depend on how many people share our aesthetics.

So I think it is all about personal expression, although we might try and argue that the choices we make are rational; it's mostly instinct with post-rationalisation, except where proper empirical trials are used.

Anecdotally, things designed by a committee have a reputation for bad design. If design were purely a rational choice process we would expect that the more people involved in the design, the better it would be. However, many of the systems with the clearest vision are designed by a single person. This suggests that personal expression is important in defining good systems.

All major products have

All major products have multiple designers working on them; these designers have to work together with each other, the developers, and product managers. It might be personal whether you like the product or not, but a good product is designed to sell to as many consumers as possible (with the knowledge that it won't suit everyone).

Jony Ive doesn't do the design for the iPhone all by himself, nor is it the product of his person. No, the iPhone is done by a team: they throw ideas around, figure out which ones are the best, do some testing, some review, wash rinse and repeat. They do think about the "person" and do apply creativity to problem solving, but it's about the consumer, not themselves.

A designer working alone has more freedom of course, but there is also more risk of creating a design that is about personal expression. The best designers are self-critical and avoid doing that, focusing instead on solving the problems at hand.

If you want to express yourself, become an artist, not a designer.

Microsoft vs Apple

So how come Microsoft didn't invent the iPhone? From what you say, any company should be able to simply gather the requirements and design it. You are back to Henry Ford's faster horse again. The innovation at Apple was not replicated elsewhere, so it must have been a result of the people involved. It is a result of their personal expression, and I bet there were fewer people involved in the initial vision than you suggest. Change the people and you would get a different product.

Apple made a great timely

Apple made a great timely call on when capacitive touch was ready for mainstream use, one that Microsoft missed. Apple doesn't innovate so much as they execute very well. That is not artistic expression, that is just being smart, clever, doing your homework, and taking the right risks. All companies can do that, most just fail at it.

The iPhone is not a work of art. It is a well designed phone, where good design here is a part of good execution that also includes good logistics, engineering, marketing, and so on. Yes, you need good people to execute well; bad people won't. But there is no room for individual personality or ego in that.

Steve Jobs

I think Steve Jobs might have disagreed with that :-) In any case it appears you don't value individual contribution in design, so I don't think I would want to work for you (in-before: I don't think you would want me working for you).

Individual contribution !=

Individual contribution != individual expression. Steve Jobs was a perfectionist, he was critical, but the iPhone does not embody his ego; frankly, I think he might be insulted by such an accusation.

Criticism is a mainstay of the design process, that is much harder to do if personal expression is involved.

Differentiating

I don't think personal expression, nor art, necessarily involves ego. Some Japanese art is expressive but the ideal is egoless; the art is generated in the 'flow' state. I have not mentioned ego at all, so what you say about Steve Jobs is not incompatible with what I said. It's like I am talking about colour, and you keep trying to complain about how it sounds. So I agree with you about not needing ego getting in the way, yet I think personal expression is important. The best designers I have worked with have all had an 'eye' for good design, not something you can teach, but an aesthetic talent.

I bring up ego using its

I bring up ego using its literal meaning. The designed artifact is not supposed to reflect on a person; it should be anonymous. That the designer has an "eye" for good design has nothing to do with artistic personal expression. And plenty of them went to design school and practiced for 10,000 hours to develop that eye; it is as intrinsic as programming is.

Anyways, we should really close out this conversation; whatever terms you want to use, just be aware that designers are first and foremost professionals who focus on solving problems just like we do. They aren't artists, or at least aren't acting as artists when doing design...they are solving problems.

The space between requirements.

So to close the conversation, let's try to define the common ground we do agree on: good design meets all the provided requirements and tries to score highly on objective usability metrics, but I am saying that does not completely specify the design. As the design space is continuous, there are still an infinite variety of designs that meet the requirements and score highly on the metrics, and that is where personal expression comes into it. I think we can agree that personal expression should not override the requirements and metrics in design, but I think there is still plenty of room for personal expression after all the design requirements have been met.

Edit: this is interesting http://priceonomics.com/the-correlation-between-arts-and-crafts-and-a/

As you say, there is room

As you say, there is room for personal expression, just like there is room for personal expression in anything. But art is very personal, email user interfaces and elegant Haskell code aren't.

Let's put it this way: in a design review, if someone says "this artifact has too much personal expression", they don't mean that as a good thing. A designer must be prepared to justify all decisions in the context of the problems being solved (e.g., what problem is that cute flower solving?). What we see as aesthetics is often providing affordance, being legible and consistent, and yes...branding a visual identity (for the app, company, whatever). There are just so many constraints in visual design, which is why programmers often do it poorly (I thought that looked pretty...).

In PL and PX design, this is especially true. Part of the criticism we often get is that design is art and art isn't worthy of formal study. But design is problem solving just like engineering, and it is completely worthwhile to study it.

Of the various meanings one

Of the various meanings one could choose to assign to the word "art", I don't see that "personal expression" is an important or useful enough meaning to deserve having the prime-real-estate word "art" allocated to it. If any random lunatic can do something and lay as much claim to that being "art" as the output of JS Bach, da Vinci, or Picasso, then why have a word for it at all? Successful art in the sense I find of interest is a tapping into universal principles that can then be successfully recognized by others; it's almost the opposite of self-expression. I suggest it's not possible to do anything truly exquisitely well (be it painting, or dancing, or pole vaulting, or operating a backhoe, or programming) without this tapping into something that, from the inside, feels external, whether you call it The Muse or The Force or whatever. Seems to me this tapping into universals is key to great design as to great achievement elsewhere.

Your art has to come from

Your art has to come from somewhere. You are influenced by the universe, and you express that influence.

There are many thoughts on this; e.g.

Now, it is my understanding that design in the commercial sense is a very calculated and defined process; it is discussed amongst a group and implemented taking careful steps to make sure the objectives of the project are met. A designer is similar to an engineer in that respect and must not only have an eye for color and style but must adhere to very intricate functional details that will meet the objectives of the project. The word “design” lends itself to a hint that someone or something has carefully created this “thing” and much planning and thought has been executed to produce the imagery or materials used for the project.

On the other hand, art is something completely separate—any good artist should convey a message or inspire an emotion; it doesn't have to adhere to any specific rules, the artist is creating his own rules. Art is something that can elicit a single thought or feeling such as simplicity or strength, love or pain and the composition simply flows from the hand of the artist. The artist is free to express themselves in any medium and color scheme, using any number of methods to convey their message. No artist ever has to explain why they did something a certain way other than that this is what they felt would best portray the feeling or emotion or message.

(http://www.aiga.org/art-vs-design/)

Maybe personal expression isn't a good word for it. Maybe it is more like communication, but it definitely isn't problem solving.

Code, Happiness, and Learning the Language.

I certainly feel the Haskell code I write is a personal expression. Nobody else would write that exact code. The code represents my thinking written down, like a mathematician's proof, or an author's novel. There are sufficiently many ways code can be written, even when solving a given task and complying with style guides, that it represents original thought (edit: hence it can be copyrighted). Of course we all reach for standard implementations of well known algorithms, but I never cut-and-paste code from other people. Given a slightly broader task, implementing an API or a whole subsystem, there is much more scope for personal expression.

I think an important variable is finding the right person. A person who loves drawing flowers is never going to be happy designing graphics for a defence company. Perhaps the secret of happiness is to find a job where your personal expression matches the corporate values, then there is no difference between your expression and the corporate brand, and consistent design meeting the requirements comes naturally. If a job is too constraining of your personal expression you will find it very tiring, and probably move on quickly.

In any case, I am not suggesting design is not worthy of study. Design is very much like a language, and a bit like if your form of personal expression is writing detective stories in French, you have to be fluent in French first.

from systems to languages

Richard Gabriel [1] explains that the PL community stopped studying programming systems (like, I'd say, Lisp, Smalltalk) and switched to studying programming languages.

That shift coincides with:

  • Mass production of generic hardware leap-frogging well past the performance per dollar of specialized hardware. (This was largely a development of chip fabrication, and the near future may very well reverse the trend. It killed the Perqs, the lisp machines, Wirth's weird Modula machines, and the Xerox Stars.)
  • Visicalc, Wordstar, and some very early games. (That is to say, the atomization of "applications" into mass-market commodities. That new commodity form radically shifted "where the money is" in systems software, including programming languages. It changed who the "customer" is. Indeed, it created a "customer" where, prior, there largely was not a customer. Before (roughly) the 1970s and 1980s, programming languages were developed largely by a patronage system. Since then, programming language design is mostly for people who sell application software.)
  • CP/M, PC-DOS, and UNIX(tm) in the context of micro-computers and workstations. The chip economics of generic hardware severed the formerly approximate 1:1 relation between hardware vendors and operating systems. Generic OS's gained traction alongside generic hardware. The "whole system" concept of (e.g.) lisp machines and Smalltalk lost relevance.

Psychology of Programming

I'm surprised PPIG, the Psychology of Programming Interest Group, hasn't come up in this discussion. Or PLATEAU, Workshop on Evaluation and Usability of Programming Languages and Tools (held as part of SPLASH) -- its focus is precisely the programming experience, as affected by the combination of the language and available tools.

This is a good point. Mostly

This is a good point. Mostly what PPIG and PLATEAU are doing is trying to bring HCI into PL. But PL alone is not very interesting in that context, so they've necessarily included entire programming experiences. However, PPIG/PLATEAU do not focus precisely on programming experiences, just the HCI subset. You don't go to these workshops to talk about new programming experiences, rather you go there to talk about studying and evaluating them.

So there is an interesting relationship between HCI, design, and invention/innovation. HCI is basically the study/evaluation of how people interact with computers, but that alone is boring, so conferences like CHI accept a lot of new-experience papers with tacked-on study/evaluation sections (otherwise the conference would be pretty small). UIST goes even further and focuses on the new experiences purely. PPIG/PLATEAU are workshops small enough to specialize in the HCI of programming experiences, BUT it just happens that there is no corresponding venue for inventive/innovative programming experiences beyond the PL conferences that often don't know what to do with them.

So if you want to communicate/talk about new programming experiences, you can definitely try your luck with a PL conference, but you might do better to just try UIST or CHI, or even ICSE. No one owns PX, everyone owns PX, and PX is niche in all those venues. Or you can try a specialized workshop that pops up from time to time (Formal IDE, PLIDE), or you can just go to Strangeloop.

How should PX be evaluated?

In PL and PX design, this is especially true. Part of the criticism we often get is that design is art and art isn't worthy of formal study. But design is problem solving just like engineering, and it is completely worthwhile to study it.

It may be time to jump to a slightly different aspect of the discussion that I think would be interesting: what is a good way to evaluate design?

For some aspects of programming language work, being able to prove certain theorems about a system is a way to validate the work done. For some other aspects, benchmarks are expected. Reviewers know how to judge the quality of works using these as evaluations. On the contrary, they sometimes report not knowing how to judge work identified as "design work", where the contribution often takes the form of a particular set of compromises navigating inside a large choice space.

Since the focus is on the

Since the focus is on the programmer's interaction with the programming system, evaluation has to be based around the performance of that interaction. In that sense, it's just like HCI except more specialized (human -> programmer, computer -> programming system). Of course, human behavior is incredibly hard to evaluate. Just look at the evaluation of some CHI papers to appreciate that difficulty. So there has been a push to empirical user studies in PL, but we should be careful:

Current practice in Human Computer Interaction as encouraged by educational institutes, academic review processes, and institutions with usability groups advocate usability evaluation as a critical part of every design process. This is for good reason: usability evaluation has a significant role to play when conditions warrant it. Yet evaluation can be ineffective and even harmful if naively done ‘by rule’ rather than ‘by thought’. If done during early stage design, it can mute creative ideas that do not conform to current interface norms. If done to test radical innovations, the many interface issues that would likely arise from an immature technology can quash what could have been an inspired vision. If done to validate an academic prototype, it may incorrectly suggest a design’s scientific worthiness rather than offer a meaningful critique of how it would be adopted and used in everyday practice. If done without regard to how cultures adopt technology over time, then today's reluctant reactions by users will forestall tomorrow's eager acceptance. The choice of evaluation methodology – if any – must arise from and be appropriate for the actual problem or research question under consideration.

(Usability evaluation considered harmful - Greenberg/Buxton)

Perhaps PL's move away from programming systems in the late 90's corresponded to its move toward increased rigor, and since human efficiency is so difficult to measure, programming systems work necessarily became very unpopular (you could still do it, but then you'd be stuck evaluating it if you wanted to publish anything about it!). If you want to publish, better make sure the path to evaluation is not difficult; that is not unreasonable for an academic, but it can lead to a navel-gazing community that isn't really supporting the field.

But you asked a more specific question: not how to evaluate PX, but to evaluate PX design? Academia doesn't evaluate design since its goal is knowledge and not producing artifacts (just like HCI isn't exactly design either). It is not really clear how to evaluate design even in HCI; e.g.

We argue that HCI has emerged as a design-oriented field of research, directed at large towards innovation, design, and construction of new kinds of information and interaction technology. But the understanding of such an attitude to research in terms of philosophical, theoretical, and methodological underpinnings seems however relatively poor within the field. This paper intends to specifically address what design ‘is’ and how it is related to HCI. First, three candidate accounts from design theory of what design ‘is’ are introduced; the conservative, the romantic, and the pragmatic. By examining the role of sketching in design, it is found that the designer becomes involved in a necessary dialogue, from which the design problem and its solution are worked out simultaneously as a closely coupled pair. In conclusion, it is proposed that we need to acknowledge, first, the role of design in HCI conduct, and second, the difference between the knowledge-generating Design-oriented Research and the artifact-generating conduct of Research-oriented Design.

(Design-oriented human-computer interaction - Fallman)

Personally, I think what we usually call design is not really design, but rather invention. Good PL design isn't very publication worthy anyways: you solve problems with the best solution, which is probably not going to be very inventive (someone invented the solution, you just need to connect the dots). Design is more innovation (as Leo would say, turning ideas into money) than invention (turning money into ideas).

So really we might want to ask is how do we evaluate new inventions?

Another question

Are proofs and benchmarks really good ways to evaluate papers?

Hm

I think they are reasonably effective, do you have some specific criticism in mind?

The evaluation depends on

The evaluation depends on the problem being solved by the technique. Sometimes they are appropriate, sometimes they aren't and it's just ceremony since there is no other reasonable evaluation to an otherwise reasonable idea/paper.

Ok

Do you have specific examples of this "just ceremony" thing? I see people saying that from time to time, but I have never had this impression myself, so I'm curious.

I'm sure that these tools of evaluation are not appropriate for all forms of research -- I of course agree that it depends on the problem at hand. I had the impression, though, that Jules' question was hinting at some more fundamental objection that I don't know about.

I don't have a fundamental

I don't have a fundamental objection to proofs or benchmarks, but shouldn't they be a small part of the evaluation? The quality of a paper depends on the quality of the problem it solves and the quality of the solution it presents. Proofs and benchmarks don't really mean much in regard to the quality of the problem, and are often only tenuously connected with the quality of the solution. If you have a paper about a type system for checking property X, then first of all, is property X important? Second, is the type system actually expressive and usable enough for checking X? The soundness of the type system seems to me a very low bar, and the actual contribution is the type system and not the soundness proof. So if indeed a type system should be evaluated on what problem it solves and how well it solves that problem, and if the soundness proof is only a small part of how well it solves that problem, why can't design work be evaluated in much the same way? There isn't a soundness proof, but that was a relatively unimportant evaluation criterion anyway. The important criteria, namely what problem it solves and how well it solves the problem, are still there.

"Philosophical evaluation"

Good venues pay some attention to those questions as well — you need to convince reviewers that the problem is relevant and your solution expressive. But typically you simply argue relevance. To show that you're expressive in principle, you often use examples or case studies and argue they're worthwhile. But sometimes you simply show that some established and expressive language can be encoded in yours easily enough.

It's easy to fail to convince reviewers.

At the same time, such arguments at most establish plausible relevance, which might turn out to be wrong. I can think of two examples:
1) AOP. I buy the problem, but AspectJ never convinced me fully. I guess many had similar objections, but it took at least a decade and several papers to "prove" them. This is really a worrisome story.
2) In ML there was some research on a strange language feature (weak type variables) because of a potential problem — some restrictions that on their face look really annoying. The problem appeared annoying enough to motivate research, but after some years with people using the language, somebody could verify that the problem arose very seldom, so the feature was dropped.

Yet, it's often hard to do better. I'll admit there is an imbalance in research, but there is a place for PL theory that can only be consumed by PL theorists, and how else could you evaluate whether it's relevant? Even C# is based on what could look like abstruse research.

diversity and balance

There is room for all kinds of research, but when a few signals tend to dominate the conversation of a community while there is a need for other signals, then it's time to figure out how to better talk about those other signals.

It is not even about convincing reviewers; many papers that are accepted to conferences still fail to find viable audiences within those communities even if they find plenty of audience on the outside. At that point, you are probably just in the wrong community.

Would it really be

Would it really be appropriate to point out specific papers? They are all over the place, and *cough* my name is on at least a couple of them.

SE

In the top post you mentioned SE alongside HCI as a target community for PX results. There are a few branches above that mention this, but it seems worth pointing out that, as the discussion has progressed towards system-level design as a high-level macro view of programming languages, it does sound like something the SE community is still very interested in.

Within SE it is possible (and indeed common) to take a qualitative as well as a quantitative view of results. The methodology does not entirely overlap with PL venues as the focus is more heavily on the human side: the "experience" part. There is quite a narrow focus on productivity as the main evaluation of experience, but at least there is a broad view of what productivity is, as it remains a quantity that cannot be directly measured.

There seem to be quite high walls between the SE community and the HCI community. I find this strange as, looking from a distance, it seems to me that they both do the same thing: take one aspect of human/machine interaction and attempt to explore the design space around it. In the case of HCI it is made more explicit that the interface is human/machine by looking at the design of the hardware/software in the explicit interface. Most of SE is then wrapped up in one particular textual interface, but it still explores the design space for the tooling around that interface.

Yesterday the Monolithic repository talk from @scale hit reddit so I guess you've already seen it. It struck me as an incredibly interesting talk, despite my natural aversion to process-heavy talks. Language agnostic source control does not fit directly inside PL, but it is somewhat hard to discuss any application of PL at scale without mention of the communication between programmers that source control provides. Dependency management, the interface to the other surrounding tools and of course the versioning problem for libraries all strike me as what you are describing as PX. I wondered what your thoughts on the talk were and where it lies relative to PX?

"How we ended up with microservices"

I haven't had the time to watch this talk yet (and probably never will have; the video format is the problem. Are slides available somewhere?), but from the title and description it seems that you may be interested in How we ended up with microservices, a recent read which I also found very interesting -- and process-heavy.

Interesting read

Thanks for the link. I couldn't find the slides online - a shame, as the talk was interesting but it is rare to find an hour to watch a video. A common theme between both works was trunk based development, which seems like an interesting way to tame version dependencies. The description of SoundCloud reminded me of The Deadline by Tom DeMarco - quite a fun novel exploring the impact of different approaches to process.

I managed to watch this talk

I managed to watch this talk today while cooking and doing the dishes :) Here is a summary:

Google uses one version control repository for all its code, and they don't do branches (all code gets committed to a single trunk). With multiple repos you tend to get library code duplicated in multiple repos and that gets out of sync, and you have very painful merges. At Google there is a single source of truth for the entire Google codebase at every point in time. This also avoids dependency hell because there is only a single version of each library at any point in time. A single repository allows all developers to edit any code in principle, which makes the logistics of moving people to a different team easy. A single repository allows making an atomic change across the entire codebase, e.g. renaming a function in a basic library that is used everywhere. This model allows easy code reuse: you can easily call into any library in the entire codebase.

Disadvantages of this model are that you need a special version control system to handle the size and commit volume at Google's scale. They made their own filesystem so that you can logically have the entire Google codebase on your computer, but it's lazily loading through the version control system in the background. Secondly, this model can encourage too much code reuse, which is a bad thing because it creates a complicated dependency network, and people can depend on private functions in libraries. They fixed this by layering their own visibility system on top, so that you cannot create a dependency on a random file in another library; you can only depend on its public interface.

I hope that is a semi-accurate summary, but for sure I missed a lot of the points made in the talk.

This sounds like a very sane model, but it looks like language support for proper modules would help a lot to avoid the problems they have. In any case far saner than microservices, which turn a modularity problem into a distributed systems problem. When you split a program into two services, you now have the following problems when service 1 wants to use service 2:

  • Concurrency issues, since the services are running in parallel.
  • Distributed failure modes.
  • Protocols & serialization/deserialization.

All that just because you wanted to make the program modular. A better option seems to me a module system with a typed interface instead of protocols with serialization/deserialization, without necessarily making the two sides concurrent, and without necessarily introducing distributed failure modes on each interaction.
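
As a rough illustration of that alternative (hypothetical names, sketched in Haskell rather than ML), the boundary can be an ordinary typed interface: calls stay in-process, there is no serialization layer, no distributed failure mode is introduced, and a callback is just a function passed as an argument.

  module UserService
    ( User(..)
    , UserService(..)
    , inProcessUserService
    ) where

  -- The data crossing the boundary is an ordinary typed value, not a wire format.
  data User = User { userId :: Int, userName :: String }

  -- The module's interface: everything a client is allowed to do with it.
  data UserService m = UserService
    { lookupUser :: Int -> m (Maybe User)
    , onSignup   :: (User -> m ()) -> m ()   -- a "callback" is just a function
    }

  -- A purely in-process implementation. A networked implementation could
  -- satisfy the same record of functions later, so callers only pay for
  -- concurrency, retries, and partial failure once they actually need them.
  inProcessUserService :: Monad m => UserService m
  inProcessUserService = UserService
    { lookupUser = \uid ->
        return (if uid == 1 then Just (User 1 "ada") else Nothing)
    , onSignup = \_handler ->
        return ()   -- toy implementation: nothing to notify about
    }

Nothing here stops the implementation behind the interface from becoming a remote service eventually; the point is just that the modularity problem and the distribution problem can be taken on separately.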

This is the Google tool

This is the Google tool story that we've known for a few years now. It doesn't really come from a SE perspective, but a systems one. I've been trying to push a broader version of it called code wiki over the years (a programming experience needs one shared open namespace, yeh!).

The second point is not really Google's invention, but Amazon's. The concurrency is a pain, but the big problems turn out to be logistical and organizational, which this model solves.

So microservices are actors

Facepalm.

We've finally found the level of granularity where actually implementing something as an actor is a win (implementing, not just modeling).

Hewitt may be disappointed, these are large pieces.

I guess you could say they

I guess you could say they are actors, but I don't think that is what is important in this context. I suggest taking a look at Michael Church's famous rogue G+ essay.

If the only dimension one can see is PL, the success and failure of these things would look quite mysterious.

Couldn't find the essay

Searching "g+" doesn't find it
searching google+ just finds google plus pages
searching that on his site just gives false positives.

I'm on mobile now and can't

I'm on mobile now and can't find it. It was big news sometime back. I'll try to find it when I'm on a laptop.

Edit: I got the person wrong, it was Steve Yegge; the famous rant is here.

big scale is a big b*tch

So your day job is being a salesperson for some Big Iron? :-) Now, I guess there are ways to get the advantages of being distributed over cheap hardware that are different and hopefully saner, but the fact is that eventually, somewhere in the stack, one might really want to go distributed.

Scaling is a different

Scaling is a different problem than modularity. Distributed systems can be a good solution in that case, but big iron can be too. The microservices hype is mostly coming out of web development. Hardware is so fast these days that a lot of companies would be fine running their website on a single core. IIRC you can buy a server with 1000GB of RAM and 64 cores for something like $50k, not really more expensive than buying 16x a 4 core server with 62.5GB RAM each. Only a handful of websites in the world need more than that. Making a distributed system run faster than a single core turns out to be really hard.

And scaling is a different problem than modularity...of course you have to go distributed at a certain scale, but willingly opting in to that when you only needed modularity is something else.

fail over

This is getting OT, apologies. But distribution purportedly brings other benefits, like redundancy and other flexibilities. Doing everything in one core on one machine can easily lead to ossification, etc.

terminology with finer grain might help

It's hard to pick a spot in this sub-thread for a reply. Yours is a good prompt. :-)

The microservices hype is mostly coming out of web development.

So if you say that word, people think you mean a web api, making it hard to talk about that same idea applied to other sorts of protocol and transport, including every other way to message local and remote processes. The idea is a fine-grained service used to compose a larger service.

Suppose the smallest service you can arrange is a nano service, perhaps because nanosecond scale latency is possible (i.e. sub-microsecond), but not typical outside unusual local best cases. Call each mechanism you use an instance of nt for nano tool where tool is deliberately vague and internal, whereas service is an outside POV term. If it acts like an actor, tack on an alpha (α), and if it acts like a process, tack on a pi (π). You might tack on a w if a nano tool has a web api too. If you try to use different glyphs for each sort of possible nuance, you get a mess of course, but perhaps this is better than not being able to talk about the nuances at all. Attempts to apply logical positivism in terminology usually get weird eventually, so keeping it hand-wavy is less distressing.

You could expose a nano service with a web api and URIs for resources, or just access everything privately instead, which is more the norm since http is rather heavyweight for things that message with high frequency.

This sounds like a very sane

This sounds like a very sane model, but it looks like language support for proper modules would help a lot to avoid the problems they have. In any case far saner than microservices, which turn a modularity problem into a distributed systems problem.

I'm torn on this issue. In principle, I thought much as you express here, but as I'm now going through the process of refactoring some tightly coupled legacy code, it makes me truly appreciate the enforced modularity of microservices. It's simply too easy to cheat unless your language is good at enforcing proper modularity, which few are.

That's interesting. What

That's interesting. What about ML style modules, where you package up the part that you would turn into a microservice in a module, compiled separately, which you can only interact with via its specified interface? I guess you can do roughly the same in OO languages, but you need encodings and enough discipline which doesn't really work in practice :) On the other hand look at what people are doing with microservices to implement e.g. a callback. With modules you would just pass a function, but with microservices you need some kind of callback mechanism, which isn't fun at all with the concurrency and distributed failure modes. It's not clear to me that this is any easier than manually enforcing abstraction boundaries in languages that lack enough support for it.

Private APIs

There was some discussion of this (about two-thirds of the way through the talk?) and the solution that they use is private APIs. The default status of an API is private: it wasn't discussed what this means in terms of discovery; perhaps they are still publicly listed in a catalogue, or they use out-of-band means to communicate what is available.

To use an API, the client has to subscribe to it; this is agreed with the provider (library owner). The API provider is then responsible for not breaking clients during updates of either the library or the API: there is a formal link in the system to denote usage. This is integrated with automatic testing of commits and code reviews. To make this sane, the clients have full access to the code behind the API (through the giant file-system) and can push changes upstream to the library owner for integration.

As I understood it, this breaks the strict owner/client model of libraries, and the users of the library act as peers to propagate changes and their effects. There is still an abstraction boundary in the system, but they have altered both parties' relationship to it so that changes are atomic entities that cross the boundary.

On the other hand look at

On the other hand look at what people are doing with microservices to implement e.g. a callback. With modules you would just pass a function, but with microservices you need some kind of callback mechanism, which isn't fun at all with the concurrency and distributed failure modes.

True, but if you organize your program around idempotent continuations, it's not too bad; this is what you should be doing anyway, so the incentives align. I think there are some natural functionality boundaries within some monolithic programs where microservices can fit, but I agree that the hype is mostly surrounding services written in dynamically typed languages where enforcing modularity is either difficult or impossible.

this is what you should be

this is what you should be doing anyway

Because programming really should be that hard?

I agree it has to be this way now, but the stilted interactions that result are hardly what we should be aiming for (and it's perfectly understandable why people re-invent callbacks).

What are idempotent continuations

?

Continuations whose

Continuations whose invocation is idempotent. It's the simplest way to ensure robust operation in the presence of network partitions.

Idempotency is necessary but

Idempotency is necessary but not sufficient. You need time-outs, retry logic, and if it still doesn't work you need code for handling that failure condition. If you have multiple things going on at the same time you need distributed synchronization, which in difficult cases may end up with something like paxos. Distributed systems are hard.
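
For the time-out and retry part specifically, a minimal Haskell sketch (assuming, per the parent comments, that the wrapped call is idempotent; it ignores exceptions and says nothing about distributed synchronization):

  import System.Timeout (timeout)
  import Control.Concurrent (threadDelay)

  -- Bound each attempt, retry a few times, then give up so the caller's own
  -- failure-handling code can take over. Calling this repeatedly is only safe
  -- because the wrapped action is assumed to be idempotent.
  retryWithTimeout :: Int      -- attempts remaining
                   -> Int      -- per-attempt timeout, in microseconds
                   -> IO a     -- the idempotent remote call
                   -> IO (Maybe a)
  retryWithTimeout 0 _ _ = return Nothing
  retryWithTimeout attempts usec action = do
    outcome <- timeout usec action
    case outcome of
      Just x  -> return (Just x)
      Nothing -> do
        threadDelay usec   -- crude fixed backoff before the next attempt
        retryWithTimeout (attempts - 1) usec action

Even this much already forces a decision about what the caller does with Nothing, which is exactly the failure-handling code being pointed at here.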

I'm not sure I agree on

I'm not sure I agree on embedding timeouts within program logic. It introduces a new source of non-determinism, and it's unnecessary given idempotency.

Aborts and retries should be driven by interactive program input. This ultimately bottoms out at non-deterministic users, which ought to be the only source of this sort of non-determinism.

That can work in many cases,

That can work in many cases, but you have to be careful. What if you want to do two things, and the second thing keeps failing? You still did the other thing. Do you want to roll that back? Maybe. How do you go about making operations idempotent? Do you attach a unique ID to each operation and log somewhere that it happened, so that you can ignore it the second time around? How do you garbage collect that? Or do you modify the semantics of each operation?

Idempotency isn't necessary.

Idempotency isn't necessary. You can log, rollback, reconcile, etc...I would say idempotency only solves a small part of the problem in a way that costs too much in terms of usability.

do tell

Did you mean usability for the programmer, or the end user? I'm too ignorant and inexperienced to be able to follow this sub-thread well, but I would very much like to learn from it. Can we posit some stages of evolution, much like the rightly much maligned linear picture people draw of evolution up to homo sapiens sapiens? Would idempotent continuations be a decent baby step along the way? I feel like my little brain might utterly fail to make a good system if I had to do e.g. reconciliations (even though e.g. that's how banks do it all today).

Making the programmer worry

Making the programmer worry about idempotency themselves just means you expect them to do all the work in what is probably not a naturally idempotent operation. So they split their computation up into a bunch of idempotent operations, using continuations or whatever bizarre magic is available, and ultimately get something that is very far away from what they wanted to do in the first place. Any solution that involves manually breaking up a computation via continuations is a big red flag, in my opinion.

Any side-effecting operation

Any side-effecting operation can be made idempotent simply by binding it to a promise. Replay until you receive confirmation that the promise settled. There's nothing magical about it.
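
One way to read "bind it to a promise and replay until it settles" is as a sketch like the following (hypothetical and in-memory only; a real system would persist the log and deal with concurrent access): each operation carries a unique request id, its outcome is recorded the first time it runs, and a replay with the same id just returns the recorded outcome instead of re-running the side effect.

  import qualified Data.Map.Strict as Map
  import Data.IORef

  type RequestId = String

  -- Settled outcomes, keyed by request id. In-memory here; persisting this is
  -- what would make replays safe across crashes as well.
  newtype Outcomes a = Outcomes (IORef (Map.Map RequestId a))

  newOutcomes :: IO (Outcomes a)
  newOutcomes = Outcomes <$> newIORef Map.empty

  -- Run the effect at most once per request id. Replaying with the same id
  -- returns the recorded result, which is what makes "retry until confirmed"
  -- safe to do from the outside.
  runIdempotent :: Outcomes a -> RequestId -> IO a -> IO a
  runIdempotent (Outcomes ref) rid effect = do
    settled <- readIORef ref
    case Map.lookup rid settled of
      Just result -> return result        -- already settled: skip the effect
      Nothing     -> do
        result <- effect                  -- first (and only) real execution
        modifyIORef' ref (Map.insert rid result)
        return result

Whether the fragmented control flow this style induces is acceptable is a separate question; this only shows that the replay-safety mechanics themselves are small.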

You still have that

You still have that fragmented control flow that makes debugging super difficult, and you still have to retry manually. Promises are a great trick of syntax, but they are just not very usable. It's a hack that works but is not very satisfying.

[...] you still have to

[...] you still have to retry manually.

I'm not sure what promise/future-model you're envisioning, but this isn't a property of the one I'm familiar with. Perhaps you're assuming some sort of promise resolution timeout that clients must handle themselves, but the Waterken server demonstrates that it isn't necessary to foist this on clients; it's better to put the semantics in the runtime where it's handled properly for every program. Waterken closely matches the E programming language's semantics, where promises are persisted to the object store and retries are transparent.

Fragmented control-flow is maybe a fair charge, but I don't think it's all that bad, as long as you capture enough of the original context to generate a proper error.

therein lies a rub

Getting really good debuggability is apparently something we rarely know how to build into our systems. I kind of wish somebody would come at making a new programming language purely from the vantage point of making it like falling-off-a-log easy to debug. Unfortunately the difficulty of achieving that seems to me to go hand in hand with "all models for concurrency suck".

(I guess one gordian knot solution I'd like to see investigated would be to just bloody well record *everything*. And have tokens to join up the splits so that when I get a callback on another thread, I can figure out all the way back to what originated it, 5 hours ago and long gone. And pull that stack off the logs and see wtf. Who cares if it is 10x/100x slower, for research purposes. Throw it onto big iron and gigabit networks and SSDs. It will end up being the most debuggable distributed C64 ever.)

You know the Eidetic

You know the Eidetic Systems?

http://muratbuffalo.blogspot.de/2015/03/eidetic-systems.html
https://www.usenix.org/conference/osdi14/technical-sessions/presentation/devecsery

Abstract:

The vast majority of state produced by a typical computer is generated, consumed, then lost forever. We argue that a computer system should instead provide the ability to recall any past state that existed on the computer, and further, that it should be able to provide the lineage of any byte in a current or past state. We call a system with this ability an eidetic computer system. To preserve all prior state efficiently, we observe and leverage the synergy between deterministic replay and information flow. By dividing the system into groups of replaying processes and tracking dependencies among groups, we enable the analysis of information flow among groups, make it possible for one group to regenerate the data needed by another, and permit the replay of subsets of processes rather than of the entire system. We use model-based compression and deduplicated file recording to reduce the space overhead of deterministic replay. We also develop a variety of linkage functions to analyze the lineage of state, and we apply these functions via retrospective binary analysis. In this paper we present Arnold, the first practical eidetic computing platform. Preliminary data from several weeks of continuous use on our workstations shows that Arnold's storage requirements for 4 or more years of usage can be satisfied by adding a 4 TB hard drive to the system. Further, the performance overhead on almost all workloads we measured was under 8%. We show that Arnold can reconstruct prior state and answer lineage queries, including backward queries (on what did this item depend?) and forward queries (what other state did this item affect?).
--

Software: https://github.com/endplay/omniplay

Very interesting paper.

Thanks for posting it. It's rare to see such a novel idea developed so thoroughly in a single piece of work.

Sounds like "event sourcing"

Sounds like "event sourcing" on steroids; also like deriving a purely functional semantics from an imperative semantics without too significant an overhead. Interesting.

The Ken family of protocols

By the way, I just came across these slides which briefly discuss the Ken family of protocols that provide error recovery without the domino effect, of which the Waterken server I describe above is now but one example. They've been busy, because they have a C-ken implementation for C, MaceKen for their C++ distributed systems toolkit, V8ken for the V8 JS VM which they hope to eventually evolve into NodeKen based on Node.js, and SchemeKen.

Overall the slides cover a variety of error recovery mechanisms from the 60s onwards, and their associated pitfalls, eventually settling on an interesting Prolog-inspired trailing mechanism. When combined with event loops as found in the E language and Node.js, and the write barriers GCs already use, it allegedly wouldn't introduce much overhead, yet provide simple recovery in the face of distributed faults.

but but

If it reveals the underlying long-term difficulties in getting the thing *right* then I think that can pedagogically be a *good* thing. Just because something is easy or hard, doesn't directly map to it being right or wrong. Often quite the opposite is true in software development, it seems to me.

Of course, I do not want things to be harder or obtuser or whateverer than they need to be.

In the long run, I am always hoping that functional style will come full circle and let us do things the "obvious" way and yet have it not suck so much to debug and maintain. i.e. the Haskell is the finest imperative programming language evar kind of perspective.

We are making improvements

We are making improvements in the wrong direction, optimizing for metrics that we shouldn't be optimizing for. Promises will never be easy; we can dress them up with better syntax and whatever, but their scattered operational semantics are unshakable...just like lazy evaluation. There is no circle here, it is primarily a prescription for a bitter medicine of "eat your vegetables, harder is good for you, suck it up" that trades off usability for elegance via clever solutions.

Idempotence isn't necessary

I agree that idempotence isn't required for all networking models. It is important for some networking architectures, though, and I think the language should let you spell out what your architecture is and help ensure that you have that property when it's the basis of the correctness of your networking code.

Anyone can use

I think the point of microservices is to provide a remotely connectable API with authentication such that third parties can integrate with your services over the internet. Amazon is the classic example. All Amazon systems are implemented as services, and even their own internal software uses the same API that we can connect to. Microservices are about having network connected APIs and no private APIs. This means you have to deal with the authentication issue too, as requests can be coming from anywhere not just trusted code or machines.

Edit: so I'm saying it's not just about continuations (idempotent or otherwise), but about network remote call semantics and pervasive authorisation control. You cannot rely on having the address of a call to give you permission to use it, as all call addresses would be published.

This isn't a new idea, from

This isn't a new idea, from The Prospects for Psychological Science in Human-Computer Interaction:

A third example comes from the computer-science world: Millions for compilers but hardly a penny for understanding human programming language use. Now, programming languages are obviously symmetrical, the computer on one side, the programmer on the other. In an appropriate science of computer languages, one would expect that half the effort would be on the computer side, understanding how to translate the languages into executable form, and half on the human side, understanding how to design languages that are easy or productive to use. Yet we do not even have an enumeration of all of the psychological functions programming languages serve for the user. Of course, there is lots of programming language design, but it comes from computer scientists. And though technical papers on languages contain many appeals to ease of use and learning, they patently contain almost no psychological evidence nor any appeal to psychological science. Some research on the human side does exist, of course. But imagine a scale with Shneiderman's Software Psychology (1980) and an armload of books weighed against the two volumes of Aho and Ullman (1972) and the library of books on compiler construction, parsing, program specification, correctness proofs, denotational semantics, applicative languages, LR(k) grammars, and structured programming. Relative to what is found on the human side, the technology of programming languages is technical and deep - all those algorithms to know, all those theorems on syntactic language classes, all those operations on data structures. There are also things to know with respect to the psychology of programming languages - but far fewer, and much of that consists of the details of idiosyncratic experiments. The human and computer parts of programming languages have developed in radical asymmetry (Card & Newell, 1984). This comparison does not imply that knowledge about the human is less useful than knowledge about compilers, it just shows the operation of Gresham's law. Interestingly, the technology of programming languages has its own ways of diverting attention from the missing human part of the enterprise. Currently, the panacea in the programming world is "rapid prototyping." The idea is that if a designer has sufficiently rapid feedback from what happens when a user uses a program, nothing else is required. Rapid prototyping thus bypasses the need to know anything about the human. In other words, hard science, in the form of computer technology, drives out the soft science of the user.

There isn't even a good COMMERCIAL home

I am among thousands of people who use Eclipse to do C/C++ in embedded systems. The experience is mostly good. But for the first couple of days of a given project it is horrible, and for the rest of the project's duration it could fairly be called embarrassing on some points. And no one is doing anything about MY pain points, which are costing real companies real money.

For example, even though I specifically tell the IDE that I'm doing a cross-compiled project, most of what CDT does is based on the host's compiler; it never even asks which compiler my code would actually be built with, so the red squiggly lines it draws aren't accurate.
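
To illustrate why that matters with a made-up example (this file is purely illustrative, not a description of CDT internals): which half of this code is "live" depends entirely on the predefined macros of the compiler you ask, so an indexer configured against the host compiler instead of the cross compiler reaches the wrong conclusion about the same source.

    #include <stdio.h>

    int main(void) {
    #if defined(__arm__) || defined(__aarch64__)
        /* The branch that matters when built with the ARM cross compiler. */
        puts("built for the ARM target");
    #else
        /* The branch a host-configured indexer believes is the live one. */
        puts("built for the host");
    #endif
        return 0;
    }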

Okay, so that's a problem that may or may not be waiting on a properly filed bug report. I can recite another problem that does have a bug filed, and a decent solution, but the pain is not well enough understood for even the 90% solution to be committed to the tree!

I don't want to hijack by ranting about the specifics; I want to contribute to pessimism in the abstract. I want to say: never mind your Smalltalks and your live-coding implementations, the MAINSTREAM isn't really paying attention to the experience (PX, IIUC) problems that exist IN THE MAINSTREAM.

(I don't really mean "never mind": I think the work is cool, and obviously Smalltalk had a big impact.)

I'm willing to believe that

I'm willing to believe that even mainstream companies don't produce perfect software, but I doubt it's for lack of trying. I've read people complaining about Visual Studio (which you pay tons of money for, unlike Eclipse).

For another kind of example, I've been impressed with the Swift manual that Apple produced — I've seldom seen such high-quality prose in documentation. I had to learn technical writing (like many here), and I felt professionals were at work there — in free docs! But I've also heard of many unfortunate bugs in Swift's first release.

On another note, I also loved Odersky's Programming in Scala; only an academic could keep me interested from cover to cover. But I lived through many Scala bugs: I first got to know the compiler team simply through bug reports. Yet I've seen up close the heroic amount of effort people put into making things better; one such piece is here: http://lambda-the-ultimate.org/node/5270.

Back to your comment: I'm sure your problems are real and frustrating, but all of us (me included) should really try not to attack projects just because they're not perfect, or people just because they don't produce perfect software. Yes, the temptation is strong, but giving in to it makes things worse.

(Almost?) everybody here programs, and I'm confident our programs don't meet every possible goal perfectly, simply because software is complex and programmers are imperfect (even excellent ones). I'm sure I've written my share of idiocies, though I've stopped keeping count, and I don't think that makes me a complete idiot as a programmer. Worse, good developers are prone to perfectionism (me included), which sometimes leads to bad management decisions (which might be why your 90% fix hasn't been applied, though that depends on the particulars).

How much money does IBM make

How much money does IBM make selling Eclipse these days? Probably not much, and I think they've handed it over to open source now. The likelihood that bugs get fixed is directly tied to the economic benefit to whoever fixes them, and as long as we want our dev tools to be free and the benefit is merely collective...well...capitalism doesn't work like that. I can guarantee you that JetBrains and the Visual Studio team do care...because their paychecks depend on it.

re: IBM making money on Eclipse

Eclipse was open source from day 1. (Or, at least, since v1.0, which came out in late 2001, IIRC.)

Their plan at the time was to put out a really high-quality Java IDE (which was better than anything else then on the market, free or commercial), capture market share (at which they were incredibly successful), build a community around it (at which they were also successful for a time; I'm not sure how it is these days), and use it as the foundation of the big-ticket IDEs from their Rational arm.

Yes, but I think the plan

Yes, but I think the plan changed. I believe they disbanded the JDT team, at least, if not the entire Eclipse team. The product is completely community-driven now.

Visual Studio

Visual Studio is very far from a panacea, at least for C++. Evidently the EDG C++ compiler is used for IntelliSense rather than MSVC itself, at least in UE4 projects.

JetBrains works well for Java. Not for Scala.

Mainstream tools are at best disappointing and at worst poor, in my opinion.

I agree; success requires

I agree; success requires more than caring. The whole dev tool industry is ripe for disruption.

re MY pain points, which are costing real companies real money

What are the wages in question divided by the revenues from the end-product?

i am your fan

"I don't want to hijack by ranting about the specifics, I want to contribute to pessimism in the abstract."