The Programming Language Wars: Questions and Responsibilities for the Programming Language Community

The Programming Language Wars: Questions and Responsibilities for the Programming Language Community by Andreas Stefik & Stefan Hanenberg

video of presentation @ CodeMesh 2014
slides in PDF format
the paper in pdf

A presentation of the results of a number of randomized controlled trial studies of the effects of programming language designs on users.

He spends a great deal of time on the need for the programming language creation community to do more of these kinds of trials and why he thinks they are so necessary.
He also spends some time defending against the expected attacks that their goal is either one language for all or a different language for everyone, both of which seemed like odd accusations to me.

Among the conclusions are

  • On average, static typing improves developer productivity under a wide variety of conditions
  • Threads, compared to software transactional memory, lead to approximately 8-fold more bugs among those in the sample
  • Some languages (e.g., Perl, Java) have symbols so poorly chosen that a randomly generated language is just as easy for a novice to initially use
  • Overall, measurements of the impact of Aspect Oriented Programming appeared to be limited and most useful in simple situations like logging
  • C style syntax hurts comprehension compared to alternatives

He says that languages shouldn't be created or changed without this kind of research to back up the ideas.

Only half the interesting detail is in the slides.

questions start @ 34:00

the Perl vs. Java research was previously discussed as Perl vs. Random Syntax here on LtU

C syntax

I recently converted a Unicode processing library from C to Lua.

Despite having decades of experience programming in C, I was surprised to find that my eyes can scan and I can read the Lua code much faster.

if expression then
...statements
elseif expression then
...statements
else
...statements
end

is apparently easier to scan than
if ( expression )
statement | { statements }
else if (expression) statement | { statements }
else statement | { statements }

If you have to scan your eye back and forth to be sure whether something is delimited then you can't chunk the whole screen unconsciously. It takes time to figure out the organization of C code and verify it.

More experimentation is needed.

More experimentation is needed.

I am wondering about the following kind of syntax:

expression ?
  True: ...commands
  False: expression ?
          True: ...commands(|)
          False: ...commands(|) [?] [?]

where (|) is the single Unicode character for separating alternatives and [?] is the single Unicode character for the case terminator.

Visually

I've always been fond of the notation used by the meta-LISP in one of the first computer science books I owned, "The Anatomy of LISP."

cond [
condition -> expression
condition -> expression
t -> expression
]

But I don't think it's the fastest to read.

I think the advantage is to ALWAYS-PRESENT, two-sided delimiters, so
begin
...statements
end

is better than

begin statements | { statements }

also, even though they look kind of ugly, I think fat, recognizable-at-a-glance ones are better than thin anonymous ones

so then ... else is better than
) { ... }

perhaps "then ... endif" would be better than "then ... end" because you'd have a visually unique end for if's as opposed to other statements. Ugly but faster to read. And I think faster is better than pretty.

I think I've said all of these things before.

An innovation that Lua has that other languages lack is syntax that allows statements to be optionally undelimited even by carriage returns.

That's only possible in a grammar that isn't position dependent. So, unlike in other languages, "[" can only be used to start a postfix index; it has no meaning in a non-postfix position. "{" only has a meaning in a non-postfix position. "*" only has meaning as an infix operator; it is not a prefix operator, etc. "(" is the only exception to that rule, so it is defined as being greedy: f(b) disambiguates as a function call, never as f;(b), and if you want it read otherwise you have to use the statement delimiter ";".

The fact that symbols are unambiguous, not depending on position rules, ALSO means that you don't have to scan your eyes back and forth.

I don't know if taking the already annoyingly sparse ASCII character set and limiting how you use it that much is worth it.

The code does scan fast, but the trade off is every-thing-is-text and very limited, not math-like syntax.

I think there are times when you really want math-like syntax. Maybe you want a language with little dialects that you can switch between. Perhaps even one with TeX-like math layout.

conceptually clear is not visually fast

expression ?
True: ...commands
False: expression ?
True: ...commands
False: ...commands

is conceptually as clear as possible, but the problem is that "commands" is likely to be long and break up the visual structure. And if you have to distinguish command from commands, that's conceptually clear but visually requires two cases to recognize.

Having more than one case in how things are grouped makes the code slower to scan.

Also conditions in code are often chained, but your structure nests instead of making that common case simple - big mistake visually, it complicates it horribly.

Though I do kind of like the similarity to smalltalk which gave you

condition-expression ifTrue: [statements] elseIf: [condition] then: [statements] else: [statements]

What's nice about that is that the brackets explicitly make what's in them unevaluated blocks, lambdas, and control structures are just messages that blocks or other values understand. So the above is just an expression like any other where the fact that conditions and expressions are only evaluated as needed is explicit.

The above is actually the message ifTrue:elseIf:then:else: with four parameters (which are unevaluated lambdas) sent to a boolean.

Well that's conceptually very clear, and a very terse notation anyway.

Stefik's studies

Since you are interested in syntax: have you taken a look at Stefik's investigations? I think he showed convincingly that syntax matters can be studied experimentally.

A third alternative to the

A third alternative to the line-oriented and the pervasive-braces {statements} styles is the "end" style

if c then statements else statements end

Meyer discusses its merits in both brevity and change safety:

http://www.bertrandmeyer.com/2014/05/17/code-matters/

Unicode delimiters can help readability and safety

Unicode delimiters can help readability and safety, e.g.

if condition 
  then commands 
  elseIf condition 
    then commands 
    else commands endIf

where the above may have some ambiguity problems; alternatively, the following is unambiguous:

condition ?
  true: commands(|) 
  false: condition ?
           true: commands(|) 
           false: commands[?] [?]

where (|) is the single Unicode character for separating alternatives and [?] is the single Unicode character for the case terminator.

If you make a language require special characters

that's great, but how do you expect people to input them?

It's similar to the fact that you could make a visual editor, but people won't like being locked into one editor.

We may get there with the visual editor though, so that people can have mathematical layout sometimes. We have a few universal math rendering technologies, it's the input and editing side that isn't universal yet.

Not that I think your example makes too much sense.

why isn't "elseif" enough of a delimeter? Also, why are you indenting as if nesting things that are chained rather than nested?

Unicode using keyboard chords instantly converted by IDE

Unicode characters can be input using keyboard chords that the IDE instantly converts to Unicode characters as they are typed.

The principle is

error free readability is important.

I used to think that saving keystrokes was important, but that was only approximately true.

Readability is paramount and conciseness can help

Readability is paramount and conciseness can help.

My reaction is mostly negative

I agree with one thing, which Stefik would probably identify as his primary thesis: it would be nice if people did more well-founded scientific studies on the practical efficacy of programming language design. He is absolutely correct that there is a shortage of this kind of research, and more good, methodologically sound work in this field would be a great benefit to anyone interested in the design of programming languages.

I disagree with pretty much every other conclusion suggested in Stefik's work, such as his paper (with Stefan Hanenberg) "The Programming Language Wars". The paper seems to imply not just that empirical studies are good, but that language designers have a responsibility to stop building this chaotic farrago of unscientific languages, and instead incorporate such empirical studies as a necessary prerequisite.

That suggestion is grossly impractical, for at least two reasons.

For one thing, there are good (though regrettable) reasons why this kind of empirical study is rare: they are expensive, time-consuming, and extremely methodologically challenging. Any one of these problems would make "scientifically-based" language design of this sort impractical. (By the way, if you are interested in the why and how of this kind of study, I recommend the O'Reilly book "Making Software", an anthology on empirical studies of software engineering in general, not just of language design.)

The other problem, I think, is that Stefik has made a category error: language design is an art, not a science. The design space is too large, and too poorly understood, to be anything else.

Oh, there is a third reason Stefik's suggestion is impractical: it's simply not going to happen. Language designers will continue to happily exploit whatever tools come to hand, whether they be mathematical or empirical -- but I don't think they'll wait for either. And that's a good thing.

In the real world of design

In the real world of UX design that my wife works in, empirical studies are quite rare for the same reasons that you point out. I would like better empirical tools at my disposal to answer questions, but I wouldn't be beholden to them. Stefan at least has been very honest about the limitations and challenge of empirical research, and I think the papers take overly strong positions to make a point even if the reality is more nuanced.

Anyways, this comes up a lot, and will come up again next week. I wish we could be honest with our opinions, but I think political correctness dictates just saying yes, we need to do this even if the real feelings are very different.

Programming language design is still in its infancy

Thanks Sean!

These are very interesting experiments.

There is a lot more that can be done with Unicode and interactivity.

Cheers,
Carl

Actually...

...I don't think we need to do these kinds of statistical tests, at all. Basically, almost every design question has a sufficiently huge space of models that there isn't actually enough data to draw statistically reliable conclusions, even if you had access to every program ever written. Feature interactions blow up your space of models, and the generativity of programming blows it up with nuclear overkill.

So unless you come in with a theory that tells you what you can safely ignore, statistics can't help you. If you do have that, then a well-designed experiment can be very helpful, but it will generally be quite specific and narrow in applicability. A really nice example is Marceau et al's Measuring the Effectiveness of Error Messages Designed for Novice Programmers, which is able to draw statistically meaningful and useful conclusions by carefully restricting the scope of the experiment.

But to imagine that this kind of approach can be extended to whole-language design is utter nonsense.

This is a good point, and

This is a good point, and will help in the battles ahead.

We know empirical methods can work in the small, and there is definitely useful knowledge to be gained, like Fitts' law. But empirical validation of even a bit more complexity... doesn't end well. The problem we have is that our alternative evaluation mechanisms (the proof or the benchmark) aren't useful either from a design perspective.

So our CS conferences want to admit more design work, but the work must also be accompanied by a science aspect (e.g. a proof or benchmark) because only science is worth publishing.

Sarcasm, I hope

"because only science is worth publishing."

You're reminding me of my father who was a professor of drama. He was pissed off when a new department head insisted that he spend his time publishing research (what is "research" in drama anyway) instead of doing useful things like putting on productions with students.

So he had to find letters by dead authors and publish "research" that no one will ever read in journals that no one has ever read, that only exist so that people like him can publish and save their jobs.

I was amused to notice my sister doodling on the back of historic letters though. She wasn't young at the time, she was just a horrible person >.>

Sarcasm? I'm not sure how

Sarcasm? I'm not sure how else it can work. Ideas and designs are a dime a dozen, how do we sift through them to separate the wheat from the chaff? We need something.

a second presentation @ University of Washington

video
contents very similar but questions start @ 44 minutes in.

Threads "lead to bugs"

but maybe that just means that you need more experienced programmers when you use threads?

Microsoft has done a lot of underhanded things to try to keep programmers from writing multithreaded code.

Early on I wrote some programs where processing ran asynchronously with user interaction, on a model where the window and the processing involved messages posted between separate threads.

An advantage to that is that the user interface never locks up, the window is always ready to move and redraw even when the program is busy. Some kinds of processing will disable some menu items till they finish, but they won't lock up the computer's user interface.

... I found that Microsoft's updates to their MFC user interface libraries put ever more hidden obstacles in the way of that program running over the years. There is a reason that Windows programs lock up the user interface while processing: Microsoft is trying to force programmers to avoid multithreaded programming.

Without documenting the fact, they place handles into thread local memory, so that objects can only be accessed from the thread that created them. Without documenting it, they later stopped threads from posting user interface messages to each other.

They made the program break, then they made it break again.

They obviously believe that programmers are too stupid to use threads.

Wouldn't it be better to just educate them on how to use threads safely? Wouldn't it be better to create thread-safe data structures and libraries and algorithms for them to use?

Wasn't what I was trying to do valuable? Wouldn't it be useful, for instance, to define a user interface standard that doesn't poop all over your screen whenever a program is processing? I get tired of windows disappearing in Windows 7 because Windows will hide a window you try to interact with that's been busy too long.

It's not threads, it's uncontrolled sharing

Sure, sufficiently skilled programmers can write reliable code with uncontrolled sharing, just as sufficiently skilled programmers can do the same with uncontrolled GOTOs (including jumping into procedures that haven't been called yet). But in both cases it's far better not to.

It's not threads, it's the non-management of time.

And then if you don't have that, it is uncontrolled sharing.

As an analogy, some programmers want to rely on garbage collectors, while others prefer to take the care to stay in control and not take the hit.

Massive concurrency requires automatic storage management

Massive concurrency requires automatic storage management (often called "automatic garbage collection").

They also rely on time

They also rely on time management (automatic coordination of state updates).

I need to learn about this

I don't even know what the options in reactive programming are.

I was surprised when some mention of Alan Kay's project said something about having a clock and doing discrete time steps like a video game.

They made it sound like that solves some sort of problem with reactive programming but maybe they just want an animated user interface, and picking a frame rate and sticking to it is really the only way to animate.

That is one way to do it,

That is one way to do it; transactions are another. You'll find many different automatic coordination schemes out there, usually with a cost that the high-perf crowd doesn't want to pay (just as they don't want to pay for GC). Rust is trying to solve both problems more statically.

Glitch combines both approaches, throws in something like React, but is very researchy.

not managing whatever burns you

Bugs are caused by programmers not knowing what their code does, or when it does it, or whether code elsewhere tries to use results before data exists or after it is gone. Messes lead to bugs, as well as mental lapses, or simple indifference to knowing what ought to happen. (I always have some coworkers who feel understanding is optional, with less than ideal results.)

Part of programming involves thinking. One of my sons is learning to code lately, and I tell him good results don't occur magically -- if he doesn't have an idea why his code should work, it probably won't.

Language features making it hard to form a good mental model are a hazard. Some semantics of features are just tough to grasp for beginners, like pointers or independent flow of control like threads that can run in ways that overlap according to the model. A language might aim to clarify those for users. Hiding complexity on purpose, to keep a programmer in the dark for their own good, is perhaps not a good idea if it means the mental model is wrong. When sharing and timing both have consequences, not grasping them breaks whatever the plan was.

Educating programmers ought to help, as well as designing clear and unambiguous model definitions in the first place. (Some folks feel if it was hard to design, it ought to be hard to understand too, but that's not required.)

Cause and effect should be visible to a programmer, unless a tool makes it impossible to get wrong. Tools should make time and space issues visible and manageable if it can be done wrong.

On a side note, the high perf crowd finds refcounting acceptable as GC when reclamation is not monolithic (so collect is incremental instead of big bang.)

mental models

I fully agree with your assertion that "language features making it hard to form a good mental model are a hazard." But I don't believe this is contradictory to managed features like time or space.

Rather, we can still have managed space, e.g. using GC. But maybe we should avoid difficult-to-predict behaviors like imprecise closures and laziness, and favor predictable models like uniqueness or acyclic data with refcounts. And we can have managed time, but maybe it should also be more explicit in the model, e.g. using temporal logic or FRP or a ChucK-like scheduler.

The important difference here isn't managed vs. unmanaged; it's implicit vs. explicit.

peripheral thoughts

It's refreshing, for someone like me who hasn't drunk the Kool-Aid on lazy evaluation, to hear someone else articulating why it's bad.

Preference for explicit over implicit is, of course, one of my arguments for fexprs instead of macros/quotation.

laziness

I don't have experience with normal order laziness yet, but I thought it was interesting to take a real program that needed something like it and see if laziness helped.

As formulated for Haskell, not so much, but I think Haskell could be improved.

Think about this: I had a sound processing platform that was going to subject a preexisting file to multiple stages of filters. Some of those filters would be filter banks, even critically sampled filter banks that change the sampling rate.

Each filtered moment in time depends on multiple moments of time in the previous stage, but you also want to process more than one sample at a time so that you can optimize loops and interleave them.

So my design used ring buffers in C++ and I hoped that if I made the sizes of the buffers correct then when I pulled data through I wouldn't end up calculating anything twice... I don't think I got that, but I didn't debug it well enough to be sure. It was always slower than I expected.

You'd think that lazy processing would save you from needing the buffers, because you could use data and it would disappear as soon as it wasn't needed. But no that's not right:

1) if the program has the ability to decide to back up, then no data will EVER be thrown away. Haskell memoizes and has no notion that some references aren't worth saving data for. It has no notion that some data is better recalculated, or that sometimes memory pressure should count.
2) Haskell has no way to express "do more work now, it will be needed later and we can optimize the loop"

Problem 1 is the general one, though. Normal order evaluation with memoization is a kludge that doesn't have enough options to be useful in the situations where you want it. It's too much a mathematician's abstraction and too little an engineer's tool.

Of course it wouldn't be useless there. You solve problem 1 by hiding from the compiler the fact that you can seek.

I have another problem I'm working on now that has to process in stages. The C library I got it from processes data in large buffers and spends most of its time on cache misses. If it was written with lazy evaluation or something equivalent, then it would be much faster.

I translated the C program to Lua and I'm going to change it to process the data as streams with coroutines. That's not as automatic as lazy eval, but Lua has coroutines, not laziness, and it will be more efficient in that it can reuse tiny buffers instead of having to garbage collect like lazy eval.

The nice thing about lazy eval for this is that the program would let you reorder the problem in the most concise way. That's useful too.

Laziness is a failed experiment

... or perhaps a successful experiment with a negative outcome.

Laziness has a few really nice results, such as: `kmin k = take k . sort` being an optimal implementation of the k-minimum problem (sorting only as much of the list as needed). However, laziness doesn't help at all if you need to return any success/failure result (which is not uncommon, e.g. when parsing). And, too often, laziness blows up.
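
A toy sketch of that claim (assuming GHC's lazy Data.List.sort; the example is only illustrative):

import Data.List (sort)

-- take k . sort only forces roughly the work needed to produce the
-- first k elements, rather than fully sorting the list
kmin :: Ord a => Int -> [a] -> [a]
kmin k = take k . sort

main :: IO ()
main = print (kmin 3 [9, 2, 7, 1, 8, 4, 6])  -- prints [1,2,4]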

I consider the development and adoption of iteratees and successors (conduits, pipes, machines) to effectively mark the end of laziness outside of niche applications. Oleg Kiselyov demonstrated how easily a wordcount program blows up with laziness, and argued convincingly for processing models with a predictable space requirement. All Haskell web application frameworks use explicit chunking rather than relying on laziness.
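
A familiar small illustration of that kind of blow-up (the classic left-fold thunk build-up, not Oleg's wordcount example):

import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks for the whole list before
-- forcing any of them (unless the optimizer rescues it); foldl' forces
-- the accumulator at each step and runs in constant space.
sumLazy, sumStrict :: Integer
sumLazy   = foldl  (+) 0 [1 .. 10000000]
sumStrict = foldl' (+) 0 [1 .. 10000000]

main :: IO ()
main = print sumStrict   -- forcing sumLazy instead is the blow-up demo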

Purely functional data structures that don't rely on laziness have also come a long way. Especially important, IMO, was the development of finger-trees in 2006, which allow O(1) deques, O(lg(N)) concatenation or split, and an effective model for vector-like structures. I don't believe there is any need, at this time, for laziness to make FP practical.

Sigh

I've been a programmer since the 80s... But I use, say C++, Java, scheme, ruby, Prolog. I don't actually LIKE Haskell, though I have nothing against ML. How long do I have to program in Haskell or Scala before I can make head or tail of a presentation like that?

a few hours

Haskell has a relatively long learning curve compared to typical imperative languages, but most of it is for new concepts like grokking monadic interfaces for the first time. While learning rates can vary, typical experiences are around sixty hours to become proficient. Only a few of those would be dedicated to grokking any particular aspect. The best way to learn machines would be to implement a few.

Finger trees are a bit trippy. But you don't actually need to grok them in order to use them. Just import Data.Sequence, and learn an API.
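
For instance, a small usage sketch of that API (standard Data.Sequence operations from the containers package):

import Data.Sequence (Seq, (<|), (|>), (><), fromList)
import qualified Data.Sequence as Seq

main :: IO ()
main = do
  let s = (0 <| fromList [1, 2, 3]) |> 4   -- cheap add at either end
      (front, back) = Seq.splitAt 2 s      -- O(log n) split
  print (front >< back)                    -- O(log n) concatenation
  print (Seq.index s 3)                    -- O(log n) indexed access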

Postponed execution is a good way to do co-routines

Postponed execution is a good way to do co-routines. For example,

Fringe.[aTree:Tree◅aType▻]:FutureList◅aType▻ ≡ 
  aTree � Leaf◅aType▻[x] ⦂ [x]
           Fork◅aType▻[tree1, tree2] ⦂ [⩛Fringe.[tree1], ⩛Postpone  Fringe.[tree2]]

zippers and streams

Postpone is, more or less, just another way to express laziness.

For fringe, we can do it just as efficiently by using a zipper to walk the tree, then wrapping it within a head-strict stream (e.g. type Stream a = Done | Next a (Unit → Stream a)) to hide the zipper state.

An intermediate zipper is useful because we cannot efficiently append such a simplistic model of streams. However, there are stream models that support efficient append, in which case we could skip the zipper. Strict, pure FP does force developers to think carefully about data structures.

"Postpone" can be used to explicitly mark expressions

David,

Postpone can be used to explicitly mark expressions whose execution can be postponed.

The "zipper" program above is incredibly obscure and does not implement Fringe.

Regards,
Carl

Why do I even bother?

I said we can use zipper to walk the tree. (That's what zippers do. Zippers walk trees.) I did not suggest that a zipper directly implements fringe all by itself. But all you need for 'fringe' is to walk a tree and output the leaves. And we can do this iteratively, e.g. for efficient 'same-fringe', via a stream.

I don't see how the 'incredible' obscurity of zippers is relevant. If I thought obscurity was relevant, I'd call you out on the far more extreme obscurity of ActorScript, which lacks even a public implementation.

Elegant, efficient implementations of "Fringe"

It would be interesting to see what you think is an elegant, efficient implementation of Fringe.

Sure

Here's an implementation that doesn't depend on laziness.

type Tree a = Leaf a | Node (Tree a) (Tree a)
type LTZ a = (Tree a, [Tree a]) -- left-biased zipper

ltzStep :: LTZ a → (a, Maybe (LTZ a))
ltzStep (Leaf a, ts) = (a, uncons ts)
ltzStep (Node l r, ts) = ltzStep (l, r:ts)

fringe :: Tree a → Stream a
fringe t = stateToStream (fmap ltzStep) (Just (t,[]))

I expect streams, stateToStream and uncons, would be available as generic utility functions. But I'll add them below for completeness.

type Stream a = Unit → Maybe (a, Stream a)

stateToStream :: (st → Maybe (a,st)) → st → Stream a
stateToStream fn st Unit = case (fn st) of
  Just (a,st') → Just (a, stateToStream fn st')
  Nothing → Nothing

uncons :: [a] → Maybe (a,[a])
uncons (x:xs) = Just (x,xs)
uncons [] = Nothing
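
For illustration, a small consumer in the same informal notation would drive the stream like this (streamToList is just an example helper):

streamToList :: Stream a → [a]
streamToList s = case (s Unit) of
  Just (a, s') → a : streamToList s'
  Nothing → []

-- streamToList (fringe (Node (Leaf 1) (Node (Leaf 2) (Leaf 3)))) gives [1,2,3]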

Surely you are joking!

David,

Surely you are joking!

You can't possibly believe that the above is a concise, elegant, efficient implementation of Fringe.

Regards,
Carl

6 lines

The code dedicated to 'fringe' is six short and comprehensible lines of code, including type declarations, after we discount the generic code and the tree definition. I could probably make it shorter, but I'm not sure how well you grok point-free code. And it is efficient: each part of the tree is walked only once, and the number of allocations is small and predictable for each step.

So, no, I'm not joking. And my name's not Shirley.

If this is the best, then "zippers" will never gain traction

David,

If this is the best that can be done, then "zippers" will never gain traction.

Regards,
Carl

PS. Also, you left out types from your implementation :-(

Meh

Carl,

With all due respect, you are an idiot.

Regards,
not John (Carl has since corrected his address from John to David)

PS. Also, you left out types from your implementation. :-(

Insults are not good argumentation

Dear notJohn,

Insults are not good argumentation.

Regards,
Carl

I'm sure

Carl,

Thanks for telling me this. I acknowledge your massive experience and expertise with bad argumentation.

Regards,
David

Inconsistency Robustness provides a theory of argumentation

The book "Inconsistency Robustness" provides a theory of argumentation about inconsistent information.

It is left as an exercise to the reader to apply the theory to this particular case.

exercise left to reader

Thanks for telling me this. I always suspected your arguments on LtU were exercises in inconsistency and our robustness against it.

Unfortunately, you didn't get a good grade on the exercise

David,

Unfortunately, you didn't get a good grade on the exercise.

Regards,
Carl

Fully typed implementation of "Fringe"

John,

Contrary to your claim above, the following implementation (re-iterated from above) is fully typed:

Fringe◅aType▻.[aTree:Tree◅aType▻]:FutureList◅aType▻ ≡ 
  aTree � Leaf◅aType▻[x] ⦂ [x]
           Fork◅aType▻[tree1, tree2] ⦂ [⩛Fringe◅aType▻.[tree1], ⩛Postpone  Fringe◅aType▻.[tree2]]

Types

Carl,

I don't see the implementations and type declarations of FutureList, nor of Postpone, nor even of Tree. So, it appears to me that you have left types and other content out of your implementation.

Regards,
not John

Elegant, concise, efficient implementation of "Fringe"

The above is an elegant, concise, efficient, fully-typed implementation of Fringe.

Of course, FutureList is a type and the implementation of Postpone can be found in another recent post to LtU (below).

The definition of type Tree◅aType▻ is dead trivial and can be found in the referenced article on arXiv.

Elegant, efficient, pure, strict implementation of "Fringe"

The few types missing from my own implementation - Maybe, Unit, [a] - are also 'dead trivial', and very generic.

type [a] = (a:[a]) | []
type Maybe a = Just a | Nothing
type Unit = Unit
fmap :: (a → b) → (Maybe a → Maybe b) -- in instance used

And I'm not sure why you consider six short lines (three of them for simple types, which are easily inferred and thus could be removed) to be something other than 'concise', but I expect that's another exercise for your involuntary students in inconsistency robustness.

Example of obscurity that will never gain traction

David,

The following kind of obscurity from your implementation is never going to gain traction:

ltzStep :: LTZ a → (a, Maybe (LTZ a))
ltzStep (Leaf a, ts) = (a, uncons ts)
ltzStep (Node l r, ts) = ltzStep (l, r:ts)

fringe :: Tree a → Stream a
fringe t = stateToStream (fmap ltzStep) (Just (t,[]))

Regards,
Carl

obscurity

I'll acknowledge your above argument the moment you promise to never again, on LtU, provide code samples from obscure languages that lack widespread public implementations.

Concepts and principles are more relevant than implementations

Concepts and principles are more relevant than particular implementations.

However, meta-implementations (like the one for Postpone below) can be helpful.

Also, obscure code is not a good thing regardless of whether or not it has been implemented.

* Laziness is a cost people pay for purity.


laziness and purity

Laziness isn't necessary for purity. It's very easy to develop pure languages without laziness. Much easier than adding laziness.

Purity is pretty useful for laziness, though. I have read that laziness was a design goal of Haskell, and purity was a design decision to make it viable. Though, I think this was a reddit comment, so I don't really trust it, and I haven't bothered verifying.

I agree with the second point

Indeed, I would strengthen it: a pervasively lazy language without purity is a hot mess. Of course, you can add selective laziness to impure eager functional languages (Scheme, Pure, ML) without bad effects, as long as you are careful.

Preference for explicit over

Preference for explicit over implicit is, of course, one of my arguments for fexprs instead of macros/quotation.

It's interesting that you consider fexprs explicit and quotations implicit, when I would classify it completely the other way around. Quotations are explicitly classified by type, and you can't invoke them using the normal evaluation methods, i.e. you can't confuse a function value with a quoted function by invoking the latter as you would the former.

Unless I've deeply misunderstood something about fexprs, every expression is implicitly a quotation, and normal evaluation implicitly conflates the two, so you can easily confuse the two by triggering an expensive evaluation where one wasn't expected (akin to laziness).

Yeah what he wrote does seem sort of backwards

although if lazy evaluation is the default, like in Haskell, that's something very heavyweight that's implicit.

John's notation marks fexprs with a $, but he didn't go so far as to make it a runtime error for a variable without a $ to hold and apply a fexpr, so they're only explicit on a voluntary basis.

Meta is preferable to Fexpr

Meta is preferable to Fexpr. For example Postpone can be defined as follows:

⦅“Postpone”  anExpression:Expression◅aType▻⦆ ≡
   Actor implements Expression◅Future◅aType▻▻ using 
     eval[anEnvironment]→
        Future Actor implements aType using                                  
                      aMessage → 
                        Let postponed ← anExpression.eval[anEnvironment]。
                          postponed.aMessage   
                              become postponed

Would it be too much

to ask you to break down and explain the code you post?

You're not using a notation that I understand, and I'm sure that I'll never be able to make head or tail of it.

Explanation of code is published

Josh,

Good question!

An explanation of the code is published on page 43 of ActorScript.

I will be happy to answer further questions that you might have after you have read the article.

Cheers,
Carl

Thank you, I'll read that

with luck, today.

every expression is

every expression is implicitly a quotation

No. You're thinking of quotation as doing something more to its operand than would have been done if the operand had been evaluated. It's more useful to think of quotation as doing less. From that perspective, fexprs don't implicitly quote anything; rather, applicatives (non-fexpr procedures) explicitly cause their operands to be evaluated — and then apply an underlying fexpr to the results of the operand-evaluations. Hence the title of my dissertation: "Fexprs as the basis of Lisp function application".

My experience while developing my dissertation was that, in practice, accidental bad hygiene only happened if one made the mistake of introducing explicit quotation. Writing code in terms of doing things, rather than in terms of not doing things, avoided getting tangled up.

Then there's the mathematical side of things. If you set out to construct a rewriting system for the semantics of Lisp with fexprs/quotation, and you base it on the premise that all expressions are to be evaluated unless they occur in a context that says "but don't evaluate this", you will end up with a trivial equational theory. If, on the other hand, you start with the premise that source expressions are just data, and are only evaluated if they occur in a (necessarily, non-source-code) context that says "evaluate this", it's quite easy to build a rewriting calculus that has a strong equational theory, Church-Rosser-ness, etc. (Fwiw, I did write a blog post about that back in 2013, Explicit evaluation.)
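
As a concrete illustration of that premise (a toy sketch in Haskell, not Kernel; names like wrap and operandCount are just for the example): source expressions are plain data, a combiner receives its operands unevaluated, and an applicative is just a combiner wrapped so that it explicitly evaluates its operands first.

import qualified Data.Map as M

-- Source expressions are just data.
data Expr = Sym String | Num Int | List [Expr]

-- A combiner receives its operands unevaluated.
data Value = VNum Int
           | Combiner (Env -> [Expr] -> Value)

type Env = M.Map String Value

eval :: Env -> Expr -> Value
eval _   (Num n)  = VNum n
eval env (Sym s)  = maybe (error ("unbound: " ++ s)) id (M.lookup s env)
eval env (List (f : operands)) =
  case eval env f of
    Combiner c -> c env operands   -- operands handed over as data
    _          -> error "not a combiner"
eval _   (List []) = error "empty combination"

-- An applicative is a combiner wrapped so that it explicitly evaluates
-- its operands first; the "doing more" lives here, not in quotation.
wrap :: ([Value] -> Value) -> Value
wrap f = Combiner (\env operands -> f (map (eval env) operands))

baseEnv :: Env
baseEnv = M.fromList
  [ ("+", wrap (\vs -> VNum (sum [n | VNum n <- vs])))
    -- an operative that inspects its operand list without evaluating it
  , ("operandCount", Combiner (\_ operands -> VNum (length operands))) ]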

Meh quote is just broken since Lisp added static environments

If you redefined quote to do what your fexpr call does, return a list AND the local environment, then the problems go away.

So if quote wasn't totally broken then there's no argument. Yes quoting is implicitly broken, but if it was redefined to work there is no problem.

Macros are sort of currently broken, but they're broken in a way that only affects people who write or maintain macro libraries. It's similar to the fact that C++ templates are horrible, but that only affects people who write a template library, not the people who use one.

The new hygienic macro systems that replaced the older macro substitution systems are much less expressive than what they replaced, they're plagued by kludges, and they hide the wrong information. At least the versions in Scheme are broken (I think Clojure has a macro system that no one writes about but which is much better). I realize that's a provocative thing to say - it really demands a long essay to support, but it's true. These systems are very, very poorly engineered, very incomplete, very inexpressive.

The worst code in the world is what happens when they need a non-hygienic macro, like what it takes to make instance variables work inside of method definitions in an object system. Look at the code for that in Racket, but have the suicide hotline on speed dial. Also, no matter how long you look at it, it won't make sense; it's 100% kludge.

In these systems symbols are no longer atoms, and they have hidden information, some of which you can't examine or change or even set reasonably...

But even in situations that aren't rare, the macro system is broken. Take a look at the boilerplate you need for make-set!-transformer.

So yeah I'll grant you that macros are broken, and that's also sort of implicit.

But well-engineered macro libraries are possible. And it would take so much less engineering to make a macro system work well than it would take to make the equivalent fexprs fall out at compile time (which you really want sometimes) that macros have to win over fexprs.

smoothness

Macros inherently limit the abstractive power of a programming language, per the smoothness conjecture (as best I recall, the most complete discussion I've written of the smoothness conjecture, outside my dissertation, was in my blog post on abstractive power).

It's interesting you say quotation is broken by (if I'm understanding you rightly) static scope. One of the things I found ironic about the history of Lisp is that fexprs are most ill-behaved and least useful in dynamically scoped Lisp, whereas lexical scope enables better behavior and makes them much more useful; yet, fexprs were abandoned in favor of macros just as the lexical scope that would have made them viable was coming in. That doesn't mean fexprs would have been done right if they'd been retained in mainstream Lisp, of course; sometimes you have to make mistakes in order to learn. But fexprs are a much less interesting feature under dynamic scope.

Macros (although awful) play well with compilers; Fexprs don't

Macros (although awful) play well with compilers; Fexprs don't play so well with compilers.

more of a challenge, yes

Static optimization is more of a challenge, certainly. There are things that can be done to make fexprs more optimizable, some of which I've done in Kernel. Key is that you need to be able to limit who will be able to modify an environment; Kernel makes provisions for that. When the compiler knows the bindings are stable, it knows which subexpressions will and won't be evaluated. Unfortunately, I haven't been in a position to pursue the static optimization of Kernel myself, so I don't have hands-on experience of exactly how far it does (and doesn't) extend.

granted

I'm in a bit of a hurry so I haven't read your link, I assume it points out that macros destroy higher order functional programming. I think I may have followed that link before.

I know that macros are limiting.

I also know that quote and fexprs were somewhat broken even before the switch to static environments, they were just MORE USEFUL and LESS BROKEN.

If you can't capture an environment and pass it around and eval using it then any quoting/fexpr mechanism is broken. If you can pass around environments, then the distinction between static and dynamic environments goes away - code run in an environment saved for it, IS in a static environment, it's merely explicit.

When the environment was dynamic, then you could quote an expression and eval it - and as long as you were in the same environment, the meaning of the symbols hadn't changed. You could pass it to a fexpr and, as long as the parameter and variable names in that fexpr were ridiculous enough, it would be safe to eval it with the meaning intact. But quoted expressions weren't really first-class values, because their potential evaluated meaning changed when the outer environment changed. You can't store one and eval it at an arbitrary time in an arbitrary place.

Better than the static version, but still broken.

Someone who defends the hygienic macros would point out that passing around an environment isn't enough, if you intend to mix in code - you need the mixed in code to have its own environments and you need to keep those separate from the environment of the original code. The system used in scheme (broken as I said) was to use symbols that have their bindings implicit... Not inherently bad, but very badly implemented.

I'm sure that Kernel's solution is to pass around environments during evaluation so that each level of evaluation happens in an appropriate, possibly specialized environment - and to alter the environment when necessary. That all works and isn't broken like Scheme, you don't have to explain any of that to me.

One of the weirdnesses with the scheme macro system is that when they went from the old macro system to the new one, composing macros broke. So you couldn't compose macros made with the old system into macros made with the new one. I wish I could remember an example.

So I guess there's a compile time equivalent to higher order functions, higher order macros, and at least you want those to work.

I also know that quote and

I also know that quote and fexprs were somewhat broken even before the switch to static environments, they were just MORE USEFUL and LESS BROKEN.

It sure seems like we're talking about different features. My experience is diametrically opposite to what you say here: dynamically scoped fexprs are relatively useless, whereas statically scoped fexprs are immensely powerful — if handled well. Specifically, if the local environment of a fexpr call is its dynamic environment, then the body of the fexpr has no reliable meaning, whereas if the local environment is its static environment, it may have a stable meaning, depending on how stable its static environment is (and I've tweaked the design here and there to help with that). The dynamic environment is provided in the same way as ordinary parameters, i.e., it's bound to a symbol in the local environment of the call.

If you can't capture an environment and pass it around and eval using it then any quoting/fexpr mechanism is broken.

That's pretty much true, yeah. But dynamic scope is a poor way to make the dynamic environment available to a fexpr; passing it as a named parameter to a statically scoped fexpr is more effective.

look

in a dynamic lisp
(let ((x (eval (quote (BLAH.BLAH.BLAH)))))
  x)

has the same result as
(let ((x (BLAH.BLAH.BLAH)))
  x)

where BLAH.BLAH.BLAH is some code referencing the local environment.

to be more specific
(let ((y BLAH)) (let ((x '(list y))) (eval x)))
is the same as (list BLAH) in a dynamically scoped Lisp, while in a static one, it can't resolve y.

In a dynamic environment, you can use quote to give you access to the structure of code, and eval it. You can take that code apart, change it, and eval it as long as you eval it at the same level you quoted it, then the meaning of the variable reference hasn't changed.

In a static environment, the eval only uses the top level environment. You can't use it the way you could a dynamic macro. The old way, you had BUGS due to aliased variables and you couldn't store quoted code and expect it to work at other levels (not first class) but OTHERWISE you could make it work. The new way it simply doesn't work at all.

About the Common Lisp misfeature

About the Common Lisp misfeature called (rather perversely) by the name eval, I suspect we are in full agreement. Our difference in that regard would seem to be only on whether that has anything to do with static scope. In order to be remotely useful for, well, anything, eval needs to be all about empowering the programmer to control the environment in which the evaluation will be done.

I've observed that it's ridiculously error-prone to use eval without explicitly specifying which environment to use; so in Kernel, eval requires a second argument specifying the environment.

It became clear to me pretty early in my exploration of Kernel that the easiest way to demonstrate various kinds of hygiene violation is to use quotation. In fact, I found quotation (to say nothing of quasi-quotation) makes it insanely easy to produce accidental bad hygiene. Like eval without an explicit environment argument, quotation is just plain error-prone. Kernel has no syntactic sugar for quotation, and $quote is not provided as part of the standard language. It is of course dead easy to define $quote as a fexpr, but programmers are strongly encouraged to not do so. Explicit evaluation isn't just a language design principle, it's a programming style (of course, following the principle that programming is language development, there's really no difference between language design style and programming style). For example, in my dissertation when writing a Scheme evaluator in Kernel, it's necessary to check an operator to see if it's some particular reserved symbol (such as if or define). I do that with

($define! $make-tag-predicate
   ($vau (tag) #ignore
      ($lambda (x) (eq? x tag))))

The unevaluated operand never escapes from the local environment of the call to $make-tag-predicate.

It took me a while, looking at your last example,

(let ((y BLAH))
  (let ((x '(list y)))
    (eval x)))

to remember that Common Lisp has a feature called eval that always uses the standard environment to do the evaluation. The memory, once reacquired, is a vivid one; when I first discovered that about Common Lisp, I immediately thought some really altogether impolite things about the putative intelligence of the designers of Common Lisp. The cleaned up version would be  (1) why would anyone bother to include such a useless feature in the language, and  (2) if one did choose to include such a useless thing, it's actively deceptive to call it by the same name as a useful eval feature common to various better-designed Lisps. This underlines the value of the Kernel requirement to provide an explicit environment argument. In Kernel —if one were to write this at all, which one really oughtn't since it uses explicit quotation— one would write

($let ((y BLAH))
   ($let ((x ($quote (list y))))
      (eval x (get-current-environment))))

substitution and renaming

John, I'm a fan of your work on Kernel, but Josh's comments totally ring true. I feel like I'm watching two smart people talk about the same thing right past each other, but I'm not sure how to help you connect.

I've mentioned Mathematica to you a few times when discussing fexprs, but I think I have to mention it again now. It's a rewriting language with (primarily) dynamic scope. For the most part, the only actual mutation that happens is of the global map of symbol bindings. However, explicit renaming in the semantics of various binding forms gives the illusion of static scope.

Having said this, moving symbolic expressions between contexts is extremely natural in Mathematica. Furthermore, non-standard evaluation is very common in library functions (and is even available to users) via a not-quite first-class mechanism: You add a global attribute to a global symbol to distinguish between applicative-order evaluation and fexpr-like non-evaluation of arguments (called HoldAll, etc).

In a sense, Mathematica is entirely built out of dynamically scoped fexprs. And it's anything but useless.

The smoothness paper...

Taking a glance, I see that it isn't about what I assumed.

It's not about how macros can't be substituted for functions as parameters to higher order functions, it's (I guess) about how macros that need to examine the code they transform are brittle, breaking if the underlying language accumulates new macros.

I say "I guess" because I didn't read the second half (I'm saving time) and I somewhat doubt my ability to follow the sort of notation and mathematical knowledge you use in it anyway.

I'm very disappointed in the limited way people use macros. Your problem doesn't come up in practice, because no one writes code-walkers. Or only the original compiler writer of a Scheme system writes one (for optimization phases), and it's never made abstract, useful, or documented for users. I know there are code-walker facilities in Racket, for instance, but no one will ever live long enough to understand how to use them.

That's a problem I'm interested in. I want to write a metaprogramming system.

There is more than one sort of facility that would help.

For instance, it should be obvious that you don't have to recognize and handle any new synthetic transformers if you expand those transformers in your input before you walk it. There are facilities for that in Racket.

I suppose I need deeper thought on this, but you could have an order on transformers, where they are expanded in an order more like their defined order than the order in which they show up in the code.

Not managing...

Well, the problem with shared state and concurrency is that it creates combinatorial nightmares out of simple programs.

You have to be aware of what you've stepped into.

And the only solution if you write a program at that level is a very arduous proof. It's very advanced engineering.

But it's useful too, because some level of the system has to be written like that if you want it to be as fast as possible.

So you need tools so that your routine problems don't blow up into that kind of mess. In the case of concurrency, the easiest tool to use is to have queues to communicate and avoid sharing.

yes, queues

I reply only because I like your last sentence, which I agree with, despite it implying that concurrency is needed for sharing to be a problem. (I see a lot of sharing problems in single-threaded code, especially with crazy sub-systems failing to agree on invariants; serial conflict can have results similar to concurrent conflict.) I may not follow up again -- lately I aim to minimize how bored I get.

When I say manage I mean by someone, sometime: maybe the programmer or maybe the runtime. Folks often lately seem to assume it means authoritative runtime control. Since I'm sorta anti-authoritarian, that seldom occurs to me. Absence of management is random or uncontrolled, like covering your eyes and throwing darts. (Should I worry about timing? Oh, it will probably work out fine.)

I was trying to suggest specific things like threads are not evil in themselves, and that creating messes is the problem. Any given tool with the potential to cause a mess can be used in a more controlled fashion, like the one you mentioned: put some order into how things interact (queues are a good idea) so what happens will follow rules a programmer is likely to grasp and reason about correctly.

The subject of this post was going to be a standard punchline: "this is why we can't have nice things." But I was afraid you'd take it as criticism. Actually it's just an ironic observation that folks like to outlaw things that ever caused a problem. From now on we only use spoons, because knives and forks cause injury when wielded by lunatics.

I'm done, so this last bit is a random aside. In the early 90's I used to participate in arguments with folks who wanted to make all objects thread-safe, because this would magically fix all concurrency problems. I was in the camp that tried to explain with counter examples. We'd say, look, when you take two of these thread-safe objects and use them together, where some invariant must be maintained, now you need yet another synchronization mechanism to protect that. It doesn't help that individually the objects were thread-safe. What you want to do is design so domains separate things to minimize conflict, not slather locks all over everything (which would be slow). In other words, interaction should be planned, as opposed to making all possible future interactions safe through a magic bullet. Messaging and immutable state are great management tools that complement locks when locks are the right (simpler and effective) answer. But I don't have conversations like this any more, because they're really boring. It's like explaining for the Nth time that local variables are gone after a C function returns. Anyway, thanks for steering folks toward queues.
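
A toy sketch of that "invariant spanning two thread-safe objects" point, using Haskell's STM (the account/transfer names are hypothetical): each object being safe on its own doesn't help; the joint invariant needs its own atomic step.

import Control.Concurrent.STM

type Account = TVar Int

-- Each account is individually "thread-safe", but doing the two updates
-- as separate atomic steps lets another thread observe the broken
-- invariant (the total momentarily off by n) in between.
transferUnsafe :: Account -> Account -> Int -> IO ()
transferUnsafe from to n = do
  atomically (modifyTVar' from (subtract n))
  atomically (modifyTVar' to (+ n))

-- Planned interaction: one atomic step covering both updates.
transfer :: Account -> Account -> Int -> IO ()
transfer from to n = atomically $ do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)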

My first reaction to Clojure

was to be appalled that it was pushing immutable data structures.

At the time that seemed to me like replacing all of your silverware with spoons.

Now that I have experience with Racket deprecating mutable cons cells, and realizing that I rarely need them, I'm a bit less appalled, but in Racket I DO use mutable structures when I need them, and I found myself replacing all of the routines that were typed to use immutable cons cells with routines that don't care. I don't know if you can tell Clojure's library designers to go hang that way.

java.util

java.util already has lots of wonderful mutable data structure implementations. Real Clojure code can and does use them!

However, the important thing is changing the default.

Immutable collections are also available in other libraries, like Guava. Although, 1) Clojure's implementations are very good, if not strictly better, and 2) they are the default. Being the default means that everybody will use them without thinking, and being good means that nobody will avoid them unless they have to for a particular use case.

Actors should communicate directly and not use queues

In general, Actors should communicate directly using addresses and not be forced to use queues as intermediaries.

I don't really understand

I thought actors was a model of computation.

I remember someone posting a model of a locked, shared memory location as an actor.

The actor accepted a message of what its value was supposed to be, responded to queries of that value and looped through recursion.

It certainly had the same semantics as memory location that's only accessed through locked instructions, but no one would ever actually implement it that way.

I started my programming life with assembly language, my underlying model isn't mathematical, it's what instructions will execute, and what the state of the caches will be...

So when you say "no queues" I imagine that you're you're insisting on a locked 1-message box at all times. Well, that's very inefficient. A thread safe nonblocking queue is already pretty inefficient. If we can't afford to wait for an actor to be ready for a message then in your system we'll just end up building a queue out of actors right? And that will be even less efficient. Why can't you have an actor that has a queue as one of your types so that we can have a more efficient implementation?

Queues can be implemented as Actors with "put" and "get"

Queues can be implemented as Actors with "put" and "get" messages.

In general, it is very inefficient to use such queue Actors as intermediaries.

Is the point here

to use actors as a model when you analyze code or a machine but not as an actual way to implement that code or machine?

ActorScript versus Actor Model

Josh,

Good question! Roughly speaking,
* Actor Model for modeling any kind of (concurrent) computation
* ActorScript programming language for implementing Actor Systems

A random question about actors

This is a quote from the wikipedia article on you (don't blush):

Sussman and Steele developed the Scheme programming language in an effort to gain a better understanding of the Actor model. However, their Scheme interpreter was not capable of fully implementing the Actor model because Actor customers cannot be implemented as lambda calculus continuations and Actors can change their local state in a way that is impossible in the lambda calculus

I want to know what could possibly be hard to do in scheme that you need for actors.

Of possible interest

Dale Schumacher at It's Actors All The Way Down explored the Lisp/Actors relationship with a series of four posts on Kernel, starting with Fexpr the Ultimate Lambda.

Definitive article on Actor Model

The currently most definitive article on the Actor Model is the one published in the book.

What language is he writing that in?

It seems to be made of
CREATE WITH LET SEND CASE OF BECOME END \.:[]#

I guess I can figure out what it means, but it would be easier to not have to guess semantics entirely from context.

Lambda calculus is incapable of implementing actors effectively

The lambda calculus is incapable of implementing actors:
* Some Actor computations are impossible to implement in the lambda calculus
* Many practical Actor systems are exponentially slower when implemented in the parallel lambda calculus.

So Wikipedia was wrong?

And there wasn't a problem with Scheme per se?

Or maybe Scheme was missing real threads (though a lot of implementations do have them).

I'm not sure what "parallel lambda calculus" is, but if something can't be implemented with the same big-O efficiency in a modern Scheme like Racket, then it can't be implemented efficiently on a general-purpose CPU at all.

Anonymous Administrator censorship means articles can be wrong

Unfortunately, because of anonymous Administrator censorship, Wikipedia articles can be uncorrectable.

You can try this out by trying to correct errors yourself.

I don't know enough about the topic to correct it.

I did manage to correct one the other day, but the "error" was a snark introduced by another anonymous person who had an axe to grind. If they were censoring that particular article they wouldn't have had the error in the first place.

You could correct it. Since the article is about you, you could publish something on your own blog then use that as the citation ;)

One gets the impression,

One gets the impression, from things said here (LtU) and things said there (Wikipedia), that Hewitt has some sort of topic ban on Wikipedia; that is, he's not allowed to contribute to Wikipedia on certain topics. If he's complaining about some content and not fixing it himself, likely that's because the content is covered by the topic ban. The implication in such a case would be that they don't trust him to exercise objective judgement on the topic.

Censorship by anonymous Wikipedia Administrators

John,

It seems that you are going on the record in favor of censorship by anonymous Wikipedia Administrators and that you place great personal trust in these anonymous Administrators and their kangaroo procedures.

Why are you not advocating reform of the current dysfunctional scandal-prone Wikipedia organization? For example, what do you think of the reforms that I proposed in Corruption of Wikipedia?

Regards,
Carl

not helping your case

Please stop complaining about Wikipedia here.

Even if you're right (and no one is granting you that), you're way off topic and it's quite tiresome.

I'm well aware of who you are and your accomplishments, but frankly, you need to find somebody with some tact to proofread your written communications and let you know how they will be perceived. A colleague did that for me early in my career and it greatly helped me to be 1) better liked and 2) more persuasive. Better late than never.

Turns out the impression

Turns out the impression I described here was somewhat optimistic. Whatever problems there may have been with objectivity, it appears that's not what he was banned for. His account on Wikipedia (I finally found it) is evidently User:CarlHewitt, and it's been blocked indefinitely for sockpuppetry. That is, for pretending to be a bunch of different people.

Academics and researchers have had hard times with Wikipedia

John,

Academics and researchers have had hard times with Wikipedia by being attacked and vilified by Wikipedia Administrators as documented in the article Corruption of Wikipedia and numerous other publications referenced in the article. For example, a Wikipedia Administrator recently insinuated that I am "Bozo the Clown" on the talk page of my Wikipedia article.

I suspect that Wikipedia Administrators will find it increasingly difficult to maintain their censorship. It is extremely valuable for students to be getting experience early in their careers with this kind of censorship by anonymous Wikipedia Administrators :-) We will see what kinds of means the students will devise to evade the censorship of anonymous Wikipedia Administrators. This is analogous to students evading anonymous censors working for the Great Censorship Wall of China.

Unfortunately, there always seem to be apologists like you for this kind of censorship :-(

Regards,
Carl

Please help repair Wikipedia biography referenced by John Shutt

Please help repair Wikipedia biography referred to above by John Shutt using Current information on Professor Hewitt.

Because of Wikipedia censorship, it will take some work to carry out the repairs :-( It will be necessary to have an account and do some editing first.

Don't be afraid of Wikipedia Administrators!

note

People, please note, Hewitt is asking you to do something unethical; it's called meatpuppetry.

An encyclopedia that *you* are supposed to be able to edit

Wikipedia is advertised as an encyclopedia that you are supposed to be able to edit.

So, make up your own mind and don't be intimidated by censors, who will often call you names like "unethical" for exercising your rights.

Wikipedia's corrupt nature

Hewitt, you aren't the first or last person to ever notice that Wikipedia has some corruption baked in. It has a pretense of a neutral point of view but no such thing exists. It has a pretense of being something "anyone can fix it," but it is dominated by a bureaucracy.

You aren't saying anything controversial calling them out on this.

I don't necessarily agree with the edits you want to make but that's neither here nor there. If it isn't your edits they wrongly refuse, it is someone else's. Sometimes they also let slip through edits that really don't belong there.

Right now, in spite of all that, they are a kind of "worse is better" good enough for many purposes.

In the long run I don't think that org will last but who knows.

Good news is that anyone can "fork" Wikipedia and someday someone will and better governance might happen then.

For now, geeze... I promise you Wikipedia does not have the power to bury you in history or to bury this more recent stuff about inconsistency robustness.

My suggestion is that you just flip them the finger and forget about 'em. Remember you did that. It's just a suggestion, though.

I should know better than to

I should know better than to comment here on Wikipedia and its problems. Oh well.

Wikipedia's corruption is often exaggerated. It's got problems, but beware of people who complain bitterly after being rightly prevented from abusing Wikipedia in some way. Not all complainants are wrong, of course, but if they complain about "anonymous editors", it's a good bet they don't understand Wikipedia.

It has a pretense of a neutral point of view but no such thing exists.

The thing Wikipedia calls neutral point of view more-or-less exists, but isn't neutral. There is such a thing as neutrality, at least as an ideal, and it's a mistake to stop striving for neutrality; but I'd agree their strategy for approximating it is flawed.

In the long run I don't think that org will last but who knows.

I'm not sure if you're talking about the organization of Wikipedia, or the organization of the Wikimedia Foundation. They're different. The Wikipedian community has flaws in its organization, but as I said, they can be exaggerated. The Foundation is a bit clueless; they keep doing stuff they claim (presumably earnestly) is meant to nurture broader volunteer contribution to the project, but because they don't really understand wikis or the volunteer community, they do more damage than good.

Good news is that anyone can "fork" Wikipedia and someday someone will and better governance might happen then.

From what I've heard about forks, the key question is whether the vital spark of the community goes with the fork, or stays with the original project. If there's a successful fork of Wikipedia, it'll succeed because it was initiated by the Wikipedian community — so likely the fork will have happened to get away from the Foundation, not to get away from the community organization. Community organizational reform would have to come from inside the community, and so would not require a fork. (Indeed, looking at the history of revolutionary overthrow of governments, a fork might result in a better organization but could easily result in a worse one.)

You should know better.

If you want to call LtU a community, I think our community issue is to be clear that Wikipedia is not suppressing Hewitt's work. I mean, yes, they are preventing his edits but no --- this is not very important to how Hewitt is recognized in the long run.

We've just been around this track too many times. Wikipedia has issues. Wikipedia really gets under Hewitt's skin. Fair enough.

Blaming the victims of censorship

Since Wikipedia has not been able to reform itself despite numerous attempts, it is unlikely to change anytime soon :-(

It would be nice if the following articles could be corrected so that I would not have to deal with questions and complaints about omissions and inaccuracies:
* Actor Model
* Carl Hewitt
* Logic Programs
* Incompleteness theorems

However, these corrections will only happen when people are willing to counter anonymous Wikipedia Administrators, who are narrow-mindedly censoring the articles. Students have attempted corrections but have been rebuffed.

It seems likely that the censorship will eventually be circumvented because it is very difficult to make censorship air tight. However, in the meantime Wikipedia censors will probably launch a much harsher crackdown because of the publication of "Inconsistency Robustness."

Victim of your own behavior

Have you asked *why* you're being censored? If we agree to call it that (and we don't).

Do you think it's because what you're writing is somehow controversial? If yes, then it's not appropriate on Wikipedia anyway.

Maybe it's because it threatens or frightens somebody. If so, who? Why? Really, I doubt it though.

However, what's the most likely reason you're being censored?

I think you're being censored because you're being an asshole.

You may not think you are being an asshole, but you are. Your tone is condescending and rude. The speed and style in which you respond is irksome. Your responses appear disconnected from the conversation, which is frustrating to people who are trying to have an earnest discussion.

Being an asshole isn't a good reason to censor you, but the good people of earth try pretty hard to ignore assholes, even when they're in the right (and again, I'm not granting that to you). You will not effect change by being louder and ruder. Try being more charming.

This is the absolute last time I will respond to you. Your presence makes LtU a less enjoyable site for me and several others that I have talked to. Please go away and only come back after you've 1) reflected on your behavior and 2) learned a lesson or two from your more tactful colleagues.

Censorship of information is the paramount issue

Although students tend to feel bad and then become angry when their contributions are censored, I suggest that they not take it too personally.

People go to Wikipedia for information. As long as students are contributing valuable, relevant information, they are on the right track.

It is important for students to learn never to call anyone bad names, even if someone else calls them a bad name. But they do need to learn how to express themselves clearly and to stand up for important ideas.

Also, they can learn a lot from their attempts to contribute, even if they are initially rebuffed. The experience will serve them well in their careers.

Single UI Thread

Isn't the model for this to have a single thread deal with all UI interaction, and then communicate with your worker thread pool via an internal message port that sends application-specific messages? If you had done it like this it should still work, as the processing threads only access objects belonging to the application, and maybe do some IO.
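Something like this, say, in Java/Swing (not the Windows API under discussion, but it has the same single-UI-thread rule; just a sketch with made-up names):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class SingleUiThread {
    // Worker pool: does the slow, application-specific work off the UI thread.
    private static final ExecutorService workers = Executors.newFixedThreadPool(4);

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {           // everything UI-related runs on the UI thread
            JFrame frame = new JFrame("demo");
            JLabel status = new JLabel("idle");
            JButton go = new JButton("go");

            go.addActionListener(e -> {
                status.setText("working...");
                workers.submit(() -> {               // hand the work off; the UI thread is not blocked
                    String result = slowComputation();
                    // post the result back to the UI thread as a message
                    SwingUtilities.invokeLater(() -> status.setText(result));
                });
            });

            frame.add(status, "North");
            frame.add(go, "South");
            frame.setSize(200, 100);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }

    private static String slowComputation() {
        try { Thread.sleep(2000); } catch (InterruptedException e) { }
        return "done";
    }
}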

No doubt

they left SOME obscure way open to write multithreaded Windows programs.

You'll note that almost no one does that. Firefox locks right up whenever it's busy. Why don't they do it the right way? Maybe they got sick of trying things that should work, but which Microsoft broke on purpose and didn't document.

Common Practice

I don't think what I am describing is some obscure way; it's the obvious way to write multi-threaded apps using message passing so that the minimum amount of code depends on the UI API. It's the same way you write apps on Android, so it's a fairly common pattern. The point is you don't own the UI objects, so they are not safe to share. It may just happen to work on the current version, but the API does not promise this, so as you found out, new versions of the OS can break the code.

In a way it is all documented, as you have to assume anything not explicitly thread-safe is not. I cannot think of one multi-threading system that enforces that all objects are thread-safe, because of the locking overhead.

I can't comment on why Firefox isn't written this way, but if I were writing a browser that's how I would do it. My browser would not lock the UI, and the threading model would work fine on all Windows versions so far (that support threading and message passing), and likely all in the future (because to the UI it looks like a single-threaded app). Speculating a bit, people get too carried away with having every part of the application parallel and using shared memory, whereas in fact a single UI thread that hands off work to background workers is fine, and message passing is far safer than shared memory.

I think it's the library that changed, not the OS

Changes to the OS code are a lot rarer than changes to (one of) Microsoft's much more obscure C++ wrappers for it.

I'm pretty sure that if I had written straight to Win32 instead of to the Microsoft Foundation Classes (MFC), everything would have kept working.

But what's wrong with communicating by posting messages to the window's UI queue? That should be inherently thread-safe and serialized, since each window has a loop that processes one message from the queue at a time. Isn't that basically an actor model?

MFC

Well, MFC is not normally thread-safe (you have to use synchronization classes). You would have to be very careful not to share objects between threads and hope they have no static properties. Realistically, using the Win32 API directly and writing your own lightweight object encapsulation around the message passing seems like one way to go.

It would appear the Windows UI event queue is not designed for multi-threaded use, and the only reason you can attach it to other threads is for backwards compatibility. Each window has an associated thread (the one that created the window), and only that thread can read and write messages to that window.

I think it's the same on Android. The reason they do it is so that single-threaded applications are not slowed down by the overhead of allowing multi-threaded applications.

Hold on

There's nothing slower than a user interface. It's controlled by people clicking mice and typing on keys.

Why the hell wouldn't it be thread safe?

And if it's possible to be posted to from other threads for compatibility, it BETTER be thread safe, because it's operating-system level. Are you going to crash the OS by posting a message to a queue?

In the future, user interfaces will be massively concurrent

In the future, user interfaces (using 3D video input and gestures) will be massively concurrent in order to be performant.

GPU virtualisation

I'm not sure how useful that will be. You need some way to share the GPU between processes. Virtualization would allow multiple threads/processes to render at the same time, but has a high overhead. Probably better to have a single system thread that reads render requests from a queue and schedules them into the GPU.

Single threaded UI will not be sufficiently performant

Single threaded user interfaces will not be sufficiently performant for future user interfaces using 3D video input with gestures because processor clock speeds will not increase significantly over time.

Single UI thread fine for gestures.

The UI thread simply handles all the ingoing and outgoing messages. This does not mean all the processing happens on this thread. You receive gesture events on the UI thread, and hand the processing off to background threads. The UI thread is not blocked; it simply receives the incoming message and dispatches an application message to the relevant thread. Likewise, sending messages to the window is not synchronous; that is, it doesn't wait for the window to execute the command. The message is queued and can be processed asynchronously in the operating system by different threads.

What having a single UI thread ensures is that there is a single consistent model of the order of events for the application. There is no performance problem, as all this thread needs to do is block until there are messages in its message queue, and queue messages for asynchronous processing on other application threads (which will respond to this thread with their own messages).

With regards to gestures, much like a mouse, there would be one gesture-recognition process that reads the camera video stream, computes gesture vectors, and sends coded gesture messages to the currently focused application's message queue. The message-queue bandwidth, and the processing load on the UI thread, would be no more than for a mouse.
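As a rough sketch of that shape in Java (all names hypothetical; the "recognizer" thread stands in for the gesture process, and the BlockingQueue stands in for the application's message queue):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class GestureLoop {
    record Gesture(String kind, double x, double y) {}   // a "coded gesture message"

    public static void main(String[] args) {
        BlockingQueue<Gesture> uiQueue = new LinkedBlockingQueue<>();  // the application's message queue
        ExecutorService workers = Executors.newFixedThreadPool(4);

        // Stands in for the process reading the camera and computing gesture vectors.
        Thread recognizer = new Thread(() -> {
            try {
                while (true) {
                    Thread.sleep(100);                       // gesture rate: on the order of a mouse
                    uiQueue.put(new Gesture("swipe", 0.3, 0.7));
                }
            } catch (InterruptedException e) { }
        });
        recognizer.setDaemon(true);
        recognizer.start();

        // The single UI thread: drain the queue, dispatch the real work, never block on it.
        while (true) {
            try {
                Gesture g = uiQueue.take();
                workers.submit(() -> handle(g));             // hand off to a background worker
            } catch (InterruptedException e) {
                break;
            }
        }
    }

    static void handle(Gesture g) {
        System.out.println("processing " + g.kind() + " at " + g.x() + "," + g.y());
    }
}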

I don't buy it.

I'm neither convinced that 3D video input will be married to a message queue of discrete events in any way that increases the needed throughput of discrete events to something massive.

Nor am I convinced of the (I'm sorry, it's ridiculous) claim that thread-safe queues are too slow.

There have been non-blocking multithread queue algorithms for decades (I put one in a database server engine a decade ago). And the synchronization instructions needed to implement these algorithms get faster and faster as all systems have converged to cluster-on-one-chip and as concurrent processing got more important.
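For what it's worth, here is a Java sketch of one such algorithm, the Michael & Scott queue (the same idea java.util.concurrent.ConcurrentLinkedQueue cites as its basis), using nothing but compare-and-set:

import java.util.concurrent.atomic.AtomicReference;

// A minimal Michael-Scott style non-blocking FIFO queue: no locks, just compare-and-set.
public final class NonBlockingQueue<T> {
    private static final class Node<T> {
        final T value;
        final AtomicReference<Node<T>> next = new AtomicReference<>(null);
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> head;
    private final AtomicReference<Node<T>> tail;

    public NonBlockingQueue() {
        Node<T> dummy = new Node<>(null);
        head = new AtomicReference<>(dummy);
        tail = new AtomicReference<>(dummy);
    }

    public void enqueue(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> last = tail.get();
            Node<T> next = last.next.get();
            if (next == null) {
                if (last.next.compareAndSet(null, node)) { // link the new node
                    tail.compareAndSet(last, node);        // swing the tail (may fail harmlessly)
                    return;
                }
            } else {
                tail.compareAndSet(last, next);            // help a stalled enqueuer along
            }
        }
    }

    public T dequeue() {                                   // returns null if empty
        while (true) {
            Node<T> first = head.get();
            Node<T> last = tail.get();
            Node<T> next = first.next.get();
            if (first == last) {
                if (next == null) return null;             // queue is empty
                tail.compareAndSet(last, next);            // tail is lagging; help it
            } else if (head.compareAndSet(first, next)) {
                return next.value;                         // dummy advances; value is in the new head
            }
        }
    }
}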

Missing the Point

The GPU response was not really relevant to our earlier conversation, but all on-screen rendering of windows is likely to be handled by the GPU in the future, for compositing etc. Vector-based UIs scale to different device resolutions better than pixel-based images, and GPUs specialise in rendering and scaling vector-based representations.

Ignore the thing about thread-safe queues being too slow. Getting one thing wrong does not refute the whole argument :-)

In any case I agree with the original statement that threads can lead to bugs (after all, threads are about sharing; a thread with no shared memory is a process). On that basis, using as few threads as necessary is the best idea, and minimising sharing between threads is also good. This means having a single thread dealing with all UI interaction that hands off work to processing threads is clearly a better model than trying to share the UI between threads, when sharing is not needed either for non-blocking behaviour or for performance.

Thread Safe

The message queues are thread safe. When you open a window, I believe only the thread that opened the window can receive and respond to messages from that window. One thread can open multiple windows, or different threads can open different windows, but each window must be associated with at most one thread.

So my original reason didn't make much sense. It could be that tracking resource ownership across multiple threads is difficult, so they don't want you to do it, or it could be they had a certain use model in mind when designing.

Let's turn this around. Why would you ever want more than one thread managing the UI? One is simpler, and programs are complex enough without adding unnecessary resource sharing.

Aspect oriented

Limited? Of course it's limited. It's a kludge meant to allow a few features that the language designer left out. It's kind of "limited monads for Java," isn't it?

Heh

I don't think anyone has a clue how to do aspect-oriented programming. It seems to me there might be something there to find that simply hasn't been found yet, though. This would indeed imply that trials aren't relevant to aspect-oriented programming, since trials can only study what exists, which says nothing about what hasn't been done yet.

It does seem that whenever you need to describe a big, complicated system, looking at it from just one direction isn't going to give you the complete picture; you want to walk all around it and see it from different angles, to understand the whole shape of the thing. From what I've seen of extant approaches to AOP, they're asymmetric: there's one primary view of the system, and then some "aspect" stuff tacked on that's basically meta. Perhaps, rather than separating out some cross-cutting concerns that are fundamentally different in character from the primary description, one should be looking for a whole different way of organizing the entire description, with better aspectual symmetry.

Behavioral programming is to AOP

The work by David Harel on behavioral programming, scenario-based models, the play engine (full book), and live sequence charts seems like a much more promising approach than conventional AOP.

Rather than interceding at weakly semantic and synchronous "join points" such as method calls, behavioral programming intercedes at a semantic model-events layer and is inherently concurrent. Behavior threads each place a 'bid' describing events they want, events they're waiting for, and events they block. A scheduling thread decides which events proceed, and signals the appropriate b-threads. At least that's the mechanism behind it. All registered threads must place their bid before any thread proceeds, so this only scales moderately, but it's enough for a lot of use cases.
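To make the mechanism concrete, here is a stripped-down, sequential toy simulation in Java of just the bid/select cycle (class and method names invented here; this is not Harel's actual framework, which runs b-threads concurrently):

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Each b-thread offers a bid naming events it requests, waits for, and blocks;
// the scheduler picks a requested-but-unblocked event and notifies every
// b-thread whose bid mentions it.
public class BProgramSketch {
    record Bid(Set<String> request, Set<String> waitFor, Set<String> block) {}

    interface BThread {
        Bid bid();                   // the b-thread's current bid
        void onEvent(String event);  // advance this b-thread when a matching event is selected
        boolean done();
    }

    public static void run(List<BThread> threads) {
        while (true) {
            List<BThread> live = threads.stream().filter(t -> !t.done()).toList();
            if (live.isEmpty()) return;

            Set<String> blocked = new HashSet<>();
            Set<String> requested = new LinkedHashSet<>();
            for (BThread t : live) {
                Bid b = t.bid();
                blocked.addAll(b.block());
                requested.addAll(b.request());
            }
            requested.removeAll(blocked);
            if (requested.isEmpty()) return;             // nothing can run: quiescence (or deadlock)

            String event = requested.iterator().next();  // select the first eligible event
            for (BThread t : live) {
                Bid b = t.bid();
                if (b.request().contains(event) || b.waitFor().contains(event)) {
                    t.onEvent(event);
                }
            }
        }
    }
}

A cross-cutting concern like logging would then just be one more b-thread whose bid waits for every event of interest, without the code that raises those events knowing anything about it.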

The theory that led to this mechanism is more involved and I haven't spent the time to really study it, but basically it's built around adding 'scenarios' or 'user stories' to a system, where these scenarios can easily involve cross-cutting interactions with multiple objects. I do recommend looking into David Harel's work.

Many grad student careers

Many grad student careers have been ruined trying to find the "answer" to the aspect problem. It is a seductive idea that doesn't pan out in practice. That doesn't mean you shouldn't try, of course. It's just a tough cookie to crack.

A professor asked me one day

A professor asked me one day if I had a dissertation topic in mind (fair question since I was by then already overdue to pick one). I mentioned abstraction theory, which I'd always had my eye on for a dissertation, but added it was a rather large topic, the sort of thing you could spend your entire career on. He said "Oh, one of those." In a sympathetic, matter-of-fact sort of way. When I assembled a dissertation committee he was on it; my committee listened to my tangle of Big Ideas circularly interconnecting to each other, pulled $vau out of the tangle and recommended I limit my dissertation to that.

Same thing happened to me.

Same thing happened to me.

random hare-brained scheme

Both side-effecting and nested-calling code could turn into a pure functional style that never calls down much, so there are only ever 2 frames on the stack: main() and the current routine(). Subroutines return a 'command'/computation/continuation for main() to perform in the next step. Then we have a zillion join points.

Also, it maybe means we can more easily grok, debug, and refactor, because all the side-effecting code becomes purely functional.
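In other words, something like a trampoline. A hedged Java sketch (names made up) of routines returning the next step as data instead of calling deeper:

import java.util.function.Supplier;

// A tiny trampoline: routines never call each other directly; they return either
// a finished value or a thunk describing the next step, and main() runs the loop.
public class Trampoline {
    interface Step {}
    static final class Done implements Step {
        final int value;
        Done(int value) { this.value = value; }
    }
    static final class More implements Step {
        final Supplier<Step> next;
        More(Supplier<Step> next) { this.next = next; }
    }

    // Example: a "recursive" countdown written so the call stack never grows.
    static Step countDown(int n) {
        if (n == 0) return new Done(0);
        return new More(() -> countDown(n - 1));   // hand the next step back as data
    }

    public static void main(String[] args) {
        Step step = countDown(1_000_000);
        while (step instanceof More) {             // only main() and the current routine on the stack
            step = ((More) step).next.get();
        }
        System.out.println(((Done) step).value);   // prints 0, no StackOverflowError
    }
}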

The AspectJ myth & its destruction by Hanenberg & others

That's not what AspectJ was sold for. And the problem it deals with, the tyranny of the dominant decomposition, is a real one: many "modules" (routines, classes) deal at once with multiple things (aspects) that don't belong in the same module, if you take separation of concerns seriously. (If anything, monads allow a very limited form of AOP, not vice versa.)

However, AspectJ only "solves" the problem if you use the wrong definitions: with AspectJ, you can split different aspects from each other syntactically, but to change anything you still need to look at all aspects. In fact, lots of claims in the AOP literature on AspectJ turn out to not hold up.

Since people used those wrong definitions for around a decade, AspectJ needed so much debunking that people did research to bash it, even though debunking papers are rare in PL!

I recommend for instance http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.19.3963. In fact, Hanenberg himself was first an AOP researcher, then he contributed to debunking AspectJ empirically. After this U-turn—a conversion on the way to Damascus—he moved to his empirical work crusade to rid PL of other such issues. See his publication list:

http://dblp.uni-trier.de/pers/hd/h/Hanenberg:Stefan

Caveat: I haven't checked my hagiology of Hanenberg with him, I just observed the titles of his papers ;-).

Testing languages would cost money

Take the current cross-language speed, memory, and lines-of-code benchmarks. They are done by volunteers, so there are very few programs in the benchmark, and languages keep dropping out as the maintainers forget some when they update the tests for a new platform. I'm not sure that the "lines of code" measure is still being done.

It's very crude.

Another note on the presentation: the swipe taken at Boo. It claimed that a quote making what I would call an uncontroversial claim, that allowing some gradual typing can be beneficial, is "refuted by the evidence." Really? It's not calling for a totally untyped language; it's saying that occasionally you might want an untyped routine. It doesn't give you untyped variables by default, and if you assign without a type it infers the type and makes it permanent.

I use dynamically typed languages myself, aware of the trade-offs, and for small programs maintained by a single person there isn't a problem with them. And I'd take a dynamically typed language for any organization over one too inflexible to let you erase a type when necessary.

Also, from this paper I learned that if I ever give a presentation I want a kitten in it.