Experiment

Can we have a polite, orderly and short discussion of the political, social, institutional aspects of the Swift language? This thread is the place for it.


Reflection and Optimisation.

The reflection would be, for example, being able to search for all memory allocations under an AST node, so that they could be grouped together.

If it wasn't a hard problem it wouldn't be interesting :-)

Reflection?

"Reflection" means to execute user code. In most languages, you cannot do that at compile time, nor do you want that. What you maybe have in mind is a form of staging, but that clearly isn't a widely applicable solution (ignoring the question whether it can work at all in this case).

Reflection

Reflection means to be able to examine the code itself or metadata about it at runtime. Compile time reflection would be an extension of that concept, and I'm not sure if there is a more widely accepted term for it.

From an OOPSLA '87 paper

From an OOPSLA '87 paper (Pattie Maes):

A reflective system is a computational system which is about itself in a causally connected way. In order to substantiate this definition, we next discuss relevant concepts such as computational system, about-ness and causal connection.

(Quote saved in my file partly as an illustration of what happens when definitions are written top-down instead of bottom-up.)

Haskell Type Classes

Haskell type classes allow compile time reflection (of a sort), and they do effectively allow a (logic) program to execute at compile time. Obviously it is better if this is a terminating computation. I wonder if applicative functors could be used to the same end in ML?
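To make that concrete, here is the classic GHC-flavoured sketch (purely illustrative, nothing Swift-specific): type class instances act as Horn clauses, so instance resolution runs a small logic program while the code is being type checked.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances, UndecidableInstances #-}

-- Peano numerals at the type level; the checker "computes" with them.
data Zero
data Succ n

-- Add a b c holds when a + b = c; the instances are the logic program.
class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- Asking for a value at a particular type forces the "computation" during
-- type checking; the values themselves are never needed.
add :: Add a b c => a -> b -> c
add _ _ = undefined

two :: Succ (Succ Zero)
two = undefined

three :: Succ (Succ (Succ Zero))
three = undefined

five :: Succ (Succ (Succ (Succ (Succ Zero))))
five = add two three   -- accepted only because the checker derives 2 + 3 = 5

Whether that computation terminates is entirely up to the instances you write, which is exactly the caveat above.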

hand-wavingly like graphics apis' feature bits

makes me think of how opengl/directx have/had "feature bits" to offer a way to decouple the application from the graphics card. can't say it is the best idea...

OpenGL/DirectX

I don't find it a problem using feature detection in OpenGL. I prefer the pipeline approach where the driver simulates the missing features, so you just program everything the same, but these days nobody has incomplete implementations any more; it's all about extra features. A lot of extra features require an API change, and so you need some way of telling whether the driver supports the new API before you start to use it.

This is different from the memory issue: there won't be new/incompatible APIs, so there would be no need for feature bits.

Instead this would be like Haskell type classes where the type of some code depends on the memory allocations within it. This probably doesn't work, but it gives a flavour: imagine a monad that exposes all the allocations in a block of code as phantom types in the type signature. Now a type class (think about how the HList code can perform Peano arithmetic at compile time) operates on those allocations by changing the type of the code block. It could rearrange the order, coalesce, or group the allocations by changing the type. It could satisfy the memory requirements by removing the allocation from the type, or replace with a new requirement representing the free memory upper bounds required.

To support this the type system would need to provide row-types and type-indexed-types, so that orthogonal requirements could be managed independently by different libraries.
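To give the flavour a bit more concretely, here is a rough Haskell sketch of just the phantom-type bookkeeping (Alloc, alloc and andThen are invented names for illustration, not an existing library): the type of a block records, as a type-level list, the allocations performed inside it, which is the hook a type class would need in order to coalesce, reorder or discharge them.

{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}

import GHC.TypeLits (Symbol)

-- A computation tagged with the type-level list of allocations it performs.
newtype Alloc (allocs :: [Symbol]) a = Alloc a

-- Record one named allocation in the type.
alloc :: a -> Alloc '[name] a
alloc = Alloc

-- Sequencing concatenates allocation lists at the type level, so the type
-- of a whole block exposes every allocation inside it.
type family Append (xs :: [Symbol]) (ys :: [Symbol]) :: [Symbol] where
  Append '[]       ys = ys
  Append (x ': xs) ys = x ': Append xs ys

andThen :: Alloc as a -> Alloc bs b -> Alloc (Append as bs) b
andThen _ (Alloc b) = Alloc b

-- A type class could now rewrite this index (group, coalesce, or replace
-- allocations with a free-memory bound) purely at the type level.
block :: Alloc '["buffer", "node"] Int
block = alloc "some buffer" `andThen` alloc (42 :: Int)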

Interesting.

Seems an interesting claim. Where is the bottleneck?

What is "natural" is irrelevant

People actually do think in terms of time and change, and symbol rebinding is just an extension of that.

Whether this is wrong or right is actually utterly irrelevant. You wouldn't argue that math shouldn't be applied to bridge construction on the grounds that so many people are bad at math, would you?

Instead, you would hope that people building bridges are sufficiently trained in math, because it's essential to 'bridge correctness'. People would never have been able to build bridges if they had stuck to just sculpting mud, even though that is a much more tangible activity.

never bring Python to a Coq fight

I agree with what you're saying.

We shouldn't conflate intuition or "natural" thinking with useful reasoning. It would be nice to design our systems such that intuitions and natural thinking lead us to correct conclusions and good results. But this can be difficult to achieve, and sometimes we must accept that challenging math is the easiest and most effective way (that we know of) to meet requirements.

What you think ideal is

What you think is ideal is irrelevant. It's what people choose and prefer to use. Saying that it's a network effect or "good marketing" is just being an ostrich sticking its head in the sand.

Humans also choose superstitious

Humans also choose superstitious and religious explanations for many phenomena. Whether people favor a form of thinking seems a VERY different question and concern than whether said form of thinking is effective and useful for reasoning.

In the current thread, I've been mostly concerned with the latter question, and thus what people choose and prefer seems quite irrelevant (as Andreas mentioned).

You've described your own ideals before, and you're certainly on the 'code is for humans' end of the human-machine scale. You say: "What you think is ideal is irrelevant." The same is true of you. Your agenda is not served by refusing to acknowledge the weaknesses of untrained human thinking.

(To me, it seems very natural to utilize the growth and adaptability of the human.)

Pointing at network effects isn't about hiding one's head in the sand. It's just the opposite - acknowledging that the world isn't ideal, doesn't prioritize what I prioritize. It also allows me to recognize realities that it seems you'd prefer to ignore, e.g. "what people choose and prefer to use" is not merely a matter of what is 'natural' to them (biologically), but also of what is familiar. People cannot try new or old ways of thinking without exposure to them. It is difficult to distinguish intuition from familiarity.

We aren't vulcans yet

I'm under no illusion on why we program the way we do today, and there is no way my work will succeed unless I can include what people find usable (imperative effects) and remove what's harmful to them (sequential semantics for state update). And if I screw it up, then I'm wrong, not just unpopular and bad at marketing.

As long as we keep programming, language should fit the way we think, not the other way around. We understand change and are equipped to deal with it, more so than we are equipped to deal with static abstractions of change. It's the reason we designed computer hardware the way we did, and the reason we designed our languages likewise. Von Neumann wasn't a slick salesman who hated mathematicians. Registers and memory were just natural.

You might be right in 100,000 years (about the amount of time we've used spoken language, arguably what sets us apart from all other animals), but 2,000-year-old math right now is a tool that is in no way intuitive and not how we understand problems directly. It takes effort to avoid describing things with change. So much so that entire papers are written on how to solve simple problems (simple for us to talk and think about) in Haskell.

When the computers take over most programming tasks in 20 or so years, then they'll probably transcend inefficient flawed human reasoning.

As long as we keep

As long as we keep programming, language should fit the way we think, not the other way around.

That only seems justifiable if the results of matching programming languages to human thinking exceed the results from the contrary. I don't think the outcome is so obvious as you seem to imply, namely, that no language requiring training in non-human thinking can "outperform" all possible languages that fit the way humans think.

"Outperforms" can be an arbitrary metric, like bug count, not necessarily program performance.

Red herring

Hm, so let's see what humanity has achieved technologically in the last 2,000-3,000 years vs. the 100,000 before that. Considering that, what is your point?

PLs should fit the problem, as should our way of thinking about it. That requires training. Assumed notions of naturalness are just as much a dubious argument as in certain socio-political debates.

There are many ways to fit a

There are many ways to fit a problem. It makes sense to select a way that is more natural for humans. Perhaps natural isn't the right word for it, maybe 'easy' or 'closer to something we already know' is better. As a silly example, programming in Java is much more "natural" than programming in ROT13 Java, even though it's just a permutation of the alphabet.

On the other hand there are universals shared across cultures. For example: Bouba/kiki effect. So maybe the term 'natural' isn't so bad after all. That experiment suggests that if you asked a person who is not familiar with our alphabet to match D and T to their sounds, they would choose the correct answer most of the time. So maybe even ROT13 Java is truly less natural, rather than something we're simply not familiar with.

the part that fits the person

Bret Victor described a hammer simply: the head fits the problem, the handle fits the human. Our computational tools similarly need to meet us half way.

Of course, hammers are sometimes problematic and awkward for humans. One can miss, strike one's thumb, throw the hammer accidentally. It takes training or practice to use one effectively and safely (especially if you have a team of hammer wielding humans in close proximity)... and a hammer is one of the least sophisticated tools (ever).

Sean's philosophy holds that our PLs should fit how we naturally think - complete with cognitive biases, fallacies, and so on. The weaknesses of human cognition can make it difficult to fit the problem: making a machine behave in useful ways, typically as part of a larger system that exists outside one's delusions of control.

I consider my own position - meeting the machine (and larger system) half way - to be reasonably balanced. Rejecting Sean's extreme does not imply advocating the other extreme.

It is one thing to say "I

It is one thing to say "I disagree" and another to blame failure of an idea to take hold on irrational human behavior, bad marketing, or immovable critical mass (network effects).

Most people, programmers even, aren't mathematicians and won't ever become them; so it is quite reasonable for them to favor non-formal ways of doing things... symbol rebinding makes perfect sense in that regard (x was 0 here, now it's 12, OK, no problem!). Flawed human beings? Perhaps. But it is ridiculous to ignore them or brush off their biases as being unreasonable.

not ignoring bias

it is ridiculous to ignore them or brush off their biases as being unreasonable

You've got that half right. It's a bad idea to ignore or brush off human biases, prejudices, fallacies, etc. Where I disagree with you is how to address those concerns, e.g. to create an environment that helps humans recognize their errors in reasoning, and that helps them gain sound intuitions and understandings useful in avoiding errors. I don't see utility in hiding from humans the fact that machines don't think like humans.

I don't expect people to become mathematicians. You and I both have a goal of lowering the bar for programming, making it more widely accessible starting at younger ages. However, I do expect humans to learn and grow - to pick up machine- and system-oriented concepts such as composition, concurrency, confinement, extension, security, partitioning, resilience, latency, and so on. My impression, from your exclusive focus on 'natural' thinking, is that you expect users to stagnate.

Anyhow, pointing at network effects isn't about 'blame'. Just about recognizing forces. Consider: your own philosophy - that languages should match how humans think - is not realized by any mainstream PL. Most modern PLs force humans to think like a machine. Worse, they often have many irregularities and discontinuities that make this challenge more difficult than necessary. Are you blaming anyone for this? I doubt it. Why would you bother? But it can be useful to recognize the reasons - such as network effects, the historical requirement for performance, etc. - for use in a future adoption strategy.

And I maintain that symbol rebinding isn't more 'natural'; it's just unnatural in a way you're personally more comfortable with than immutable symbols. If you want natural notions of movement, consider modeling a character picking stuff up (cf. toontalk).

I once had the pleasure of

I once had the pleasure of attending a WGFP meeting with all the very smart Haskell people (SPJ and many others), and I realized that they thought very differently from me and probably most other people. They had "transcended" major limitations in human reasoning. Their vocabulary, their designs, what they saw as elegant, were all very foreign to me. I thought "this is what it is like to live on Vulcan, I guess."

But I also thought that this kind of enlightenment was un-achievable for most of us; perhaps it is not even a desirable goal: our passions often lie elsewhere, and computer programming and formalisms are just tools to us, not a way of life. The computer should accommodate how we think, not push us towards evolving into something else. There isn't that much time (on a human evolution scale, anyway) till the computers take over programming, so why bother adapting?

Rather, we should focus on leveraging our built-in linguistic capabilities, which is why I'm big on objects, identity (naming, not f(x) = x), and encapsulated state. And like it or not, humans are very proficient at describing how to do things step by step (cake recipes) and not very good at being declarative (here is a function from ingredients to cake). Finding the nice elegant declarative solution to many problems requires... being a Vulcan and pondering the problem for a long time (e.g. using parser combinators rather than recursive descent), but often "worse is better"; we just need to get things done.

So you are correct: I've decided that the way things are done isn't that far off from the ideal for most programmers. So rather than invent crazily different computational paradigms, perhaps I can sneak something in by changing their programming models a bit (e.g. via Glitch). Such solutions seem to get a lot of mileage (also consider React.js), the very essence of worse is better, I guess.

Both Machine and Human Prefer Mutability

How is immutability half way between a machine, which implements things using mutable memory and registers, and a human, which may find such things a more natural way of thinking? In this case both the head and handle of the hammer prefer mutability.

misapplied intuitions; needs vs. wants

I've not suggested that humans are uncomfortable with mutation. Rather, I've said we're bad at reasoning about state... especially as we scale to having lots of stateful elements with a combinatorial number of states and conditions for achieving them. Therefore, we should avoid unnecessary use of state. Our comfort with state is mistaken. As a hypothesis for why, perhaps we have an intuition (from physical experience) that we can intervene when things progress in an unexpected direction, and this intuition is simply misapplied in context of automation.

In a more general sense, the whole premise of your argument is bad. Meeting humans half way isn't primarily about preferences.

Humans often want things that aren't so good for them in excess - e.g. french fries, beer. Meeting the human half way involves meeting their needs, not just indulging their preferences. Discouraging non-essential use of state by the program is part of that. Of course, there are contexts where humans could have all the state they want... e.g. manipulating a document or live program, where they really can intervene if things progress unexpectedly.

On the machine side, state can hinder optimizations and other features, but in general we can work with it. Avoiding state is more about meeting human needs than machine needs. I would suggest seeking alternatives to imperative state models, which do not generalize nicely for open systems, concurrency, networks, computational fabrics, etc..

We're not bad, it's hard

we're bad at reasoning about state

That's not even the core of the problem. It's not that "we" are bad at reasoning about state, it is that reasoning about state is inherently hard -- for humans as well as for machines. Any adequate model becomes an order of magnitude more complex once you introduce state. Some humans may perceive state as easy, but that's nothing but a fallacy. In reality, it's totally not.

I look forward to reading

I look forward to reading "How to Bake a Cake, Declaratively" by Andreas Rossberg. The book shows how baking is much easier explained when step by step instructions are avoided and everything is described via compositions instead.

You jest, but...

I'd love that format:

cake = bake 300deg (insert 12" round tin (mix [2 eggs, ...]))

It should of course be visually laid out. Text expressions are usually only preferable in the presence of variables and non-literal expressions. Each node could have a time estimate. To provide ordering hints the whole graph could be laid out so that time is the vertical axis.
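Half-seriously, a toy sketch of that format (all names invented): each node is a value, and a per-node time estimate gives the layout its vertical axis.

data Step
  = Ingredient String
  | Mix [Step]
  | InsertInto String Step        -- e.g. a 12" round tin
  | Bake Int Step                 -- temperature in degrees
  deriving Show

-- Rough minutes per node, summed bottom-up; a renderer could use this to
-- place each node along the time axis.
estimate :: Step -> Int
estimate (Ingredient _)   = 0
estimate (Mix steps)      = 5 + sum (map estimate steps)
estimate (InsertInto _ s) = 1 + estimate s
estimate (Bake _ s)       = 40 + estimate s

cake :: Step
cake = Bake 300 (InsertInto "12in round tin" (Mix [Ingredient "2 eggs", Ingredient "flour"]))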

Let me know when this book is available, Andreas.

postmix

It should of course be visually laid out.

Or at least, if presented using text it should use reverse Polish rather than Polish notation.

Still missing the point

Your underlying assumption that baking a cake is in the same problem class as creating software explains a lot. ;)

We were talking about how

We were talking about how humans naturally think about and solve problems. You made the argument that static reasoning was the norm for us, but everything we do is saturated in juicy mutable state.

No

I made the argument that reasoning is needed for any serious engineering, and that in computing, mutable state makes that substantially harder. It is a (dangerous) fiction that software engineering has any noteworthy resemblance to everyday life.

Harder is not an argument

A C/C++ programmer is paid to solve problems, not to avoid them.

Shared mutable state is hard. Hard but necessary. So what? Get an education, read wikipedia, buy a book, use a library, buy an OS.

I, too, heard the legends

I, too, heard the legends of this one mythical überprogrammer who was actually able to reason -- almost correctly -- about the memory model of the C++ code he was writing. At least during full moon.

Out of arguments?

Yeah well. If we're out of arguments and the gloves come off, I know a few of those too:

Abstraction exposes the mind of the narcissistic simpleton.

You want it neat and simple. There is no problem with state, it's a convenient manner of programming. There is hardly a problem with mutable state, you can learn or buy solutions.

It lives in your head. 99% of programmers don't care. Or they Google a solution.

(Ah well. I could have read your comment positively too. Given the popularity of C/C++, the question never was why C is that bad, the question was why C is that good. And part of the answer is that it makes it possible to break every abstraction; sometimes by default. And academics have a hard time understanding that.)

Ah, now I see

And all my coworkers will be relieved to hear that our C++ code breaking left and right every day was a problem only in our heads, or that we have just been too incompetent to search for solutions. And gaping security holes are just an academic problem anyway, we should stop caring.

You're not going to solve it with applied philosophy either

So what. A thousand applied philosophy professors scoffing at underpaid and undereducated device driver engineers about programs they couldn't have written themselves aren't going to solve the problem either.

And part of the answer is

And part of the answer is that it makes it possible to break every abstraction; sometimes by default.

I'm very skeptical that breaking abstractions is the reason C is still used. Fine control over memory layout and allocation are the only reasons I still drop down to C, e.g. writing interpreters that use tagged pointers, or the default inline allocation of structs vs. the typical heap behaviour of structures in every safe language (Haskell can do this too, but it's not as simple as in C).

There's no other compelling reason to use C that I can see.
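For what it's worth, the Haskell counterpart alluded to above looks roughly like this with GHC (the UNPACK pragma is a GHC hint for small strict fields); this is the extra ceremony compared to a plain C struct:

-- With GHC, strict fields marked UNPACK are stored inline in the
-- constructor rather than behind pointers, approximating C's struct layout.
data Point = Point
  { px :: {-# UNPACK #-} !Double
  , py :: {-# UNPACK #-} !Double
  } deriving Show

distSq :: Point -> Point -> Double
distSq (Point x1 y1) (Point x2 y2) = (x1 - x2) ^ (2 :: Int) + (y1 - y2) ^ (2 :: Int)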

Breaking abstraction allows for machine-amenable code

I nowadays assume that one of the reasons the Algol/Pascal/Ada languages didn't overtake C is that C exposes the internals of modules, therefore allowing programmers to code against an informal specification of the module implementation.

I'll never be able to prove that assumption but it is what I believe. In the end even the Algol languages resulted in applications which died of too much abstraction, i.e., ended up too slow.

skeptical

I'm skeptical of claims C's success stems from breaking abstractions or speed considerations. I figure it's got the same two basic advantages as most populist languages: it's got a simple lucid model and it allows programmers to "just do" what they want to do without a lot of red tape. BASIC and Lisp are/were like that too.

C is a populist language?

Are you kidding me? C is the only language which hardly needs to advertise.

Comparing C to BASIC and Lisp doesn't make much sense either. Why it lost from Ada on the OS front is more interesting, but an endless debate I am not that much interested in.

Lisp is an operational model. Not a language.

C is the only language which

C is the only language which hardly needs to advertise.

Not the only one; and yes, this is what I mean by a populist language.

Comparing C to BASIC and Lisp doesn't make much sense either.

On the contrary; these are languages capable (in context, which for BASIC includes the culture of an historical era) of attracting communities of fans who use them because they like them. Most (though not all) languages manage to attract some fans, but not like these. Turbo Pascal had its day, too.

Lisp is an operational model. Not a language.

As you wish. Though for the purpose I was addressing, operational model is the key feature of language. How narrowly that defines a language depends on the language; the magic for Turbo Pascal was narrower than the broad "language" Pascal, the magic for Lisp cuts a much broader swath.

Popular vs Populist

C deservedly takes the lead, together with Java and C++, in the PL field. It is a popular language since it solves the problems in its wide-scoped problem domain rather well, without real contenders. It is therefore popular and doesn't need populist arguments.

Functional languages often fall in the populist category since they are partly supported by the thousands of straw man arguments sold amongst academics, partly to students, without anybody in the room knowledgeable enough to come up with a proper defense against those arguments.

Liking a language isn't an argument in industry ultimately. I like Haskell, well, its pure core, but I still feel I should kick one of my students' butts for deploying it in a small industrial setting. It's a waste of his customer's money.

Populism

Liking a language isn't an argument in industry ultimately.

Yup. Which is sort of what I was aiming for with the term "populist". The "popularity" of some languages is not driven by what we're told to use; it's a grass-roots thing.

Driven by necessity

Lisp is in constant decline. It is mostly supported by its history, its library and development base, and a lack of competition in its niche.

A grass-roots argument doesn't take industry very seriously; you must be an academic.

Industry employs almost all students, even PhDs. Almost all intellectual capacity is there, and most people don't give a hoot about an academic argument after ten years.

Grass-roots... Yeah, some budget may go in the direction of an experiment, but ultimately it'll be whether you can sell a product on time.

Haskell in an industrial setting will probably deflate to slow, badly written, Bash-like code under most circumstances. Lisp's time is gone, apart from some data processing applications. If either language makes more inroads, it's because of a lack of proper languages.

Google trends

populism and industry

A grass-roots argument doesn't take industry very seriously; you must be an academic.

I did suggest C's success might stem from two things that make it what I called a "populist" language — a simple lucid model and freedom from red tape. I did say populist languages get their fan base from people liking them. I said BASIC, Lisp, and Turbo Pascal are other examples of such languages. Pretty sure we're all aware BASIC and Turbo Pascal had heydays now past; I don't think I remarked on Lisp's history. I haven't so far addressed the relationships, in general or in these particular cases, between populism/fan base and industry.

What do you figure I'm arguing, that you figure doesn't take industry very seriously?

You are using different dictionaries and metrics than I am

Populism according to Wikipedia

Populism is a political doctrine that appeals to the interests and conceptions (such as fears) of the general people, especially contrasting those interests with the interests of the elite.

Or according to dictionary.com

Pop·u·lism [pop-yuh-liz-uhm]
noun

  1. the political philosophy of the People's party.
  2. ( lowercase ) any of various, often antiestablishment or anti-intellectual political movements or philosophies that offer unorthodox solutions or policies and appeal to the common person rather than according with traditional party or partisan ideologies.
  3. ( lowercase ) grass-roots democracy; working-class activism; egalitarianism.
  4. ( lowercase ) representation or extolling of the common person, the working class, the underdog, etc.: populism in the arts.

I have already stated that C isn't populist, which is according to most people a derogatory term, but popular simply because it's the best language for a wide-scope solution domain according to a large number of metrics.

I did suggest C's success might stem from two things that make it what I called a "populist" language — a simple lucid model and freedom from red tape. I did say populist languages get their fan base from people liking them.

Populism? Fan base? Speed not important? Comparison to Basic and Lisp? Lucid model? Are you ******* kidding me? There are a thousand reasons to choose C without being a fan, which I am not.

Why in God's name do you think your phone works? Because of 'populism', the 'fan base' of the language, and because 'speed isn't important'? The maturity of Basic and Lisp? Do you think people hobby phones together?

I am sorry but you live in a completely different universe from me.

There is no alternative to C, or Java/C++, for a large number of application domains and the academically pushed populist champion Haskell is by a large number of metrics one of the absolutely worst languages ever devised. Not to mention ML. Basic is a toy and Lisp probably shouldn't have been invented.

Actually (though the lexicon

Actually (though the lexicon stuff seems a distraction), my sense of populism is rather opposed to elitism, and is thus closer to the Wiktionary sense —

populism 1. (philosophy) A political doctrine or philosophy that proposes that the rights and powers of ordinary people are exploited by a privileged elite, and supports their struggle to overcome this.

I'd tend to describe Haskell as an elitist language rather than a populist one.

It seems you're misunderstanding me on a number of points, but I'm guessing there's a common cause to all of them, some underlying mismatch of perspective. I wouldn't say different metrics exactly, because the difference appears to go beyond merely what measure is being used, to what is being measured. As best I can tell, you're focused exclusively on what is successful in industry, and having chosen to ignore everything except industry you reach the conclusion that industry is paramount (a conclusion rendered trivial by the exclusive focus). I'm no less interested in industry, but I'm also interested in some other aspects of the situation whose interaction with industry may be enlightening. One can't be enlightened by meaningful interaction of something with industry unless one first admits that something exists that isn't industry.

My sense is that my attitude toward speed is probably pretty much consistent with the worse-is-better essay (which I do think had some valid points here and there, even though I feel it was also fundamentally flawed). Excessive slowness (whatever that means in context) seems like it ought to retard the popular success of a language, but I'm not so sure blinding speed is in itself a good predictor of success.

Opprobrious academic claims and simpleton arguments

Some underlying mismatch of perspective

Yeah, well. I've had it with many opprobrious academic claims regarding the state of industry. This is mostly about Haskell; on G+ I sometimes see a "why are we still stuck with buffer overruns and null pointers" posted by Haskell proponents. Well, I know why, and it looks idiotic if a proponent doesn't. And then when I look at Haskell I can only conclude it's just a bloody bad language, even in comparison to C.

As best I can tell, you're focused exclusively on what is successful in industry

I am not. I am interested in what works and why. You seem to reduce the argument to simpleton academic statements regarding some popularity contest which doesn't exist; another straw man. A language is evaluated by its merits according to metrics; the ultimate metric where all other metrics coalesce being whether you can make money with it.

blinding speed is in itself a good predictor of success

You don't seem to read very well. It depends on the application domain; but speed tends to creep in and affect most applications. You wouldn't want a machine with device drivers, or the OS, written in Haskell, Basic, or Lisp.

If speed is less important in an application domain then you might use it. You end up with applications like www.checkpad.de

Yeah well. As much as I like the effort. It is highly unlikely they are building a better product because of Haskell. I expect they are working around user interface, database, network connection issues with cumbersome code and the little piece where Haskell shines (abstract imaging code) doesn't even make up for the expense they pay for using the language. They'll be having a blast getting it to work despite the language but likely from a software engineering POV the code will be a complete mess. (I could have the same conversation about Ocaml; as a language it is a lot better but I doubt the operational model and, come on, a syntax which belongs in the 70's?)

What I didn't say

It seems you're ascribing positions to me that I haven't taken and aren't mine. Stuff you have a chip on your shoulder about, seemingly. Just for example: you go on and on about Haskell, but I've said next to nothing about Haskell, and what I have said has been negative. I've also said little about industry, really, and what I have said doesn't support your strong accusations of simplistic thinking.

Then use the right metrics

Then use the right metrics. And populism isn't one of them. And performance is usually a key issue.

This topic could afford a really interesting discussion...

It doesn't even make sense to talk about whether or not populism is an appropriate metric, since it's obviously not a metric. Nor have I taken any position, thus far, on whether or not populism is, in itself, a factor in the commercial success of a language. As for performance being usually a key issue — there are lots of nuances to this point, but likely the first thing to keep in mind is that the significance, and even meaning, of "performance" depends on context. I recall someone remarking to me a while back, regarding the efficiency of some javascript code, that if you're that worried about efficiency, and you're using javascript, you're doing something wrong.

LtU fails to load on iOs with Mathjax

Well. You're forced to use Javascript on the Web. But yeah, see the title; that's the problem I have with LtU nowadays.

If you go straight to

If you go straight to /tracker, it doesn't hang.

Mathjax, javascript, Lisp...

First I've heard of this problem with mathjax. Don't myself own anything with ios; does it not load the page at all, or does it semi-gracefully fail to format the mathjax?

Yeah, you're forced to use javascript... which says something interesting. (There's a deep attitudinal commonality between javascript and Lisp, which I suspect many in both communities wouldn't want to admit; but that's yet another tangent.)

A C/C++ programmer is paid

A C/C++ programmer is paid to solve problems, not to avoid them.

They also get paid to create problematic (e.g. buggy or unmaintainable) solutions, and yet again to solve those problems they just created, and eventually yet again to scratch a failing project and rewrite it. Programmers don't have much economic incentive to avoid creating problems.

This is hardly a unique circumstance in human society. Something similar could be said of industries that pollute the air and waters, damage the ecosystems, and exploit their laborers in the name of efficiency and progress. They're not paid to avoid creating problems. Market forces typically don't favor what is good for most of the people participating in them.

Shared mutable state is hard. Hard but necessary.

Some state is essential, necessary - i.e. when it is part of the problem domain. We can't implement a text editor without state, nor an IRC, nor an MMORPG.

But we can also use state where it is unnecessary, and some languages - like C and C++ - encourage (via path of least resistance) use of significantly more state than is necessary. Many projects suffer a slow, suffocating death by accidental complexity and bugs that can be traced to non-essential uses of state... especially together with concurrency, distribution, or other exacerbating factors.

Get an education, read wikipedia, buy a book, use a library, buy an OS.

Been there. Done that. Are you suggesting these behaviors significantly simplify reasoning about combinatorial states and eliminate state bugs in automated systems? If so, how?

There are probably around 30,000 synchronization points on my OS

And it works. Rather flawlessly.

It's an engineering problem, not a language problem. It doesn't even look to be a hard engineering problem, despite the academic myth. You use an idiom and solve it.

There are some reasons to use more abstract languages. But avoiding state isn't one of them.

You must have a short memory

You must have a short memory. The only reason the OS you're using today is as good as it is, but not flawless, is because of all the static analysis work that goes into it, both in-house (at Microsoft) and by external parties (Linux). The days of Windows 95, 98, and blue screens should not be so easily forgotten (or Mac OS 9 and earlier, for the Apple fans).

Even Linux has plenty of examples of deadlocks, corruptions and security vulnerabilities in its history, and they continue to this day.

Idioms help, but there are no assurances that those idioms are being used correctly, and whenever an idiom changes, as it has in Linux with the more widespread adoption of read-copy-update, there are plenty of places for mismatched assumptions to creep in.

You live in the past

OS writers need to handle complexity and sometimes implement primitives themselves. No doubt people will even make the same mistakes over and over again.

Avoiding the management of, in my view necessary, complexity is not an argument to propose a non-solution. The whole problem of shared mutable state seems to have deflated to one for some engineers who do the heavy lifting. Again, it's not an argument to develop, or support, another language.

That is a very strong

That is a very strong statement, do you have any citations for it? Sure, Coverity and other static analysis tools have found bugs in Linux and Windows, but to claim this is the only or even main reason is weird. Infamously self-promoting academics notwithstanding.

Win95 and Mac OS 9- didn't even leverage protected mode very well. Win NT and OS X were modern kernels. Also, kernels haven't changed THAT much since the 90s, so they reap the benefits of stability (and have even become low-value commodities to be shared and given away...).

Evidence: there is very little kernel research in the systems field today. Anybody wanting to write their own OS would be laughed out of the grant process. Why? Because, unlike the 90s, the OS kernel field is extremely mature and stable; people have moved on mostly to distributed computing.

I thought it was common

I thought it was common knowledge that Microsoft employs significant static analysis, including providing analysis tools to hardware vendors to verify their drivers. This began after the Windows ME debacle IIRC.

Win NT and OS X were modern kernels.

Debatable. There wasn't anything significant in either kernel that wasn't already 20 years old.

Also, kernels haven't changed THAT much since the 90s, so they reap the benefits of stability (and have even become low-value commodities to be shared and given away...).

The 90s saw the advent of ultra fast low TCB microkernels like L3, L4 and EROS, and the invention of the exokernel, all of which ultimately culminated in hypervisors like Xen. I hope you agree hypervisors had quite a significant impact on industry, even going so far as to impact the design of the x86 instruction set with virtualization extensions.

But I agree that kernel commoditization has ancillary benefits for industry. And certainly old kernels are more stable given the exposure they've had. But as we all know, tests can prove the presence of bugs, not their absence. The L4 and EROS work has shown that these kernels have intrinsic vulnerabilities by design.

I'm not sure what my claims have to do with systems research or the grant process. My claim was only about scalability of manual composition of abstractions with no automated verification of resulting integrity.

Edit: it's also amusing that it took less than 5 years to show up Rob Pike's claim that systems research is irrelevant (hypervisors and virtualization adopted industry-wide). The classic OS structure running unmodified, arbitrarily unsafe code has been pretty thoroughly investigated, hence your skepticism, but there are many alternative OS designs that have yet to be fully explored. For instance, operating systems that can only run restricted types of code are still an open arena, of which MS's Singularity was one of the first examples, and Google's NaCl is perhaps the latest incarnation. Proof-carrying code will be another approach in this vein when it matures. OS safety and language safety are converging in many ways, so systems research can overlap PLT research.

Even if they do it isn't an argument to get rid of state

The machine loves state and so do programmers. Except for Clean - and even they do uniqueness typing - I don't know of any language which tries to get rid of it.

Again, the fact that you need to manage state, sometimes with a static analyzer, isn't a good reason to develop languages which annihilate it. (Though you can do it as an academic experiment, but in my view these experiments are mostly negative results.)

No one in this thread

No one in this thread suggested eliminating state.

Varied reasoning and moving goalposts

There was some varied reasoning about what to do with state.

The end result is that it's there (and that probably OO handles it best.)

Static analysis has helped,

Static analysis has helped, but it by no means is/was a silver bullet! Bug counts are easy to publicize, however, so that magnifies their impact a bit, but there are plenty of other things happening at the same time that are equally influential, if not more so. OS just became kind of boring as kernels stabilized drastically, and people moved on to virtualization and/or distributed systems. My only point with the grants and stuff was to demonstrate how mature it all became.

I went to that talk; I got into an argument with Rob Pike over Java afterwards (completely unrelated to this topic). Anyways, I think systems research as it was done did die, but system researchers are pretty versatile at moving to the next thing.

Check this out

Nice, though there are still

Nice, though there are still many imperatives in there.

It's hard to talk to your partner

Therefore you should dump him, or her.

Straw man argument

By your reasoning complexity analysis isn't math.

But I mostly find this a humbug discussion. Program variables were introduced because people needed to name memory locations of the machine; only bad programmers have a real problem with that abstraction.

Today I had bread in my shopping bag; yesterday it was five oranges. You really have to be entirely stupid not to understand how that works.

Thanks for properly

Thanks for properly categorizing your straw man arguments.

Well. At least I present arguments.

People think about changing stateful things, mostly the people around them, all the time. It's where all the mental effort goes.

Stones are taken for granted.

So far your attempt to shoehorn programming into mathematical static notations has only resulted in arguments bordering on delusional.

your attempt to shoehorn

your attempt to shoehorn programming into mathematical static notations

To what are you referring? (I'm not aware of any such attempt.)

arguments bordering on delusional

Any argument for change, as though people would change their behavior based on arguments, is more than a little delusional. OTOH, pointing at what we've got today and claiming there are no significant problems would be similarly delusional. ;)

Well. It ain't delusional to ask for more money

your attempt to shoehorn programming into mathematical static notations

This was a referral to your comments on logics, math, and equational reasoning.

Any argument for change, as though people change based on arguments, is more than a little delusional. OTOH, pointing at what we've got today and claiming there are no significant problems would be similarly delusional. ;)

There are always significant problems. But at the same time, my iPad works fine, my old cellphone works, my TV is digital and never blacks out, Fedora runs okay on my computer and Google Chrome hasn't crashed in over a year. That while buildings and bridges collapse on a regular basis. (And also, snarkily, Ocaml runs out of stack space and Haskell doesn't perform.)

My philosophy professor once pointed out that the western world is in a "philosophical crisis." That translates to: I want more (research) money.

The quest for better abstract languages, or formal correctness, by pointing out problems is often, sometimes mostly, a proposal for research funding. That made sense to decision makers when Windows 3.1 was crashing all the time but makes less sense these days.

We need better languages but imperative is here to stay. And they'll probably come up with other narratives for research money since Dijkstra's punch card problem is gone, as is Windows 3.1.

This was a referral to your

This was a referral to your comments on logics, math, and equational reasoning.

Those are useful forms of reasoning, and we'd be wise to leverage them where we can. But I have not suggested forcing all subprograms into that mold. And there are other useful forms of reasoning (e.g. compositional).

my iPad works fine, my old cellphone works, my TV is digital and never blacks out, Fedora runs okay on my computer and Google Chrome hasn't crashed in over a year.

What does 'works fine' mean to you? Are you suggesting your hardware is working near its potential? Or that you have no complaints as a user? Or merely that you can eke some useful work out of it, and you've accepted the effort involved?

To me, it seems my hardware is trudging along, working around awkward artificial walls between applications. As a user, I have great difficulty moving data between applications and services and policies and alerts. The system is inefficient, inflexible, fragile, and has lots of jagged edges to cut oneself upon.

I would like to leverage multiple applications to achieve some ad-hoc purpose, then abstract the combination in a robust way for reuse, refinement, and sharing. But that ideal is very far from what we have today, far from what is feasible with modern mainstream PLs.

imperative is here to stay

Of course. The relevant question is whether it will be pervasive or marginalized, primary or secondary. Imperative has many weaknesses other than just being difficult to reason about.

The quest for better abstract languages, or formal correctness, by pointing out problems is often, sometimes mostly, a proposal for research funding.

Perhaps a few people are seeking funding and using PLs as the excuse. But it seems to me that money was not a significant motivation for developing projects like Agda, Coq, Idris, BOBJ, ATS, Elm, and so on. Is there a particular project you're thinking about?

Depends on your perspective

Everything industrial or academic is driven by money. They all go through the same route, from reviewed proposal to project to end-result to evaluation.

I have a former student whom I failed to inform that, as an engineer, I should've kicked his butt for not acknowledging the agendas around him and for using an ill-suited academic language in an industrial setting.

Money is often the fuel, but

Money is often the fuel, but is not always the destination or purpose (not even in industry). I acknowledge that progress is often constrained by money. But I don't believe money is the goal for most PL research, nor that pointing out problems is primarily a plea for funding.

The only money-oriented programming language I'm aware of is Java. :)

There's money involved everywhere

The only languages I'm aware of that aren't money-oriented are the esoteric hobby languages like Brainf*ck. C# needs to make Microsoft money somewhere. GHC needs to be a success or they won't receive PhD funding anymore, their scientific programmers are taken away, and salaries don't go up anymore.

If you want a research grant you'll need to fight for it. If you want people to take your research, or field, seriously in the long run you'll need to convince policy makers it is a serious field.

There's always politics and money involved in keeping any language, or research field, alive. Hence you'll hardly see any truly acknowledged negative results or open academic discussions.

"The world has a philosophical crisis" and "We'll fix software engineering with mathematical languages" is the same sales-talk to me.

I don't copy any academic view unless I understand the agenda. I do support most research somewhat but it does mess too much with the students. They need to unlearn too much in industry.

supporting research

I do support most research somewhat

Now you've got me curious. After all this talk of money, how do you support research? Your usual brand of armchair cynicism? or actual cash?

I don't count but I would appreciate an open debate sometimes

Well. I do support most research morally, although my vote doesn't matter.

But what I do think is that there is a discrepancy between what people on LtU are being told by either teachers or rather vocal PL evangelists, the papers they read, and the internal debates of those in charge.

People in academia can hardly be called stupid and they'll have their own private discussions we don't have access to. It's a shame really because I am a bit tired of reading between the lines.

Always be critical and never

Always be critical and never trust authority outright. If you can think on your own, then it doesn't matter what the evangelists say; you just have to learn to distinguish between "evidence" and "anecdote."

I was lucky to grow up in a pragmatic CS program that didn't spew much dogma. We learned functional programming, we learned pure OO programming (Smalltalk style), but no judgments were made in anything being "the way." It was just pure enrichment.

Human factors

I agree with you, but what is natural is quite important to human factors. A bridge that is physically sound but scares the hell out of people is not useful.

Abstracting from Gravity

Abstracting from gravity will only create bridges which crumble under the mass of their load.

scary bridges

Just the opposite. Scary bridges are scary, but still useful. Whereas unsound bridges that look nice are more deceptive and dangerous than useful. If you're the type to get this backwards, please do so on your own property and use plenty of warning signs to keep guests and trespassers safe.

An interesting phenomenon is human adaptability. What is 'scary' depends on one's experiences. One can become comfortable with a scary bridge, knowing that it is sound. Similar applies for human factors in other aspects. What is "natural" is significantly a function of what is "familiar".

I believe we should leverage human adaptability, have humans meet the machine half way - even if they're a little scared or uncomfortable at first. If we find some absolute limits on human adaptability, we can address those as we find them.

It was a metaphor

Don't take it too literally. Obviously I wasn't suggesting that unsound bridges are ok.

Hm

I was not suggesting to transform

var x = e1 in
x = e2(x);
x = e3(x);

into

let x1 = e1 in
let x2 = e2(x1) in
let x3 = e3(x2) in

which I agree is often painful (even though a monadic abstraction can help), but rather

let x = ref e1 in
x := e2(!x);
x := e3(!x);

(which can also be interpreted monadically using "Lightweight Monadic Programming" techniques, if that's your cup of tea, or typed in a reasonable effect-typing system.)
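For comparison, here is a rough Haskell rendering of that last, reference-based version (e1, e2 and e3 are placeholder parameters here, not anything from the original): the single mutable binding is kept; it is just explicit about being a reference.

import Data.IORef (newIORef, readIORef, modifyIORef')

example :: Int -> (Int -> Int) -> (Int -> Int) -> IO Int
example e1 e2 e3 = do
  x <- newIORef e1
  modifyIORef' x e2   -- x := e2(!x)
  modifyIORef' x e3   -- x := e3(!x)
  readIORef x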

I'm assuming the last

I'm assuming the last assignment is to x, not x3 right?

Loss prevention

Say I have a mutable variable X and use SSA to transform it into XA and XB. If I run a dependency tracker, I lose the fact that XA and XB were the same X before, and am now tracking separate XA and XB variables! That I was reading and writing just X is semantically useful information, and we've lost that by using immutable variables.

You haven't "lost that by using immutable variables", you've lost it by performing a lossy transformation. Nothing stops you from using immutable variables but retaining the information about the relationship between them.
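A small sketch of that last point (names invented for illustration): keep the SSA renaming, but tag each immutable version with the source variable it came from, so a dependency tracker can still group XA and XB back to X.

data SSAVar = SSAVar
  { source  :: String  -- the original mutable variable, e.g. "X"
  , version :: Int     -- 0 for XA, 1 for XB, ...
  } deriving (Eq, Show)

sameOrigin :: SSAVar -> SSAVar -> Bool
sameOrigin a b = source a == source b

xA, xB :: SSAVar
xA = SSAVar "X" 0
xB = SSAVar "X" 1
-- sameOrigin xA xB == True: the relationship is retained, not lost.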

What's the half-life of

What's the half-life of company sponsored languages versus independent languages? (C & C++ are, in this context, independent languages; they weren't designed to achieve developer lock in).

Apple seeks to raise development costs

Apple maintains itself as the monopoly retailer of applications on some of its devices, notably phones and pads.

As retail businesses go, the app business has some weird properties:

1. It operates on consignment. Apple acquires its inventory of apps at roughly no cost.

2. From Apple's perspective, there is no difference among apps other than price. For the app store, all apps at a given price comprise a uniform commodity of undifferentiated units. Apple makes just as much money no matter which one you buy.

3. Apple's cost per sale is approximately invariant regardless of application price. For example, Apple does not have to pay salesmen to spend a lot of time closing sales on high-end apps. It costs Apple just as much to sell an app for $1.50 as it costs to sell an app for $15.00.

4. Apple's goal is to try to maximize its total commission on app sales and to minimize its total cost of sales. This can contradict the goals of app developers. You can see this by asking how much $1M in app sales earns Apple. The answer depends in part on the price of the app. $1M in sales for a $5 app is more profitable for Apple than $1M in sales for a $0.75 app. In fact, $1M sales of a $5 app may be worth more to Apple than, say, $1.1M in sales for a $0.75 app.

Thus, Apple can in some situations have incentive to reduce the overall volume of sales -- against the interest of developers taken as a group -- if that reduction in sales volume nets Apple higher profit.
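A back-of-envelope version of that argument, with purely illustrative numbers (a 30% commission and an assumed fixed cost of $0.25 per sale; neither is claimed to be Apple's actual figure):

-- Net to the retailer on a fixed amount of revenue, as app price varies.
netOnSales :: Double -> Double -> Double
netOnSales totalRevenue pricePerApp = commission - fixedCosts
  where
    commission = 0.30 * totalRevenue          -- assumed 30% cut
    unitsSold  = totalRevenue / pricePerApp
    fixedCosts = 0.25 * unitsSold             -- assumed fixed cost per sale

-- netOnSales 1000000 5.00  ~  300000 - 50000  =  250000
-- netOnSales 1000000 0.75  ~  300000 - 333333 = -33333
-- Same $1M of sales, very different profit for the retailer.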

You might ask: Why wouldn't Apple sell both? $1M from the $5 app and $1M from the $0.75 app? Apple, in fact, does this to some extent. Doesn't it always work this way?

I'd argue no: Apple needs to actively limit the sales of low-end apps.

Here is the problem for Apple: Suppose that the marginal cost of porting a multi-device application to Apple products was reliably close to $0. Then Apple would be (even more) flooded with very low priced apps from developers just grabbing, catch-as-catch-can, any trickle of revenue they can seize. The rate of Apple's return on expenditures for the app store would plummet.

So then what can we glean from this about Apple's insistence on non-standard programming environments and programming languages?

Apple's idiosyncratic platforms raise the cost of developing for its platforms somewhat artificially. Apple's charges for developer kits, non-standard APIs, and non-standard languages all raise the cost barrier to entry for new apps.

Consequently, Apple's approach has the net effect of raising the average price of apps in its app store.

As we've seen: up to a point, raising the average price of apps can boost Apple's profit even if that means holding back the app market from its full potential size.

Rounding errors

The cost to Apple for distributing an app is effectively a rounding error. Likewise, for businesses, the cost of the SDK / developer program is also a rounding error.

Occam's razor applies here: Companies want to sell their apps on iOS because that's where the most money is to be made. Developers dislike Objective-C, for well understood reasons. Apple themselves write a large amount of software in Objective-C. Google and Microsoft have more productive languages to work with. Apple wants added productivity for themselves and their developers. No existing language meets their needs for platform interop and performance.

Designing a new language was a no brainer on those grounds alone. Frankly, I'm surprised that it took this long, although there's evidence that this has been the plan for years (adding blocks to ObjC runtime, etc).

dispute: "The cost to Apple for distributing an app ..."

I dispute this claim:

The cost to Apple for distributing an app is effectively a rounding error.

The global payments system imposes non-trivial transaction costs on app distribution.

Likewise, for businesses, the cost of the SDK / developer program is also a rounding error.

What do you think the average "company valuation" is of entities selling apps? I think it is quite low. Eliminating developer fees would lower that average valuation even further. The cost of developer fees (and other Apple-specific costs) is significant for many app developers.

I also question this claim:

Apple wants added productivity for themselves and their developers.

Really? Is Apple experiencing some app supply problem I haven't heard about? Every indication I know says they are limited by the demand side.

Further, productivity for developers does nothing for Apple's profit but harm it. Again, because Apple is the retailer here, not the producer of apps.

In this conclusion:

Designing a new language was a no brainer on those grounds alone.

You have had to assume that Apple is acting against its own interests to reach this supposed "no brainer".

p.s.: Back in the 1980s, competing personal computer platforms really did "want higher productivity for their developers". Back then there was a supply problem with applications and, back then, the platform vendors were not monopoly app retailers.

I still think your analysis

I still think your analysis is way off. The cost for distributing apps looks to me to be a second order effect. The first order effects are total revenue per iDevice owner and number of iDevices sold. Driving the cost of apps down helps with the latter, for sure. I'm not sure what the optimum price/app would be for the former, but I expect that's a more important consideration than overhead on app sales.

Note: it would be very easy for Apple to add a fixed cost per app (e.g. 10 cents + 30% instead of just 30%). But the main thing they could do to raise average app price is to change their discoverability formulas. Right now the emphasis is on popularity defined by number of sales. The cheapest apps make it to the top of those lists. The fact that they aren't trying to fix either of these things says to me that your analysis is way off.

The language is there to help boost productivity which helps create lock in. It's just about making iPhone apps easier to build than Android apps.

app sale transaction costs

You are mistaken about transaction costs.

A random article on ZDNET offers a breakdown of app sale transaction costs as estimated by Piper Jaffray.

Ca. 2010 or 2011 the average app sold for $1.44.

A sale of $1.55 broke down to:

$1.09 to the developer.
$0.23 financial transaction fees.
$0.02 internal cost of sales ("processing").
$0.21 Apple's margin.

Thus, you can see that financial transaction costs are not only significant: they are roughly a 21% mark-up over the developer's wholesale share and comprise half of the total mark-up over wholesale. (Note that as a percentage of a sale, financial transaction fees are smaller at higher sale prices.)

http://www.zdnet.com/blog/btl/apples-app-store-economics-average-paid-app-sale-price-goes-for-1-44/52154

And again, about this:

The language is there to help boost productivity which helps create lock in. It's just about making iPhone apps easier to build than Android apps.

What problem that Apple actually has in the real world would be solved by this "productivity boost" and "lock in"?

Thanks for the break down on

Thanks for the break down on app price. That 23 cents is considerably higher than I was imagining.

What problem that Apple actually has in the real world would be solved by this "productivity boost" and "lock in"?

The main problem they have is maintaining market share vs. their main competitor, Android. If they can make it easier for developers to put out high quality apps on their platform, that will be a meaningful advantage. Even if it's just the difference between developers targeting iPhone first and letting Android be a port some number of months later, that's huge.

re thanks and "vs. android"

You're welcome and thanks for the stimulating conversation. That said, sorry but...

I dispute this (specifically the part to which I added emphasis):

The main problem they have is maintaining market share vs. their main competitor, Android. If they can make it easier for developers to put out high quality apps on their platform, that will be a meaningful advantage. Even if it's just the difference between developers targeting iPhone first and letting Android be a port some number of months later, that's huge.

I don't think comparative availability of apps much drives sales.

I think that probably the qualities that get advertised are what drive sales.

So android device sales are driven by price, large screens, nicer telecom plans, and by Not Being Apple.

And apple device sales are driven because apple products will make you young and pretty, energetic, rich, and distracted from the gritty reality of the near apocalypse in which you lead your wretched life.

Sometimes apple ads show off obscure apps like the current ads featuring a Pixies song and a composer of orchestral music. The suggestion of these ads doesn't seem to be "Only on an Apple do you find such apps." Rather, it seems to be "an iphone will make you a much more interesting person than that loser you are today compared to these attractive people on your tv screen".

Yeah, I don't find that

Yeah, I don't find that point (that I made) particularly convincing either. Things are changing quickly. Five years ago, I think quality of app ecosystem was a big driver of preferring the iPhone, but it's not as much today.

I still think Apple has reason to want their development environment to be attractive. Maybe it's not about competition with Android, but they still increase total revenue by increasing the number of useful apps available. I do not think it's the case that people are all going to buy a fixed number of dollars of apps, and relative app quality just decides which ones, as your original analysis implied. For example, I think there are many people who are buying all of the high quality games available in certain genres and are willing to buy more. So I think there is some unmet demand. And then you have niche apps and small business apps where they're only going to write them for a single type of tablet.

And I also think you're wrong that Apple wants app prices higher for the simple reason that it would be very easy for them to drive the prices higher, but they aren't. Maybe it just boils down to total revenue (per customer, not per app) being higher at lower app price points. Because even if they did want to raise app price, the idea that they would add deliberate impediments to their app development system to achieve it seems crazy to me. Why not try to capture some of that value instead?

optimizing app prices

it would be very easy for them to drive the prices higher, but they aren't.

My claim is that the optimal price for apps (for the sake of Apple's profit) is higher than what the price would be if it was a standard platform with no "tax" on developers. That is not the same thing as saying Apple wants to make the prices as high as can be.

Because even if they did want to raise app price, the idea that they would add deliberate impediments to their app development system to achieve it seems crazy to me. Why not try to capture some of that value instead?

Up until sometime in the 20th century it was common practice for U.S. farmers to burn crops to prevent deflationary over-production. Today they have more sophisticated arrangements to accomplish the same net effect of keeping food production below where it could be, in order to keep the price high enough to make it worth growing and selling at all.

Commodities are weird like that and "productivity enhancements" make the problem of overproduction worse.

(The problem of overproduction is, incidentally, why certain heterodox economists argue that technological increases in productivity must eventually -- or may even already have -- bring about an end to an economy based on the capitalist exchange of commodities. A symptom, it could be argued, is Apple's need to monopolize app retail.)

I agree

They sell high-end hardware and also need a high-end app store with the most and best applications that run on that hardware.

It looks more like a "we-cannot-sit-still-because-competition-might-suddenly-leap-ahead" strategy language.

If it turns out to be a competitive solution then they'll either port it to other platforms or someone else will port it (as in: write another compiler) for them.

Good. So the point is not

Good. So the point is not developer lock in, more like the opposite: make life easier for developers (though I assume that can also lead to lock in...) in order to sell more hardware. This makes sense. I suppose the C# story could be told this way too.

This means that you need to produce a superior language. Takes some balls to try to do this in this day and age.

re superior language

There is the first problem with this "make life easier for developers" hypothesis:

This means that you need to produce a superior language.

They have very clearly not tried to do so. Swift does not significantly depart from or add to existing comparable languages. They make no claim otherwise.

It's strange watching other people try to analyze the app market as anything but the low-value commodity market it is.

That's the catch, surely:

That's the catch, surely: Why produce a new language? What's the point?

Superior is the Sweet Spot between the Abstract and the Concrete

I think I could explain that but those who have tried to follow my incoherent musings will understand enough that I'll settle for the title.

There are lots of superiors, or local maxima, in the design space.

I didn't mean to imply

I didn't mean to imply global maxima. Thinking you can achieve local maxima is hubris enough.

Which is why we need an ecosystem

I find it entirely unlikely that anyone can a-priori engineer a language to fit niches, or broad markets, in the programming field.

Darwinism justifiably deals with selection criteria the best.

Why produce a new language

Why produce a new language? What's the point?

Me, personally? The one I'm working on, I'm working on because I have found a programming style that I quite like but that fits awkwardly at best into any algol-type or functional language.

Apple? Another message in this thread linked to the language inventor describing Swift as initially a skunkworks project and then a project of a larger developer tools group.

AFAICT a developer tools group at Apple serves two masters:

1. The people who manage the business as a buyer and seller of commodities and who have a deep understanding of that perspective. These people are concerned with issues like the spectrum of prices of apps sold; the spectrum of prices offered in the app store; and so forth.

2. The marketing people who manage a "brand" that is presented to developers. These people are concerned with maintaining excitement, the appearance of Progress, etc.

So I think Swift started with a guy who was playing around for personal reasons. Swift is consistent with the commodity strategies. Swift is OK as a marketing tool.

Kind of a boring explanation but it's kind of a boring language, if you ask me. :-)

In Apple's case...

It means some political faction within Apple has found it was advantageous (to themselves) to create Swift. Just as there was (maybe still is) a political faction that's kept Objective C alive all this time (it has no redeeming qualities to anybody else) as they built their careers upon it. This is how it works in big software corps. Internal organizational self-interest. This may or may not correspond to larger company goals, customer desires, developer productivity, etc. As long as the parties involved manage to hang on to their piece of the internal pie, this is what you'll get. Often this appears from the outside as entrenched idiocy, but from the inside from an internal power-struggle point of view it makes perfect sense.

superior to what?

It's clearly superior to Objective-C and JavaScript, at least on par with Java, and in the same ballpark as C#.

They very clearly tried to create a language that plays nice with the ObjC runtime and wasn't too alien for their existing developers. Seems like they nailed those goals, to me.

Can you explain why playing

Can you explain why playing nice with the ObjC runtime requires a new language rather than an implementation of an existing one? I know why this can be the case; I am after specifics.

Compare JVM CL ports to Clojure

Ignoring the unique parts of Clojure, the hosted design is fundamental to the language. You can't implement Common Lisp or Scheme without concerning yourself with the representation of primitives such as strings or numerics. A language may depend on a standard library that depends on features (such as continuations) that the platform simply can't provide.

Objective-C's runtime is sufficiently different from most language runtimes to justify a new language. And yes, Apple wants to control it. No nefarious purpose necessary: They want to be able to evolve it at their own pace and without undue influence from standards bodies or existing stakeholders.

C# and OLE/COM

Isn't this the same reason for C#? In C/C++ the OLE/COM bindings for Windows are complex boilerplate. The original motivation for C# was to hide that boilerplate. This is all related to runtime dynamic object binding. Each language on the left hides the boilerplate needed in C/C++ to interact with the dynamic-object runtime-plugin system:

C# = OLE/COM
Java = Beans
Objective-C = Cocoa/Objective-C
Swift = Cocoa/Objective-C
JavaScript = XUL/XPCOM (using Mozilla's framework)

I can't think of any language that embeds CORBA.

I'm pretty sure that playing

I'm pretty sure that playing nice with the ObjC runtime DOESN'T require a new language. There are many languages that work reasonably well (I'm told) with the ObjC runtime. Particularly notable is Pragmatic Smalltalk (part of EtoileOS), which links a prototype-based Smalltalk to GNUstep. The reason I mention this example is that it's being written by a group that seems rather perfectionist, in the good sense - they wouldn't be doing the project if it wasn't working well. (It's very incomplete, and seems to be more or less stagnant at the moment, but I think that's because they're hobbyists with other jobs, not because of any technical problem.)

Requires a dynamic language

Playing nice with the ObjC runtime (without masses of boilerplate code) requires a dynamic language with a specific runtime for that platform's dynamic object system. This mainly means scripting languages like JavaScript and Python.

Presumably Apple wanted a 'C'-like language of the Java/C# type that played well with the ObjC runtime. They could have used Java, but presumably did not like being subject to Oracle's licensing, or C#, but felt the same way about Microsoft.
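
For what it's worth, here is a minimal, illustrative sketch (my own, assuming current Swift and Foundation spellings, not anything Apple has published) of the kind of ObjC-runtime interop Swift ships with: a bridged Cocoa class is used with no binding boilerplate, and the dynamic runtime underneath is still reachable by name.

import Foundation

// Bridged Cocoa class (NSDateFormatter) used directly, no glue code.
let formatter = DateFormatter()
formatter.dateStyle = .medium
print(formatter.string(from: Date()))

// The dynamic Objective-C runtime is still there underneath: classes and
// selectors can be looked up by name, the way an ObjC caller would see them.
if let cls = NSClassFromString("NSDateFormatter") as? NSObject.Type {
    print(cls.instancesRespond(to: NSSelectorFromString("stringFromDate:")))   // true on Apple platforms
}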

Swift doesn't have to be better...

...than Java or C# or JS or the Google language of the month, or C++ for that matter.

It just has to be better than Objective-C, not a high hurdle to leap.

Needs to beat C/C++ somewhere though

I've been playing a silly but nice game called Arcane Legends from time to time. It runs seamlessly on iOS, Android, PC and Mac, and even in your (Google Chrome) browser. Apart from the game itself, that's a technological marvel.

I have tried to find out how they achieved that but the Internet is low on technical detail. So guesstimating I assume they have a low level OpenGL ES API with C++ on top of that with thin bindings to the several platforms.

Apple might manage to lock in lots of iOS programmers, but the smarter ones will go down the C++/OpenGL route and they will simply make more money.

Apple can go two routes: make it an open language so that more app developers can make a living, or make it an iOS-locked tool. Since they're primarily a hardware company, I guess they'll choose the latter.

It's interesting

Of the three mobile platforms, the one with the crappiest tools (that would be Apple) is generally regarded as producing the best product and best user experience--and the one with the best developer tools (arguably Microsoft) produces phones that nobody wants to buy.

(Android is somewhere in the middle on both counts--I'd rather use Java than ObjC, but C# is a nicer apps language than Java is.)

Obviously, much of the above is opinion and speculation--but it is interesting food for thought.

Cross platform mobile

I have worked on cross-platform mobile projects that did exactly that. I wrote the OpenGL ES and OpenGL rendering engines. As Android and iOS are both *nix based, network and file IO is fairly easy from the low level too. We wrote a platform-specific library to wrap the stuff that was different, but all graphics was in OpenGL, and everything else was cross-platform C++. It worked on desktop Windows/Linux/Mac too.

OpenGL

Didn't see this comment. Apple has started to place a library on top of OpenGL with GLKit. Right now GLKit just abstracts OpenGL and tweaks it towards Apple standards. I wouldn't be shocked if the next step is reversing this, so that the drivers natively support much less in hardware, target GLKit instead, and full OpenGL is rendered using GLKit.

A Mistake

Since Jobs has gone, Apple really seems to be losing direction. Instead of focusing on great products and usability, they seem to be shifting to locking developers into their platform. This will be bad for users. Look at how well it worked for Microsoft. They may do great for a while, they may even dominate the phone platform, like Microsoft dominated the desktop and IBM the mainframe before them, but eventually the next innovative wave of technology will come along, and they will be so busy defending their fortress they will completely miss it until it's too late.

Open source is our only hope. Actually, open standards are enough really. Mandate open standards for all public contracts; I think this view is gaining some traction in Europe at least.

Microsoft

Look at how well it worked for Microsoft.

Over an entire generation they dominated the desktop environment. They have a desktop userbase about 6x the size of everyone else's combined. They are the largest software company in the world, with growing profits. It worked out extremely well for Microsoft regardless of anything that happens next.

but eventually the next innovative wave of technology will come along, and they will be so busy defending their fortress they will completely miss it until it's too late.

OK. So what? At 7%, a dollar earned 40 years from now is worth about $.07 today; at 12% it is worth about $.01 today. Moreover, being open doesn't assure success.

Mandate open standards for all public contracts; I think this view is gaining some traction in Europe at least.

Companies like Apple and Microsoft can very easily meet formal criteria for open standards while not being open in any true meaningful sense. Swift is going to be a great example of this: an open standard that is, for all practical purposes, closed.

Governments can of course write the standards themselves or appoint quasi-governmental institutions to do so. But then they don't get broad industry buy-in. Standards without much corporate involvement are often very popular with public interest groups and governments everywhere, on many topics. In practice it doesn't work well. Government contracting is about satisfying many conflicting interests. Governments can easily cultivate specialized application vendors who satisfy narrow public niches. But because those vendors aren't tied to the commercial mainstream, they often end up being high-cost, low-quality alternatives whose bids are mostly controlled by their political connections. That is to say, you get open standards, but they travel hand in hand with government services that are more expensive to administer than their private alternatives, and with massive corruption of public figures. That leads to a privatization push.

Governments reflect the society they operate in. Unless there is broad societal support for open standards, governments aren't likely to want to push hard enough to create them. Apple is successful because people want the advantages of tight vertical integration and don't particularly care about open standards. I get that you disagree, but that is irrelevant: you are asking the government to act against the public interest by circumventing choices that consumers are making with full understanding.

I have tremendous doubts that Europe will be successful in pushing Apple towards open standards any more than they have been with Microsoft.

Disappointing

It's disappointing, but you may well be right. On the other hand, Google's attempt to commoditise the phone platform, where you can build your own phone from components, may be successful. I can see phone shops liking the ability to differentiate for their customers by building phones with different custom specs, which will of course then run Android. In a way Google is positioning itself to be the Microsoft of the mobile phone (supplying software but not owning the platform). Apple is repeating the IBM play of trying to maintain tight control over everything. Even Apple's attempt to get people to buy into the 'Apple' culture to develop on the platform is reminiscent of the Priesthood from the mainframe days. After all Microsoft outsold Apple on the desktop despite Apple's tight vertical integration - so people can't value it that much, but there are probably too many factors at play to draw meaningful conclusions.

Disappointing

ART is going to be a fine platform for low-service applications, but interesting applications on Android are going to be tightly tied to an entire ecosystem of web services (Google Play) with Google sitting in the center. Google isn't in the OS or hardware business. They can't push through changes to their operating system, so they are creating a two-tier operating system. The bottom tier is a hardware-diverse operating system, likely to lag, aimed more and more at emerging markets; the top tier is a service operating system vertically integrated with their web services. So that's not going to help your problem any. With Android, in a developed market, you are going to have the same sort of thing as Swift. Instead of having to integrate with Cocoa APIs you will have to integrate with Google's APIs.

After all Microsoft outsold Apple on the desktop despite Apple's tight vertical integration - so people can't value it that much

I'm not sure you can look at the market that way. At the price points Apple's laptops compete in (essentially $1k plus), Apple has 85-91% market share. There is just a huge market for lower-priced computers, which Apple doesn't sell into at all. The average Apple laptop sells for 2.7x what the average Windows computer sells for. JavaVM handsets are going to outsell iOS handsets this year; that doesn't mean that people, price aside, prefer JavaVM to iOS.

As for the original dominance, it happened almost immediately. By 1986 IBM/Microsoft had over 80% share; Apple in its history has briefly broken 10% share. 1986 is a good year for discussion because '87 is when IBM releases MCA and Microsoft starts pushing the Microsoft/Intel/Western Digital standard, making x86, not IBM, the platform. In 1986 there isn't really any difference in integration between Apple and IBM; from 1987 forward there is. So certainly Apple lost, but they didn't lose as a result of open vs. closed.

Mostly Makes Sense

That mostly makes sense. However, it could go a different way. Developers will develop Android apps because it is the phone platform with the largest market share. This could eventually dominate and push into the higher end. Betamax lost to VHS despite a superior product because VHS had better availability of the products that viewers wanted to watch. Time and time again the mass market offers such economies of scale that it out-develops the high-end players. For example, expensive low-volume CPUs used to dominate scientific computing, but the momentum behind x86 drove all the speciality chips out of business. Unix machines now use x86 instead of Alpha, MIPS, PA-RISC, etc.

Android long term

No question that's a possible threat. Of course, Apple has been mostly successful in dealing with cheaper, lower-quality, more heavily purchased products for over a generation. Android's spread could be much broader than simply phones. As Google moves down market into all sorts of embedded devices ( http://en.wikipedia.org/wiki/Internet_of_Things ), their market share over the next 20 years might not be 5-6x Apple's but more like 50x Apple's share.

But in the end a few hundred million of the most financially influential people in the world can easily keep a product afloat. It certainly could be an interesting race as to whether Android phones can get cheap enough that quality doesn't matter faster than iOS's advantages and dominance at high price points allow it to establish an entrenched deep ecosystem.

But regardless I don't see how your betting on Google to win the race matters for the discussion about open standards. Apple's objective is to establish that entrenchment not to fight it. Apple has no interest in fighting it out with Asian manufacturers to see who can put generic parts in a box for less money.

Developer Lockin is the focus

I think what you are seeing is the marketing group seeing what is happening in other environments (such as Android and Google's language development) and saying that we need to be on the same bandwagon.

I don't know how much experience you may have had with marketing groups, but all my experience with them is that they think quite differently from normal folk and have no problem jumping on someone else's bandwagon if they think it will give them a boost in the market. They like to keep what is theirs and get more of what belongs to the competition.

Technical superiority is not really their forte, whereas smoke and mirrors are. As one director of a company said to me years ago, it's not about having a superior product, it's about appearing to have a superior product (however that superiority is expressed).

Once you have a customer locked in, it can be quite difficult for them to change to someone else. The customers here being the developers.

If this move is successful for them in gleaning a high developer conversion, when someone then tries to migrate the language to a new environment not controlled by Apple, I would then expect to see the legal teams come out to play.

Apple is not just about hardware, it is about an entire environment, hardware, software and wetware.

irrelevant amounts of money

At Apple's volume, their per transaction costs are significantly lower than you or I could get from some middleman transaction processor. Stripe offers 2.9% + 30 cents per transaction. Meanwhile, Apple takes 30 cents per $1 iTunes song sale. Surely they aren't making zero profit per song...

To a student or one-man shop, a $100 developer fee may be prohibitive, but their apps are effectively irrelevant. If you're not making enough money to recoup the $100 fee, then you're not a business worth caring about. This was true when I watched the Xbox Indie Games store from inside Microsoft. The purpose of the $100 fee was primarily a spam filter, not a significant source of revenue.

http://www.apple.com/about/job-creation/ shows that they have 275,000 registered iOS developers. Apple generates over $1B in revenue per week, so they earn that much money in about 6 hours.

So I still contend that, to Apple, both per-transaction costs and developer fees are as close to rounding errors as you can get.

Now, for developers. An unscientific "I'm feeling lucky" search returns http://www.indeed.com/salary/q-Ios-Developer-l-New-York,-NY.html which shows $130k/yr salary for iOS developers in New York. Therefore, the cost of the developer fee is dwarfed by two man-hours. To say nothing of additional engineers, designers, product management, marketing, etc. I won't mention the cost of hardware, since every one of these developers is going to have a company issued MacBook Pro and an iPhone whether they are writing iOS apps or server-side Linux software.

Is Apple experiencing some app supply problem I haven't heard about?

Straw man. I know for a fact that critical apps, such as Facebook's, require more time and money to develop for iOS than for Android. Even if no other apps existed at all, Apple would prefer that Facebook's new functionality be best and first on iOS.

"make it up in volume"

You sound like the old joke about the car salesman who says he loses money on every sale but he makes it up in volume.

At Apple's volume, their per transaction costs are significantly lower than you or I could get from some middleman transaction processor.

That's somewhat true. Did I say otherwise somewhere? Show me where. Even so, payment processors have no reason to be generous to Apple. They will take the largest share they can take.

Meanwhile, Apple takes 30 cents per $1 iTunes song sale. Surely they aren't making zero profit per song...

Did you look at the figures? My bet is that Apple has a gross profit of about $0.15 out of those 30 cents (perhaps less).

You have made my point exactly here when you say:

To a student or one-man shop, a $100 developer fee may be prohibitive, but their apps are effectively irrelevant. If you're not making enough money to recoup the $100 fee, then you're not a business worth caring about.

And that is one of the main functions of the $100 fee: to discourage you from flooding them with low price apps to just catch-profit-as-catch-can.

The purpose of the $100 fee was primarily a spam filter, not a significant source of revenue.

Please show me where I claimed that developer fees were a significant source of revenue.

So I still contend that, to Apple, both per-transaction costs and developer fees are as close to rounding errors as you can get.

Yes, but you've only gotten there by ignoring or misunderstanding the reasonably well sourced financial data before your eyes.

App store revenue is almost irrelevant

Your analysis is interesting; unfortunately, I think it is ultimately irrelevant.

People buy iPhones and iPads for the apps, and most of Apple's profit comes from iPhones and iPads. Using numbers for 2013, it looks like Apple had profits of ~$37B, App store revenue of ~$10B, and App store profit share (at 30%) of ~$3B.

As the App Store is important for iPhone/iPad sales, I'm sure that Apple would run it at a loss if they had to. I do not think they'll manipulate what apps are available, against what they think is overall good for the user (including branding), just to receive a small amount of extra profit from app sales. They may be working to encourage more expensive apps, to make sure that developers have time to make good apps (and to try to force differentiation); but not just for their own profit.

app store profit margin is vital

According to you (and I believe you), 8% of Apple's total profit comes from the app store. That's not even close to "irrelevant" to shareholders.

And look here. Again, according to the figures you gave:

Apple's app store profit margin is 30% on sales of $10B.

Let's suppose that Apple found a way to raise its profit margin on apps to 34% instead of 30%.

That would be $400M in extra profit for the company. The company's overall profit would rise by 1%.

Got that? Profit from apps is about 8% of the business. If the profit margin on apps rises or falls by X percentage points, the company's overall profit rises or falls by roughly (X/4)%. This is a big deal.
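
A quick back-of-the-envelope check of that sensitivity, written as a few lines of Swift for the sake of this thread (the figures are only the rough 2013 estimates quoted above, so treat it as a sketch, not a source):

let appStoreSales = 10.0            // $B, rough 2013 figure from the parent comment
let totalProfit   = 37.0            // $B, rough 2013 figure from the parent comment
let marginChange  = 4.0             // percentage points, e.g. margin moving from 30% to 34%

let extraProfit = appStoreSales * marginChange / 100    // 0.4, i.e. about $400M
let overallLift = 100 * extraProfit / totalProfit       // about 1.1% of overall profit
print(extraProfit, overallLift)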

Somewhere in Apple there are commodities experts who are sweating the pennies on every app sale and who are really determined to try to hit the sweet spot on the price v. demand curve consistent with the constraint of not alienating developers too badly.

I agree there are analysts

The reason I said it is irrelevant is that you're talking about a 1% increase in profit - for something that presents a big risk to the other 99%.

The App Store is a big selling point. Increasing the quality of the apps there is important to Apple; it drives the exclusive image of the iPhone/iPad, which again sells to customers.

If they can grab 1% more profit without risking that, I'm sure they'd do it - $400M is nothing to sneeze at. And I'm sure there are analysts looking for these opportunities.

However, Apple will not grab 1% more profit there if it places any substantial risk on their core business; and my argument is that anything that makes the end user experience substantially worse is a substantial risk to the core business, and anything that alienates developers (except to benefit end users) is a substantial risk to the core business.

"anything that alienates developers"

Like all the cockamamie hoops one has to jump through with certificates and provisioning profiles? Ha ha, oh well. (vs., say, Android.)

Wow is most of this wrong

First off, most of this is wrong. Start with point (4). Apple's goal is to sell Apple hardware. Apple is thrilled by good quality $0 applications, because good quality free apps sell phones and tablets. Until fairly recently Apple's App Store was run at a loss: the cost of vetting the applications and managing the developers exceeded the 30% cut of total application sales revenue.

Now, over the long haul, Apple is well aware that their 30% commission on software might become the real profit center for phones. Starting a price war, dropping the price of their phones and pushing software prices up by a factor of, say, 4x, is a card they have in their deck, but they haven't played it yet.

Apple's charges for developer kits, non-standard APIs, and non-standard languages all raise the cost barrier to entry for new apps.

Apple charges for developer kits well below what it costs them to support developers. $99 is a token amount, but a token amount large enough that people don't want to lose their developer ID for misconduct. It's enough for them to do the basic checks and a small identity verification. That's it. It certainly is not a revenue source. There are 6 million Apple developers in total, most of whom are using the free tools. If we assume a third are paying $99 (a very high estimate, BTW), that would be .1% of revenues, about 10% of iPod revenue, and Apple considers even the iPod a dying product.

Non-standard APIs, OTOH, are why people buy Apple products. Those APIs tie into the features of their OS and their hardware, which is the whole reason their products exist. They don't want standard APIs, because they oppose the lowest-common-denominator approach that standard APIs force. If Apple users were forced to use software that worked well on crappy hardware and OSes, the entire Apple experience would be gone. The reason people pay 2.7x as much for the average Apple laptop as for the average Windows laptop is that they enjoy the experience.

I'm sitting here enjoying the wonderful fonts at 220 PPI. It's an enjoyable experience because every single piece of software I run is tested at that resolution to render properly. My battery lasts an extra 20-30 minutes because almost all the applications I'm running are designed to function well with coalescing. Whenever I have to run any piece of software that wasn't written by developers enmeshed in Apple culture, I'm immediately reminded of the hundreds of wonderful APIs that Apple makes available: what are normally (for Apple users) simple, automated operations that happen in a completely instinctual way become complex operations that require focus. "Seriously, Windows doesn't have that built in by now? We've had that for 10 years."

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of platform enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms. -- Steve Jobs

As for non-standard languages... non-standard languages keep developers who aren't going to grok Apple culture off the platform. The goal isn't to raise costs, but to drive commitment. Behavior changes belief. Apple wants there to exist "Apple developers" and for end users to want software made by "Apple developers".

This isn't about cost in the sense above.

Hurts Enterprise Products

This is hurting application development. Companies with significant investment in the Apple experience have enterprise apps that specifically target iOS devices. Enterprise customers say they don't want to be tied in to Apple, and have problems with the hardware cost. If the enterprise customer buys into the Apple brand experience it can work, but in general enterprises seem less influenced by Apple brand values than individuals. This leaves developers having to develop a second version of their apps. It would be so much better if developers could have a single source for all platforms that could use native features with an API to detect which features were available on any given platform. In the end developers are more interested in getting their app experience, which is their competitive advantage, consistent across platforms.

Enterprise products

@Keenan

This is hurting application development. Companies with significant investment in the Apple experience have enterprise apps that specifically target iOS devices. Enterprise customers say they don't want to be tied in to Apple, and have problems with the hardware cost.

Well, first off, I would comment that Apple is not an enterprise vendor; they are semi-hostile to the goals of enterprise IT. Arguably the entire BYOD movement, which is what drove iPhone adoption in the enterprise, was a reaction against the poor user experience that was a direct result of the policies that best fit enterprise IT's goals. The interests of the employees and the interests of the employers often conflict. Employers switched away from BlackBerry because employees were willing to keep their iOS "work phones" with them at all times and were often willing to pay for their own work phones and data plans. This shift created value far exceeding the higher hardware costs (especially if the employer wasn't paying for the hardware). Apple, unlike the choice Microsoft made in the 1990s, is mostly siding with the employees over the employers in how they target their platform. But the effects of that choice are often quite a bit more beneficial to employers than having their immediate wants met.

So when you say "enterprises want X" I think you need to be a bit more nuanced. The enterprise has conflicting and complex interests.

Secondly, and more importantly, I don't see any evidence at all that Apple's strategy is hurting enterprise application development. Enterprise application development on mobile is growing far faster than the growth in handsets, and as handset growth levels off we are seeing no leveling off in enterprise application development spending.

If the enterprise customer buys into the Apple brand experience it can work, but in general enterprises seem less influenced by Apple brand values than individuals. This leaves developers having to develop a second version of their apps.

Well yes. And ultimately as Apple and Android pull further away from one another that cost and complexity is likely to increase. Which is going to lead to less porting, and thus further fragmentation of the user bases.

It would be so much better if developers could have a single source for all platforms that could use native features with an API to detect which features were available on any given platform. In the end developers are more interested in getting their app experience, which is their competitive advantage, consistent across platforms.

That's not true at all. Some app developers are interested in consistency, a lowest-common-denominator approach. Others are much more interested in best of breed. Microsoft's enterprise software offerings most certainly take advantage of the unique advantages of the Microsoft OS ecosystem to lower administrative complexity and thereby TCO. iOS enterprise software offerings most certainly take advantage of the limited range of hardware and OSes to lower testing complexity and thus lower development costs for applications. In both cases the enterprise software benefits from inconsistency across platforms.

What you are asking for are cross-platform mobile toolkits. HTML5 / PhoneGap exist. Enterprise software vendors who want consistency across platforms can have it. The reason they are writing in Cocoa, though, is that they want performance more than they want consistency.

Doublethink

You equate too many things that are unconnected. For example, "Some app developers are interested in consistency" and "a lowest common denominator approach", implying that any interest in consistency is somehow inferior to a platform-specific approach.

Common code can select platform features based on the available services on a platform. If the device has a 12MP camera, the photos captured are not going to be downgraded to 8MP. If you want a button the user can press, the button can be rendered by the OS on the device as it likes.
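
To make that concrete, here is a rough, illustrative sketch of that style of feature detection on iOS (assuming UIKit and current Swift spellings): the shared code asks what the platform offers and degrades gracefully, rather than coding to a fixed lowest spec.

import UIKit

// Shared code asks the platform what it offers instead of assuming a fixed spec.
func preferredPhotoSource() -> UIImagePickerController.SourceType {
    if UIImagePickerController.isSourceTypeAvailable(.camera) {
        return .camera        // a camera is present: use it at whatever quality the device provides
    }
    return .photoLibrary      // no camera (simulator, some devices): degrade gracefully
}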

You say you see no evidence of this, yet here I am providing you with evidence and you are not listening. It's very easy not to see things with your eyes closed. Effectively your argument presupposes the outcome the vendors desire, as if it were the only way things could be, ignoring the alternatives the vendors do not want.

Perhaps you can persuade some enterprise clients that they really should buy Apple hardware for all their employees so that they can run wonderful software that takes full advantage of the iOS experience? If you can't then the above opinion is just wishful thinking disconnected from the reality on the ground. Too many enterprises seem locked into Microsoft, and to exchange one vendor lock-in for another seems short-sighted (although I understand Apple's reasons for trying).

I might have more sympathy for your argument if iOS devices were clearly superior to the alternatives, but the WiFi network connectivity on iOS devices is seen as inferior in many enterprises due to a series of unfixable problems, which hurts the introduction of iOS devices and gives corporate IT a point at which they can dig their heels in to oppose introduction. This brings us back to the corporate IT people not wanting to be tied to a single hardware platform (although they don't seem to mind being tied into a single software vendor like Microsoft).

Finally, none of this is speculation: I work in this area, and have been out in the field actually talking to people and listening.

Doublethink

@Keenan

Finally, none of this is speculation: I work in this area, and have been out in the field actually talking to people and listening.

I go to many of the major mobility conventions; I don't see Fry-IT or you anywhere. Which telcos know who you are? Which colos are buying from you or even know you? I'm meeting with Sprint at their NY headquarters on Wednesday about the new access points from Allied along Miami to Jacksonville; who should I ask for on your behalf? So don't try the "I'm in the industry and you are not" line. That may work with the academics; it won't work with me, because I'm really in the business and I know the others in the business. Looking at your site, you are a bit player in health (socialist in the UK) and education.

And no, I don't listen to what mid-level executives say. I listen to what they are willing to pay for. Enterprise IT is lazy like everyone else and would like to do as little work as possible supporting their user base. But that's not the whole enterprise. That unresponsive style of software construction they favor is why BYOD came in, and that's why they are getting pushed out by the business groups. The business groups are the enterprise, and they don't like least common denominator.

Now, if I were going to listen to opinions, I'd be looking at the good quality mobility research that comes from major research firms and business industry groups, not the executives I happen to shoot the breeze with one day. I'm on the executive council for Nemertes and NJTC; those are so-so on this (and still don't agree with you) but far better than just some guy's opinion. IBM and Infoweek tend to be excellent. Michael Saylor of MicroStrategy, who writes many of the most expensive enterprise mobile applications with or for internal IT groups, doesn't agree with you. And finally, the FCC under Obama does a really good job collecting representative information in the public interest. So if I've got to listen to someone, I'm listening to them, not some under-ten-man shop out of the UK.

So moving on from your attempt to take a superior tone.

You say you see no evidence of this, yet here I am providing you with evidence and you are not listening.

You haven't provided evidence. You have provided your opinion, not backed by anything other than your claims about what people want. If you want to provide evidence, go ahead.

Common code can select platform features based on the available services on a platform. If the device has a 12MP camera, the photos captured are not going to be downgraded to 8MP. If you want a button the user can press, the button can be rendered by the OS on the device as it likes.

And that's a perfect example of least-common-denominator thinking: that all cameras are more or less the same except for the pixel count. And that's not the case. The physical camera on the HTC One M8 doesn't handle bright light well, so it is going to wash out colors. On the other hand, the HTC One M8 is going to produce much better depth of field than the iPhone 5S. Or with the Galaxy S5 I can do better in low-light conditions than with either of the other two. If you actually care about doing a good job, then depending on the problem domain you have to compensate for the actual, specific physical choices that each and every single manufacturer made when they built their phone. You can either do that by writing good camera software or by using a platform/device-specific photo library. But in no case is least-common-denominator going to cut it. If it does much of anything, it is just going to produce bad photos, and from there other problems.

Perhaps you can persuade some enterprise clients that they really should buy Apple hardware for all their employees so that they can run wonderful software that takes full advantage of the iOS experience?

Enterprise clients realize huge savings from their employees being willing to buy their own Apple hardware to run the business applications on. And moreover, because it is their own hardware, employees are more careful with it and deal with the higher replacement / repair costs themselves. That's where the savings are. That's been the whole idea of BYOD. For those enterprises that do have to buy the phone, the average enterprise employee cellular plan (minutes and data) in 2013 was $1044 / yr. The difference in price between Android and Apple is a month or so of that. It is not a huge ticket item in the fully loaded cost of an employee.

This brings us back to the corp IT people not wanting to be tied to a single hardware-platform

Of course, all things being equal, enterprise IT would rather not be tied to a single hardware platform. They would much rather be able to nickel-and-dime the hardware vendors. So what? Apple wants to tie them to a hardware platform. Why is it in Apple's interest to give them what they want? That's how this started. Then you switched to how the government should regulate to force this, because of some great societal interest which you can't really name.

As for the WiFi stuff, I'm going to drop it. I'd like to find any CEO who says that his company can't implement an 8-9 figure change in his HR cost structure because his IT group doesn't want to fix their WiFi for whatever issue you are talking about.

Shoot the messenger

You could at least get my name right :-)

Here I am offering genuine reports from the ground, probably similar to many other SMEs (remember, about 50% of GDP comes from SMEs), and the response, instead of addressing the problem, is to try to attack the credibility of the reporter. I am posting my personal views, not those of the company, so I don't feel comfortable discussing clients here.

Your comments about the camera are completely backwards. I can provide an abstract interface to the camera, and the photos captured will vary in quality independently of my software. In this way each platform can differentiate on camera quality, and their user-interface for photography. All I care about is getting the captured image as a jpeg file. Android's Intents are great for this: I just say "get me a photo", the vendor-specific software supplied on the device by the manufacturer takes over (offering the user whatever superior experience it provides) and returns me a string pointing to the captured image file (and if the hardware or software is superior, so will be the quality of the image in the file).

Here's some comments about the wifi: http://www.net.princeton.edu/announcements/ipad-iphoneos32-stops-renewing-lease-keeps-using-IP-address.html
http://www.net.princeton.edu/apple-ios/ios40-requests-DHCP-too-often.html
http://www.net.princeton.edu/apple-ios/ios41-allows-lease-to-expire-keeps-using-IP-address.html
Note: the last one affects up to 6.1 (and maybe newer but they stopped testing with 6.1)

I think you are reading this slightly wrong. Ideally I think languages should be independent of vendors, and that's a separate argument (that post was not in response to this one). On the whole I like Apple, and have lots of Apple kit. I am invested in Apple and want this to succeed, and would like nothing more than for enterprises (and I use that generically, including both public and private sector) to go for Apple enterprise-wide. It would help me a lot, and help sell enterprise products dependent on Apple devices. My real-world problem is the resistance to this, not with this as a concept (as opposed to any philosophical problem with vendor languages).

So are you offering to help, or is it all just talk? I will happily email you details confidentially if you can.

Shoot the messenger

@Keean

and the response, instead of addressing the problem, is to try to attack the credibility of the reporter.

The reporter was making some claims you are not backing down from. You were introducing your expertise as the key point of evidence. I'm happy to take this away from that realm but if you want to keep claiming that you are an expert based on your position then your position matters.

Your comments about the camera are completely backwards. I can provide an abstract interface to the camera, and the photos captured will vary in quality independently of my software. In this way each platform can differentiate on camera quality, and their user-interface for photography. All I care about is getting the captured image as a jpeg file.

Then if you just want lowest common denominator you don't have a problem. Something like:
<input type="file" accept="image/*;capture=camera"> and you can use HTML5. What you are asking for already exists and all the vendors support it.

My real-world problem is the resistance to this, not with this as a concept (as opposed to any philosophical problem with vendor languages). So are you offering to help, or is it all just talk? I will happily email you details confidentially if you can.

If you are talking about 250+ registered devices trying to decide on a mobile strategy and helping them put together a vendor plan. I do offer that service. If the enterprise is fine with an agency relationship the carriers, SaaS vendors... will pay my guys so I can do it free for the enterprise. And as just one of the many benefits of signing with an agent I'll offer to throw in ripping out their whatever-doesn't-work-wifi and replacing it with a new Cisco system (certified and Cisco maintenance for the life of the contract) which is tested for iOS at no charge as part of the agency agreement.

So if there are enterprises which don't know what they want, sure I'm happy to help. But so will hundreds of other vendors who would offer similar packages. So if this is really the holdup, sure I'm happy to do business. But I can't imagine that this was really the holdup. Because I'm sure their current agent (if they have one) would say the same thing.

Credibility

Regarding credibility, I am using my real name, and you could, if you wanted, google my LinkedIn profile, or look at my company's website and see the client list, which includes mobile operators like Orange/EE, Vodafone and O2; see:

http://www.fry-it.com/case-studies/orange-photo-sharing-platform

http://www.fry-it.com/case-studies/vodafone-mobile-camera-to-postcard-service

I have not made any unjustified claims about me or my business, and my credibility is based on publicly available information. I don't see you providing similar transparency.

I disagree with that approach being lowest-common-denominator. I have repeatedly explained why this is not a lowest-common-denominator approach. Either you are being deliberately misleading, or genuinely don't understand. Giving you the benefit of the doubt, I will explain lowest-common-denominator:

LCD is where you code to the 'lowest' specification of all the platforms you want to support. For example you have two devices one with a 2MP camera, and one with an 8MP camera. In a LCD approach you would have to only use 2MP on both devices (the lowest of the two specifications).

If you take a generic approach (handle images as JPEGs) then you do not care what resolution the camera captures, the image will be the highest resolution the camera can capture. If you take a specialised approach you have different software for each platform that takes advantage of unique features, and integrates with the rest of the application via an API. Using these two techniques together you can write software that is not lowest-common-denominator, but takes advantage of the unique features of all supported platforms.
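
As a rough sketch of that split (illustrative Swift only, with the picker presentation and delegate plumbing elided), the cross-platform contract is just "give me a JPEG", and the per-platform adapter is free to use whatever native machinery it likes:

import UIKit

// Cross-platform contract: the rest of the application only ever sees JPEG bytes.
protocol PhotoSource {
    func capturePhoto(completion: @escaping (Data?) -> Void)
}

// iOS adapter: free to use UIImagePickerController, AVFoundation, or anything else native.
// (Presentation and delegate wiring omitted; this only shows the shape of the adapter.)
final class IOSPhotoSource: NSObject, PhotoSource {
    func capturePhoto(completion: @escaping (Data?) -> Void) {
        // ...present the native picker and receive a UIImage from its delegate...
        let captured: UIImage? = nil    // stand-in for the image the delegate would hand back
        completion(captured?.jpegData(compressionQuality: 0.9))
    }
}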

What you are missing is that currently people write two completely separate implementations (say Java for Android, and Objective-C for iOS), so any common code that could be extracted would be an advantage for the developer's bottom line. If the same language could be used on both platforms, most of the core business logic could be shared, with just platform-specific adapters for each phone. The phones actually share more than you would think underneath: for example, both use the SQLite database, both are based on Unix-derived operating systems, both use the BSD network stack (with the same API), etc. None of this in any way reduces the ability of the developer to exploit the unique features of each platform.

As it happens, there are many WiFi companies that don't seem to be able to supply WiFi that works with large numbers of people. I have been to conferences where you just can't get a WiFi connection when all the delegates come into the exhibition hall. However, my point was more that it's fine for you to tell me all this, but when a client does not want to be locked into Apple kit and that is the blocking point on a sale, all this fine talk and good intention counts for nothing. I am reporting this not because I have a problem with iOS networking, but because it has been raised by clients as an objection to buying an iOS-only application.

The best solution would be if everyone wanted to buy iOS devices; then I would only have to develop a single solution. Given that that is not happening, I have to develop multiple versions, in which case I would like to re-use as much code as possible between platforms, whilst still using as many of the unique features of each platform as I can. I need to do this because I know users of each platform do not like to feel they are getting a second-rate service; they want to feel there is perfect integration between their platform and my application. Bearing all this in mind, there is still a significant chunk of common stuff that can be cross-platform, and that would be significantly helped by a single language.

So can we agree I have some relevant real world experience? Can we agree on the definition of lowest-common-denominator? Do you have any practical advice to help the bottom line of B2B developers dealing with fragmentation in the mobile market?

@ Keean The examples you

@ Keean

The examples you give prove some connection with mobility marketing or something like that. They don't have much to do with mobility other than that operators were using them as a gimmick; nor are they bought by small businesses, nor are they enterprise apps at all. They do justify your claim to having worked with photography, but certainly at a lowest-common-denominator level. You are just pulling a picture; you aren't using any platform-specific advantages.

I have not made any unjustified claims about me or my business

You claimed to be knowledgeable about the mobility business because of your ties to it. So that's why I started asking who knows you and why I've never heard of you. Me, I'm one of the scavengers that runs a business eating the scraps the real players aren't interested in. I keep my people employed, but that's it. My level of expertise is listening to the people who have the resources and incentives to do this kind of research properly. Which, if we are going to talk, is where this should start. Not your buddy in IT who heard about a WiFi problem in iOS 3-6 and thus doesn't want to buy Apple.

I disagree with that approach being lowest-common-denominator. I have repeatedly explained why this is not a lowest-common-denominator approach. Either you are being deliberately misleading, or genuinely don't understand.

Or maybe another option is that I understand what you are saying and disagree with you, since in particular we are talking about Apple, and Apple uses lowest-common-denominator to mean not taking advantage of Cocoa-specific features. Incidentally, Samsung uses lowest-common-denominator to mean not taking advantage of Android-specific features and also of Samsung-specific features like Knox.

As it happens there are many WiFi companies that don't seem to be able to supply wifi that works with large numbers of people. I have been to conferences where you just can't get a wifi connection when all the delegates come into the exhibition hall.

I haven't met a single vendor who isn't able to supply wifi that can work with large numbers of people. Apple, incidentally, is a member of the enterprise wifi alliance, as is Samsung. None of the vendors believe their hardware is the problem. Aberdeen, which has actually studied a large number of public deployments and produced best-practices documentation, doesn't believe Apple is the problem. Cisco, who has deployed to stadiums offering wifi to tens of thousands, doesn't believe Apple is the problem.

Now what I have seen are lots of places that do wifi as an afterthought: they don't apply best practices, they don't hire network engineers to build it, they don't keep their systems updated and consequently their systems suck.

So can we agree I have some relevant real world experience?

I agree you have relevant real-world experience as a developer of mobile applications. When it comes to having any idea what's in consumers' or producers' best interests, no, we don't agree. You haven't spent the time to realize that your interests and their interests are not the same thing. You just assume what's best for you is what's best for them.

Can we agree on the definition of lowest-common-denominator?

Nope. I'm going with the big player's definition. Besides the appeal to authority I happen to think they are right. You start by engineering the application to the culture of your end user base. Those different platforms exist because the end users have different preferences. Applications should not in general be seamless across user communities rather they should be tailored to the needs and desires of each one. Where it is possible to provide a lowest common denominator approach, certainly there is no problem with that.

But the reason developers are having to write more applications versions is because the users benefit from having platforms that fit their needs.

Do you have any practical advice to help the bottom line of B2B developers dealing with fragmentation in the mobile market?

I think I've given it multiple times in this thread already.

Unproductive.

Did you read the other case studies on our website? In any case, the question is not my absolute experience but how it compares to yours. You have provided nothing to support your opinion; I have provided plenty. So I leave it up to the reader to decide who has provided the most support for their opinion.

As for my buddy in IT, I don't know where you get that from; I clearly said it was a client, and it was their head of IT. I am not going to name who they are, and I don't believe you would reasonably expect me to. I don't know what your aim is here.

I think you have to separate user-visible features from developer convenience when talking about lowest common denominator. If we can't even agree on something simple like that, I don't think there is going to be much common ground on which to continue the discussion. I really don't think the app user cares whether it uses Cocoa or not, as long as the experience conforms to their expectations of the platform.

I don't think my best interest and the client's are the same, so I have obviously failed to express myself well enough. My best interest is to develop solely on Apple, using Apple's specific tools and ecosystem. My clients tell me they want support for systems other than Apple.

In summary it appears your argument is that you cannot write a cross platform app that is not lowest-common-denominator. You then justify that by defining lowest-common-denominator to be exactly what you need it to be to support your argument. This is a tautology. I think to get any agreement we need to reframe the argument without reference to lowest-common-denominator as we disagree on its definition.

How about: it is possible to write cross-platform applications that offer as good a user experience as single-platform applications, if sufficient effort is spent writing specialised sections of code for each platform, although large amounts of the business logic may be shared between all platforms. Can you agree to that?

re "wow"

Sorry, but no.

First things first. You say:

First off most of this is wrong. Start with point (4). Apple's goal is to sell Apple hardware. Apple is thrilled by good quality $0 applications because good quality free apps sell phones and tablets. Until fairly recently Apple's App store was run at a loss. The cost of vetting the applications and managing the developers exceeded the 30% of the total revenues from application sales.

It's not so simple. As Apple's HW share declines and as their products move inevitably to lower costs (hence lower margins), the "services" component of the business grows in relative importance. Apple is working hard to accelerate that growth.

Have a glance here for one Morgan Stanley analyst's take: appleinsider.com

Specifically, she noted that the iTunes Store is estimated to have about 15 percent operating margin, while the App Store has much higher estimated margins around 46 percent. She believes that the growth of the App Store and its strong margins could add 10 basis points to Apple's total company-wide operating margins for calendar 2014.

As for this:

Apple charges for developer kits [...] is not a revenue source.

(The elided context makes it clear you mean "profit", not "revenue".)

Can you show me where I said otherwise?

Non-standard APIs OTOH are why people buy Apple products. [....] The reason people pay 2.7x as much for the average Apple laptop as the average Windows laptop is because they enjoy the experience.

I would guess that the buying decisions are more diverse and complicated than that but it doesn't matter either way to the arguments I've previously made.

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform.

Yes, Apple likes to tell its third party developers that the make-work assigned to them is Very Important to something vaguely called the "user experience". I'm sure there are true believers, too.

In a free market some of those so-called "sub-standard" apps that Apple keeps at bay might deliver greater use value but, surely, they would do less than "standard" apps to further Apple's branding strategy.

Since developers gain nothing by furthering Apple's branding strategy Apple pretty much has to mystify the make-work requirements by promoting brand fetishism.

The goal isn't to raise costs, but to drive commitment.

I guess in your view Apple is in it for love and passion, money be damned.

Wow thread

@Thomas

It's not so simple. As Apple's HW share declines and as their products move inevitably to lower costs (hence lower margins) the "services" component of the business grows in relative importance.

What evidence is there that Apple's HW share declines? As for moving inevitably to lower costs, computers are an instructive example. In the last decade the relative price difference between an Apple laptop and a Windows laptop has gone from 1.4x to 2.7x. That's a relative increase in costs. On desktops it is even more extreme. It has been 170 days since the MacPro was released, and Apple is still running 5 days behind on orders; that is, they are still selling every MacPro they can make before they finish making them. And that's for a desktop computer whose lowest-priced model starts at $3k.

People will pay for quality. The lowest cost producers don't always win. When I look at the road I see cars other than the Kia Rio, Nissan Versa and the Hyundai Accent. I don't know that it is inevitable that Apple will chase lower prices. It is entirely possible that for many years Apple is able to maintain their margins by avoiding commodification.

As for services (I'm assuming you mean online services), Apple traditionally bundles the services in (mostly) with the cost of the hardware. I can easily see their service costs rising as they offer more free services as a differentiator, but that's not going to be a factor in them trying to make money from application sales.

The elided context makes it clear you mean "profit", not "revenue".

I meant both. Assuming extremely high numbers it represents .1% of Apple revenues. And that .1% of Apple's revenues doesn't cover the costs of providing the services associated with those revenues.

Yes, Apple likes to tell its third party developers that the make-work assigned to them is Very Important to something vaguely called the "user experience". I'm sure there are true believers, too.

The true believers are the public, who are willing to pay substantially higher costs for those user experiences and consistently express higher levels of satisfaction. Again, on computers, recent numbers were 12% YoY growth in Mac sales in an environment of 5% contraction, and that's for a much more expensive product. The reason for that is a high level of user satisfaction. Focusing on a better user experience is not "make-work"; it is what developers should be doing.

In a free market some of those so-called "sub-standard" apps that Apple keeps at bay might deliver greater use value but, surely, they would do less than "standard" apps to further Apple's branding strategy.

It isn't just branding. Apple doesn't dispute that there are costs, especially in the short term. But Apple and Apple's user base understand that there is an element of the tragedy of the commons here. A sub-standard application that takes hold can often, via incumbency effects or network effects, prevent better alternatives from emerging. As a platform, complex tradeoffs have to be made between services undermining the unified total experience of the user on the device via lowest-common-denominator approaches, and the short-term harms caused by not allowing something cross-platform to function.

In the case of Google / Android and Microsoft, the choice is clearly in the direction of lowest common denominator in all but the most extreme cases. In the case of, say, Cisco, Avaya or Apple, the choice is clearly in the direction of careful regulation, even at the cost of losing access to some lowest-common-denominator applications. I trust Cisco to act in the best interests of their platform even when that means I don't get to run a particular piece of software. In the case of Apple, the success of their approach by almost any measure, whether it be satisfaction, average number of hours of usage per year, cost per device... the numbers are so extreme and so clear there isn't any doubt.

Since developers gain nothing by furthering Apple's branding strategy

What developers gain by furthering Apple's branding strategy is a highly satisfied customer base willing to pay vastly more per user for software. For April 2014:

Apple: US$870m software sales, $51.1m per percentage point of market share
Google Play: US$530m, $6.4m per percentage point of market share

Having a customer base willing to buy stuff is a huge advantage to people selling stuff.

I guess in your view Apple is in it for love and passion, money be damned.

No. I think Apple uses love and passion to make money. They accomplish both.

Not going there.

What evidence is there that Apple's HW share declines?

It's been headline news for a long time.

If it was headline news,

If it was headline news, wouldn't pulling out one citation be easy?

point of order

If it was headline news, wouldn't pulling out one citation be easy?

Yes. That's one of the reasons I'm not interested in giving JeffB more extensive uptake. What is your point?

conspicuous consumption

"Hey Google, you have your languages, well here's ours!" ?

Pissing contest bit too mundane

If you evaluate Go, you end up with the observation that it's primarily a well-engineered datacentre language. It might even replace C as a language of choice for web servers, or replace bash, but I doubt it.

Even at Apple they will have recognized that. Go is no real competition for them; Javascript makes more sense. But some managers might have the feeling that they should move in the direction of developing their own language, extrapolating from Google's strategy. Well, those managers would be right even if they didn't understand the argument fully.

Tower of Babel

So every vendor has their own language, and nothing is portable from one platform to another. Any dream of code reuse and a common repository of algorithms is gone... this surely is programming's worst nightmare.

Yet we understand that the semantics of programming is all the same even if the syntax is different. This is where mathematics has it right: although there may initially be competing notations for things like differential calculus, over time the community comes to a consensus and moves towards a standard notation. This is why I think the Haskell way is right.

Edit: I am talking about the process by which the language was created and standardised when saying the Haskell way is right, not the language itself.

Newspeak

Tower of Babel

I find Haskell a dead-on-arrival language. I believe in neither laziness nor purity, and I certainly don't agree with monadic abstractions to enable programming in the large. Neither do I believe in type classes as a software engineering principle.

That is not to say that I don't like lazy pure programming; I just don't think you can define a GPL on top of it. Neither do I mind research or using it as a teaching tool.

We're repeating arguments. I think Haskell will find a niche like, but bigger than, Prolog. And it'll remain a nice research language.

I am not proposing a Tower of Babel. I am proposing, or rather supporting the current, rich ecosystem.

The semantics of programming is all the same even if the syntax is different

Energy, space, time. We already have a lingua franca which circumvents the Tower of Babel: it's called C. Yah. Haskell could replace C, or C++, at some point if it threw away laziness, recognized the concrete machine, and let you inspect the concrete representation of things and use bit twiddling on it to implement very fast hash functions. Maybe they'll add that, or already did; maybe they'll throw away the monadic abstractions; maybe they'll do other stuff. Since they want a GPL they're constantly forced in the direction of the concrete machine anyway.
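
For concreteness, here is a minimal sketch of the kind of bit twiddling on the concrete representation I mean, written in C++ purely as illustration: an FNV-1a hash over a value's raw bytes. (Hashing the padding bytes of arbitrary structs isn't something you'd do in real code; the point is only the direct access to the representation.)

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // FNV-1a over the raw in-memory bytes of a value.
    template <typename T>
    uint64_t fnv1a_bits(const T& value) {
        unsigned char bytes[sizeof(T)];
        std::memcpy(bytes, &value, sizeof(T));   // inspect the concrete representation
        uint64_t h = 14695981039346656037ull;    // FNV-1a 64-bit offset basis
        for (unsigned char b : bytes) {
            h ^= b;
            h *= 1099511628211ull;               // FNV-1a 64-bit prime
        }
        return h;
    }

    int main() {
        int x = 42;
        std::cout << fnv1a_bits(x) << "\n";
    }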

Whatever. Everybody is constantly evaluating everything and my bet is that most people come to the same conclusion over time. If I am right, we'll see a constant proliferation of languages over the coming years trying to approximate their local optima.

Let's fight it out in another ten years.

(Why are you implementing your stuff in C++ anyway?)

(Hah! Bellyfeel, Blackwhite, Crimethink, Duckspeak, Goodsex, Ownlife, and Unperson. Wonder what I am (not) guilty of? Probably everything.)

I Like Haskell (but not exclusively)

I like Haskell, but that does not mean I don't like other languages, and I am not arguing for Haskell as a language (as this thread is about the politics); I am arguing for the way Haskell as a language was created, and supporting the reasons for it. I am not suggesting there should be a single language, and I like domain-specific languages. There is room for both C++ and Haskell. I am not sure we need both Haskell and ML though ;-) I generally prefer Ada to C++; however, I have had (brief) dealings with the standards committees of both C++ and Ada, and neither was interested in the kind of generic programming I wanted. C++ won't break backwards compatibility (this was to do with overloading and specialisation), and Ada seems to be focused on getting more OO features and seems uninterested in improving support for generics (this was to do with generic dependencies). I have since found workarounds for both these problems.

In answer to your question regarding why C++: because it has the widest compiler support on the most platforms (other than C), there are no license restrictions (the free Ada compilers are all restricted in one way or another), and programming in C without polymorphism and generic containers (like vector, set, map etc.) is too painful. Using templates I can get Haskell type-class and polymorphic-style behaviour, and using inheritance and virtual functions I can get algebraic datatypes.
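
To make that concrete, here is a minimal sketch of both techniques (the type names are mine, invented for illustration, not taken from any real project): a Show "type class" as a template that types opt into by specialisation, and a Shape "algebraic datatype" as a base class where virtual dispatch stands in for pattern matching.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // 1. Type-class-style behaviour via templates: types opt into Show by
    //    providing a specialisation; resolution happens at compile time.
    template <typename T>
    struct Show;                                  // no default "instance"

    template <>
    struct Show<int> {
        static std::string show(int x) { return std::to_string(x); }
    };

    template <typename T>
    void print(const T& x) { std::cout << Show<T>::show(x) << "\n"; }

    // 2. Algebraic-datatype-style behaviour via inheritance and virtual
    //    functions: Shape ~ Circle | Square, with dynamic dispatch in
    //    place of pattern matching.
    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };
    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265358979 * r * r; }
    };
    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    int main() {
        print(42);                                // uses Show<int>
        std::vector<std::unique_ptr<Shape>> shapes;
        shapes.emplace_back(new Circle(1.0));
        shapes.emplace_back(new Square(2.0));
        for (const auto& sh : shapes) std::cout << sh->area() << "\n";
    }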

Your comments are very strange

I find Haskell a dead-on-arrival language. I believe in neither laziness nor purity, and I certainly don't agree with monadic abstractions to enable programming in the large. Neither do I believe in type classes as a software engineering principle.

I think laziness is a mistake, too, but I have to say I find your rhetorical strategy utterly bizarre. You can't sensibly say Haskell won't work for high performance computing, when there are numerous people building high-performance systems in Haskell. For example, McNettle, the world's fastest SDN implementation, is written in Haskell.

World's fastest X often

World's fastest X often depends on the competition more than it does on qualities of the language. Is Haskell seriously being pushed for high performance, with benchmarks to back that up?

In this case, seems to be yes

If memory serves, the McNettle work appeared in SIGCOMM, where they surely do not care about Haskell, but do care about SDN.

The key point is that an awful lot of effort has been poured into making GHC scale out to a lot of cores, with a lot of work especially on removing concurrency bottlenecks in both GC and IO.

Energy Bill

It may pan out for GHC, but taking on Go, which is primarily engineered as a closer-to-the-machine language partly aimed at keeping Google's energy bill low, might, in the long run, mean they'll lose.

The only thing going for them is the observation that if you keep processes pure you can optimize better for multicore at the cost of copying sometimes.

But, my bet: they don't use laziness but make everything strict; the programs are too small to be bitten by the monadic abstraction; instead of abstract data structures they are forced to C-like bit-level representations, somewhat peculiar and slow to handle in an FP style; the garbage collector needs fine tuning; and they hardly use (lazy) higher-order programming in this setting.

So it works out. But they have too much against them, since all of the above is handled better, or doesn't need to be handled, in Go or C, at the cost of more development time. (Or possibly, the cost of rather silly engineers getting the abstractions wrong since they have more choices.)

Maybe Go doesn't get GC or data migration right in time; that would swing the contest in GHC's favor. But the rest is against them.

Higher-Order Function Migration

The Clean group, around twenty years ago, experimented with seamlessly migrating higher order functions, computations, and data around clusters.

Somewhere, for research purposes, that would make more sense, since it might enable new distributed computational models beyond stream-processing data.

Though I've got no idea why you'd want to migrate a HOF, but they're smart enough to think of something, I guess. Administration and deployment could be interesting, though.

occam

The occam-π crowd (part of the old Transputer community) also experimented with "mobile processes" that allowed computations to be migrated. I think they found some uses for it (see http://www.cs.kent.ac.uk/research/groups/plas/projects/cosmos.html), although I can't say whether there was a genuinely compelling use case or "killer app".

I think relational databases

I think relational databases are a pretty good killer app. Being able to write a local query and have it be migrated closer to the data is compelling. SQL is just a restricted version of this general model.
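
A rough sketch of the idea, with invented names: the query is a small, serialisable value built locally and evaluated where the data lives, so only matching rows cross the wire. SQL text plays this role in practice; migrating arbitrary closures is the general case.

    #include <iostream>
    #include <string>
    #include <vector>

    struct Row { std::string name; int amount; };

    // Restricted, data-only representation of a query (SQL-like).
    struct Query { int minAmount; };

    // Runs "near the data": only matching rows would cross the wire.
    std::vector<Row> runNearData(const std::vector<Row>& table, const Query& q) {
        std::vector<Row> out;
        for (const auto& r : table)
            if (r.amount >= q.minAmount) out.push_back(r);
        return out;
    }

    int main() {
        std::vector<Row> remoteTable = {{"a", 5}, {"b", 50}, {"c", 500}};
        Query q{100};                                        // built locally...
        for (const auto& r : runNearData(remoteTable, q))    // ...evaluated "remotely"
            std::cout << r.name << " " << r.amount << "\n";
    }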

Indeed they are

Your comments are very strange

Yah. As an exercise in humor I do try to present my arguments diametrically opposed to academic peer-pressured narcissism.

You can't sensibly say Haskell won't work for high performance computing

I never made that claim. But, now you mention it, I am willing to support it.

numerous people building high-performance systems in Haskell

Reversal of an implication. Applying A to B doesn't imply that A is the better language. It's interesting as a research experiment but, yah, they're probably building a server despite the language. The higher abstraction probably buys them development time, which is something too. But the same design in Go, or C, likely beats them.

McNettle, the world's fastest SDN implementation, is written in Haskell

Cherry-picking. As I said, it's an okay language for academic exercises.

Apple is notorious for pulling rugs out from under developers

Processor changes from 68k to PowerPC to Intel.

The migration from the pre-Darwin MacOS to OSX.

These changes were beneficial to end users, and probably good for developers in the long run, but in the short term all very disruptive.

(MS does these sorts of things too, though less drastically.)

ObjC to Swift could easily be a similar thing, especially since Swift is a new language that few developers have experience with. ObjC, for all its warts, did have a strong developer community, even if it is nowadays only used on two platforms (iOS and Mac).

ObjC replacement

Could also be that they decided that in the long run they need to ditch ObjC (in MacOS).

Pushing on other languages

Already things like haxe are talking about how they should better support non-nullable types, etc. Which is good and bad. Bad because it pisses me off that people only talk about it after they think Apple validated it or something.

Conclusions

1. We managed to have a fairly civil discussion on hot button issues.

2. We, as a community, don't have a clear, shared and well-founded picture of the pressures and incentives of language design in industry.

A million vague motives

You're not going to abstract over a million vague motives of thousands of people that well anyway.

There are a lot of notions which haven't been touched upon yet in this thread, like money and programmers' aptitude, motivation and skills.

Almost everything is vague which is a good thing for my argument: We need a rich ecosystem (and let Darwinism take care of the rest.)

(Not that I have to do a lot about it. A million vague notions will make sure that there is a rich ecosystem anyway.)

Well...

1) This community (in which I drop in from time-to-time) has consistently-enforced policies and standards which keep flamewars from spreading too far.

2) I'm not sure anybody does; other than to note that many of the things that motivate the academic posters here (many of whom are grad students or researchers in PLT) are Greek to most industrial programmers. Most of paid programming isn't about solving deep technical problems, but about gluing stuff together.

At any rate, industrial programming languages seldom come from dedicated language shops. Out of all the post-Java "app" languages (languages targeted toward large high-performance apps such as games, as opposed to things like business logic, websites, numerics, enterprise glue, or other such things), I tend to like Walter Bright's D the best. Despite its technical strengths (IMHO), it doesn't see much adoption.

Most successful (in terms of use) PLs come from system vendors. A few come bundled with applications, and occasionally escape their original purpose to be used elsewhere. The open-source community has done well producing scripting languages, far less well producing languages requiring a more complete toolchain and stack. And all major computing environments support C/C++ as a fallback, with Windows and Unix derivatives comprising the surviving operating systems.

conclusions

3. highly nested comments do not seem to dissuade some kinds of discussions, even when 90% of horizontal space is indentation

stateless reasoning

Sort of popping up a response to that thread. This is a subjective interpretation of programming history:

I think the classical approach had an idea of building programs as pretty straightforward state machines. One would never write, in this style, a side-effectful program bigger than one's head. A program of 20K lines that is one big, hairy, state machine is probably too big.

You would use static reasoning to reason about your state machine (such as, e.g., making time or space complexity arguments; or establishing termination under certain conditions).

If you need something bigger, you look for ways to compose pieces of that sort. There's a constraint on what kinds of compositions are allowed. Ideally, the composition glue must (also) be simple state-machine programs no bigger than your head. And there's an independence requirement that the composition glue doesn't in any way alter the behavior of the subsumed state machines. The subsumed machines know nothing of each other. They only know what they knew to begin with: that they have (simple) internal state, input, output, and transition rules.

The unix pipeline style is a particular example of this pattern but by no means the only one. Composed machines don't necessarily have to be separate processes. Interfaces must be based on some abstract sense of state machine i/o but they don't have to be byte streams. Compositions don't have to be linear.
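
A minimal sketch of my reading of that pattern (the names are invented, and this is only an illustration, not anyone's actual design): each machine owns its own simple state and a transition rule from one input to one output, and the only composition glue is feeding one machine's output to the next; the glue never touches their internals.

    #include <cctype>
    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // A state machine: simple internal state, input, output, transition rule.
    struct Machine {
        virtual std::string step(const std::string& input) = 0;
        virtual ~Machine() = default;
    };

    // Example machine: numbers each line it sees (its state is a counter).
    struct LineNumberer : Machine {
        int n = 0;
        std::string step(const std::string& input) override {
            return std::to_string(++n) + ": " + input;
        }
    };

    // Example machine: upper-cases its input (a stateless transition rule).
    struct Upcase : Machine {
        std::string step(const std::string& input) override {
            std::string out = input;
            for (char& c : out) c = std::toupper(static_cast<unsigned char>(c));
            return out;
        }
    };

    // Linear pipeline glue, analogous to a unix pipe; other composition
    // shapes are possible, but the glue stays this dumb and never alters
    // the behaviour of the subsumed machines.
    struct Pipeline : Machine {
        std::vector<std::unique_ptr<Machine>> stages;
        std::string step(const std::string& input) override {
            std::string v = input;
            for (auto& m : stages) v = m->step(v);
            return v;
        }
    };

    int main() {
        Pipeline p;
        p.stages.emplace_back(new Upcase);
        p.stages.emplace_back(new LineNumberer);
        std::cout << p.step("hello") << "\n";   // "1: HELLO"
        std::cout << p.step("world") << "\n";   // "2: WORLD"
    }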

In my personal experience I associate the loss of this good style with the time period when OO started to become popular and when one team after another copy-cat re-implemented essentially spaghetti-code "GUI" frameworks and apps.

Which brings us to the modern web browser....

thread won't die

even when people point out that swift has no clothes (or at least skimpy stuff) i figure it won't be enough to make people yearn for a REAL language.

so Swift will do more harm than good because: (1) apple's obj-c environment was so bad anything looks better; (2) the fan boys will assume it is god's gift; (3) love it or leave your job since in 5 years everything done on ios will be swift by default; (4) so lots of people's brains will be poisoned into thinking this is what Good Languages look like.