Experiment

Can we have a polite, orderly and short discussion of the political, social, institutional aspects of the Swift language? This thread is the place for it.

PL designers will be in

PL designers will be in vogue again and grab some of the vast attention currently going to the deep learning people... for a couple of days. There will be more competition in the PL space, hopefully driving more innovation, and more employment opportunities for said PL designers.

Apple seriously wants their ecosystem to be special, to distinguish itself from Android, lock in developers, and prevent code from being shared between the two. It will be interesting to see what Google does, as Java is long in the tooth and neither Dart nor Go is suitable as an app language. Maybe they'll adopt mono-C# :) JavaScript/web is too inexpressive to matter much in the app space.

Can languages really lock in

Can languages really lock in developers? Can't languages be compiled to any target? Discuss.

Who is going to bother with

Who is going to bother with Swift dev support (more than just an output target) for their platform, though? Well, I can see Google maybe doing it, especially given that this puts them in a spot right now.

Mono + Linux

Although you can run Mono on Linux, it is only the core .NET platform, without all the useful libraries. As such it only offers portability in name but not in practice. It's like this: all the Linux Mono apps will run on Windows, but not the other way around.

Platform already does the lock in

People writing iOS apps are already locked in by Cocoa, etc. Besides, Objective-C might as well be Apple's own private language anyway. I don't think that a new language is creating any new/additional/stronger lock-in.

Multi-platform applications

For some reason some developers get away with targeting multiple platforms. Honestly, I have no idea how they do that.

C or C++?

C or C++?

Yah

I guesstimated C++ for a particular software house and looking at their hiring positions that seems to be correct.

cheap labor negates lock in

Swift is designed for use by cheap labor. It is designed in a way that I think most working programmers could pick it up well enough to start using it almost immediately. The quantity of code in a typical "app" is not very large. "Lock-in" can't function here.

By contrast, a commitment to use (say) Haskell or Lisp can create lock-in because there is a scarcity of qualified labor competent in those languages available to some sectors and because those languages are meant to be used in larger systems. If your firm is relying on a very large lisp program, for example, you are pretty much locked in to an expensive labor market.

Apple's quirky language, in combination with other factors like developer fees and non-standard APIs -- these can all add to the marginal cost of writing for or porting to Apple's platforms. See my other comment about "Apple seeks to raise development costs".

api licensing

I think de-facto lock-in would not be difficult. There is already recent legal precedent for copyrighting of APIs. And what is a standard set of libraries, if not an API? And who knows what will happen if the programming language itself becomes considered a creative design suitable for copyright (certainly, one could develop a convincing argument for that).

Apple hasn't yet revealed its licensing plans for Swift.

api licensing

Let's not count that as a precedent yet. There is still a lot of legal fighting to be undertaken. If it does eventually hit SCOTUS, it may well be another kick in the head for CAFC.

The language itself can be copyrighted. There are a number of those already. But as has been found previously, copyrighting a language limits your user base to fairly specific niche markets.

For more general-use languages, copyrighting will only work if your user base is large enough for you to have the upper hand. Feedback I have seen (in other fora) indicates a marked increase in moves away from Java and the JVM because of the legal fight over whether or not an API is copyrightable. This migration may only be small yet, but if the right people get involved then entire companies can be changed as to the environment that will be used for development.

Lock in

I think it will be very locked in in practice. Apple will probably open-source the basics of Swift, the same as Objective-C is part of GCC. At the same time, though, Apple is tying the language tightly to Xcode for development, for example the playgrounds. They are going to tie the language tightly to the Cocoa libraries: "seamless access to existing Cocoa frameworks". Most large Swift applications will probably have some Obj-C mixed in, with even greater platform dependencies.

I suspect that GNUstep (http://www.gnustep.org) will port Swift over, but at best the port will handle code something like a decade behind the proprietary version and not support much new code in practice. Fine for academia (which is what LtU focuses on), but commercial Swift will never move.

Dart?

Why isn't Dart suitable as an app language? It seems to be the existing language most similar to Swift. Is it a matter of libraries and focus? Isn't their focus large-scale web apps?

Dart comes from the

Dart comes from the JavaScript/web perspective, and it would have to grow another app-oriented ecosystem that wouldn't resemble its web ecosystem. Relatedly, Microsoft tried to make JavaScript an app language in WinRT, but it hasn't really caught on. So it is possible, but it's not clear that it is the best approach.

Robert Harper causes a storm

Robert Harper caused a little storm on Twitter, saying that "real" PL experts should be hired. He was accused of applying an ad hominem (which I don't think it is) and apologized. I think his claim has deeper truth than that, though. Whether language design is driven by attempts to push the envelope and advance the state of the art or by other considerations is surely an issue. I think the emphasis should be put on what Harper calls "remaking the old mistakes" rather than on the less than optimal use of the phrase "real experts".

What mistakes?

I'd be curious to hear what he believes the actual mistakes are, so that a reasonable design discussion can be had. One problem with Twitter is that complete agreement requires only a single bit: click "favorite". Meanwhile, even the smallest disagreement often requires much more than 140 characters to communicate.

I'd be surprised if he

I'd be surprised if he didn't think the whole OO thing was a mistake. I can't find too many novice mistakes with the system.

Can't undo that "mistake" now

The entire Apple software stack depends on it.

Vendor languages are a problem

I am not a fan of vendor-specific languages. I have problems with the idea that universities could teach these languages and become part of the marketing machine of a corporation. (Public money should not be used to favour one competitor over another - and corporate funding of education seems wrong, as learning should be an end in itself, in the tradition of universities.) I am certainly not against vendors running their own courses on their languages using their own resources.

It also results in wasted effort as C#/Java/Swift are all looking at the same problem domain.

When academia had similar problems with the proliferation of lazy pure functional language prototypes, the solution was to all get together as a committee and write one language for everyone (Haskell). Is it time for industry to do the same?

Finally, I am not a fan of languages designed by committee, as they tend to be big and unwieldy (Haskell, Ada, etc... actually Haskell is okay, as they only accepted the really well-established common bits; is it the exception that proves the rule?). Some of the neatest languages have been the work of a few people: ML, C, Pascal.

So what we need is a small team of people who are platform / OS neutral to produce a Java/C#/Swift type language, and release it as an open-standard and persuade all the OS and system vendors to adopt it.

Ada was not designed by a committee

Ada '83 was designed by Jean Ichbiah, and Ada '95 was designed by Tucker Taft. Of course they had input from other people as well. The Steelman requirements to which Ada (and three other programming languages) had to conform were, on the other hand, created by a committee.

I was thinking of the requirements

It seems I was thinking of the requirements...

You made me choke in my coffee

When academia had similar problems with the proliferation of lazy pure functional language prototypes, the solution was to all get together as a committee and write one language for everyone (Haskell).

Yah. Okay.

So what we need is a small team of people who are platform / OS neutral to produce a Java/C#/Swift type language, and release it as an open-standard and persuade all the OS and system vendors to adopt it.

Well. I completely disagree. What we need is a healthy ecosystem.

Why?

So what we need is a small team of people who are platform / OS neutral to produce a Java/C#/Swift type language, and release it as an open-standard and persuade all the OS and system vendors to adopt it.

Java is an open standard and platform independent. That being said, why would OS and system vendors (mostly) want to do that? What's in it for them? Apple's programming languages exist to help sell Apple hardware. Microsoft programming languages exist to sell Microsoft OSes and server products. IBM vertical programming languages exist to sell expensive IBM solutions, especially consulting and management solutions. Etc...

Java controlled by license

Java is not open. See Dalvik and Google implementing not-Java.

Languages do not exist to promote vendors products although vendors may wish they did.

I do not want to have to reimplement my software in different languages for different platforms. It's hard enough dealing with platform differences. In a free market for hardware, software needs to run on all platforms. In a free market for software, all hardware needs to run it. I think trying to control both is close to anticompetitive practice, and probably should be regulated. Like in the UK, you cannot own both the trains and the tracks, or supply electricity and own the wires.

persuade

@Keenan

You had commented about "persuade all the OS and system vendors to adopt it". The advantage you give above is that it assists in the commodification of hardware and OSes. That's precisely right. Being commodified is something that vendors try to avoid, not something they try to promote, since it results in competition mainly on price. Platform vendors want software to be a differentiator; they don't want a situation where all hardware runs all software. You see exactly that situation in the Wintel market, and the result was a complete collapse of margins, leading to a diminution of quality, leading to a collapse in price, i.e. further loss of profits. That's a disaster from the platform makers' perspective. The advantage you are presenting is a disadvantage for them.

If you are Google, that is, if the goal is to sell advertising, not hardware and OSes, then commoditizing the platform is a good thing. But that's precisely because their interests lie in collapsing the margins on their platform. Now, in terms of regulation, it may or may not be in the government's interest to regulate platforms and open systems. I tend to think the exact opposite of you. Regulations that require platform X's software to run on platform Y require either that X implement all of Y's standards or that X not innovate. Obviously you can't have a situation with more than a very small number of vendors who are each required by law to implement each other's innovations, unless the innovations are regulated out of existence (the latter case).

I think platform advancements are vastly more important than software portability. In a situation where platforms are highly differentiated, all that happens is that software has to undergo a complex migration process to cross platforms. Which means that within each platform you have a "free market" (as you were using the term), and then another "free market" for platforms, where the software available becomes part of the decision process. That IMHO is a net positive, because there are tradeoffs between platforms. A good example of this is the collapse of the middle of the market in tablets, where we now have two highly differentiated platforms:

Android offering a rapidly advancing set of functionality for highly price sensitive customers aimed primarily at video playback (i.e. an Android tablet is mostly a handheld television that also runs some software)

Apple iPad offering a rapidly advancing set of functionality for semi-price insensitive customers aimed primarily at replacing laptops (i.e. an Apple iPad is mostly a low end laptop with excellent battery life optimized for short interactions)

Taking that example, why should we expect those two customer bases to even want mostly the same software? Most software is either going to be targeted towards people who want high service levels (semi-price-insensitive) or good pricing (price-sensitive). In a free market we wouldn't want, and shouldn't expect, much crossover.

Funny, I use my iPad more

Funny, I use my iPad more than my laptop now, which I still use a lot. Replace "short term" interactions with browsing, and you might have something accurate. It's nothing like a laptop unless one is crazy enough to throw a keyboard on it.

Is there a tablet that is even cheaper than the iPad for the same feature set? Being semi-price-insensitive has nothing to do with it when everything else is junk.

Use iPad more

@Sean

Replace "short term" interactions with browsing, and you might have something accurate. It's nothing like a laptop unless one is crazy enough to throw a keyboard on it.

Usage numbers show that people are replacing computer interactions with iPad interactions. Usage is decreasing for laptops and desktops because of replacement. Browsing is obviously a terrific example of something people can do on both devices. Video conferencing, especially distributing meeting documents and in-meeting annotations, is another. Photo organization...

Is there a tablet that is even cheaper than the iPad for the same feature set?

If you mean hardware, Google's Nexus tablets (http://www.google.com/intl/all/nexus/tablets/) are a good example. But in general, no. That was my point: Android tablets are thriving at the $75-150 price point with a much lower feature set. The tablet market at this point has effectively segmented into two distinct products aimed at two distinct user bases with strikingly different preferences.

Being semi-price-insensitive has nothing to do with it when everything else is junk.

Saying I'm willing to pay much more for a superior experience is being semi price insensitive.

Usage decreases for laptops

Usage decreases for laptops because what many were doing with laptops was more appropriate for tablets. They were not laptop interactions in the first place; we just had nothing better to do them with before. Turns out we are mostly passive, consumption-based beings, which is fine.

Poor people remain poor because they are forced to buy junk that has to be replaced more often. In the long run they are actually losing money, so buying something nice and sturdy is actually more economical.

You make it sound hard.

But it's easy: everyone just develops and shares the language between vendors, and puts the cool stuff in libraries.

I am not asking for lowest common denominator, I am asking that business logic be shared between platforms.

Just a common language. No need for identical APIs, but of course stuff that everyone has, like basic file access, would be handy if it could be standardised.

A further thought: if software vendors' interests are in a single application source base (which I think they obviously are), then which hardware vendor aligns best with their interests? It would seem to be Google. In that case Apple has a window of opportunity to lock the market in before Android becomes good enough for the average user.

I don't get the app divergence thing. Enterprises want mobile access to their systems, so the enterprise determines the required functionality and it gets implemented on whatever hardware, or, if BYOD, on all popular hardware.

Most people regard fragmentation of Android as a bad thing, so fragmentation of the mobile market is the same, a bad thing. I don't think you can say one is bad and not the other.

JavaScript?

Isn't JavaScript our lowest common denominator language? It runs pretty much everywhere and people use it *because* it runs everywhere, despite its flaws.

Doesn't have full access

The comment about making it sound hard was in reference to the argument that somehow improving platform features makes cross-platform compatibility impossible. This is of course a false dichotomy; one does not preclude the other.

JavaScript does not give access to all the platform and OS features, and does not allow a 'native-app' feel. I don't want the lowest common denominator; I want full access to platform features and cross-platform development from a single language. So you can use an iOS date picker on iOS, and an Android one on Android. However, when you consider that JavaScript development is cheaper than native, I do ask myself why bother with native when most apps do not need any native platform features. Many apps are now actually hybrid apps with a minimal platform-native shell, and content in HTML/JavaScript/CSS, using an embedded WebView component. Apple makes this hard, though, by not allowing easy calls from JavaScript through to native code. There are some hacky solutions using URLs that are used by various products, but just allowing native code to register JavaScript callbacks like Android does would be much neater.
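
For what it's worth, here is a minimal sketch of that URL-interception trick against WKWebView's navigation delegate; the bridge:// scheme and the class name are made up for the example:

import WebKit

final class HybridShell: NSObject, WKNavigationDelegate {
    // Treat navigations to a made-up custom scheme, e.g. bridge://log?msg=hi,
    // as messages from the page's JavaScript rather than real page loads.
    func webView(_ webView: WKWebView,
                 decidePolicyFor navigationAction: WKNavigationAction,
                 decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
        if let url = navigationAction.request.url, url.scheme == "bridge" {
            print("JS -> native:", url)   // dispatch to native code here
            decisionHandler(.cancel)      // swallow the fake navigation
        } else {
            decisionHandler(.allow)
        }
    }
}
// On the page side: window.location.href = "bridge://log?msg=hi"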

Another alternative to JavaScript is C++, with a wrapper for iOS and the NDK on Android. Using Unix networking and file access and OpenGL ES graphics, you can get some way towards cross-platform mobile with good performance.

Mostly Android and iOS versions are implemented separately following the individual platform UX guidelines, from a single master design generated by iterating the UX with the client during requirements capture.

Dynamic Binding

It seems clear that C# and Swift both try to embed the platform's dynamic binding into the language so that the plumbing of such things can be hidden from the language user. There are dynamic scripting languages like JavaScript and Python that also hide this kind of dynamic plumbing and can be used as part of an app on iOS and Android via an embedded interpreter.

There does seem to be a lack of an independent compiled language with support for dynamic binding of objects. However even if there was an open language similar to C# and Swift, unless the platform vendors support it, it seems unlikely to get any traction.
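
For illustration, a minimal sketch of the dynamic binding Swift itself exposes via the Objective-C runtime (class and property names are made up); the `@objc dynamic` member is dispatched through the runtime, which is what makes the block-based KVO below work:

import Foundation

// `dynamic` requests Objective-C style message dispatch, so the member is bound
// at run time (and can be observed or swizzled) rather than statically.
class Model: NSObject {
    @objc dynamic var title = "untitled"
}

let m = Model()
let token = m.observe(\.title) { model, _ in
    print("title is now", model.title)
}
m.title = "renamed"   // the observer fires because the setter is dispatched dynamically
_ = token             // keep the observation alive for the example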

Regulation and Free Markets

You clearly misunderstand the regulation aspect. Of course you would not regulate that vendors implement each other's innovations; that would be ridiculous.

The regulation would be that you cannot own a controlling interest in both a hardware platform and a programming language (or something like this). Apple could easily comply by getting Swift ISO standardised, and relinquishing control to the standardisation body. I am sure some suitable regulation could be developed by regulatory experts that is much better than my amateur musings.

I also don't see how your use of the free-market concept works. In a free market, the user who has any hardware device is free to choose any software they like, unconstrained by artificial restrictions on that market, whether they be imposed by government or vendor (Apple). Also, the user of any given software is free to choose any hardware (without the software license preventing a change of platform). Your use of the terms just does not make sense. It's like saying "Freedom is slavery". In your case, saying software choice is restricted by your choice of hardware platform, or your hardware platform is restricted by your choice of software = freedom? Traditionally hardware choice is driven by software, so I can see the attraction of making certain software only available for your platform, and it is clearly in the vendor's interest, but it is also clearly not in the interest of the end user, nor the independent software developer.

regulation

The regulation would be that you cannot own a controlling interest in both a hardware platform and a programming language (or something like this). Apple could easily comply by getting Swift ISO standardised, and relinquishing control to the standardisation body. I am sure some suitable regulation could be developed by regulatory experts that is much better than my amateur musings.

Objective-C isn't owned by Apple. Objective-C is incorporated into both LLVM and GCC. It is Cocoa that Apple owns. I'd expect the licensing is going to be the same for Swift. Apple isn't going to own Swift de jure, just de facto.

In your case, saying software choice is restricted by your choice of hardware platform, or your hardware platform is restricted by your choice of software = freedom?

Well, yes. You choose the function, then the software you want to run, and then choose appropriate hardware to run it. The free market is allowing the person to choose freely from the available products. Software designed to run on a supercomputing cluster cannot be made to run on a coffee maker. And in the reverse direction, a supercomputing cluster has no way to boil water. PBX software is going to require hardware that can hear and generate channel-associated signaling. There is no way to avoid a tie between function, software, and hardware.

and it is clearly in the vendors interest, but it is also clearly not in the interest of the end user, nor the independent software developer.

I don't think that's clear at all. End users seem to benefit quite strongly from more heavily integrated systems and, when given the choice, mostly choose vertical integration over de-integrated component systems. There is a reason for that.

As for it not being in the interests of ISVs, again I don't see it. The vast majority of ISVs exist because vertical integration drops the cost of their little add-on enough to make it possible to develop and market their products. The SharePoint ecosystem wouldn't exist without SharePoint. The flood of mobile applications exists because Apple, Google, and to some extent Microsoft and BlackBerry created rich vertical ecosystems. We don't have a rich and diverse collection of software to run on interstellar transport devices because there are no interstellar transport devices for it to run on.

wrong approach

This assumes developers exist to create SharePoint applications. This is wrong. Developers (independent or otherwise) exist to service the business needs of their clients. What I care about is the increased cost of maintaining multiple versions of software on different platforms. I also care about getting significant market share for products in a fragmented market that necessitates multiple versions. You seem to have no answer for these real-world problems. Talk of market theory does not help my bottom line. Do you have any practical suggestions to help businesses in this market?

I disagree about the tie between hardware and source code. I can write software that will scale from a phone to a supercomputing cluster using Unix/POSIX C++ and MPI that just requires recompiling for each platform. The PBX point seems inaccurate; we run VoIP Linux PBXs here with no special hardware at all in the server; they are just regular Linux VMs in our rack.

Freedom to create restriction is not freedom; this is the Russell paradox incarnate. Giving Apple the freedom to restrict people's software choice is akin to giving slavers the freedom to make people work for nothing (a bit dramatic, I admit, but it is the same misuse of the concept of freedom in a restrictive sense). As such, restricting Apple's ability to lock people into the platform is increasing freedom (freedom to restrict is not freedom, but restricting a restriction is). It would appear this is the understanding behind things like the unlocking legislation, which restricts carriers from preventing people unlocking their phones, giving the end user the freedom to change carrier. How is freedom to change carrier without changing phone philosophically different from changing phone without changing software?

If Apple does not own Swift, who controls the standard, and who ensures compatibility between Apple's version and independent implementations (if they are even allowed)?

From phone to cluster

I can write software that will scale from a phone to a supercomputing cluster using Unix/POSIX C++ and MPI that just requires recompiling for each platform.

No you can't. You can write some software but getting the mapping between hardware and software right is usually a hard problem. Black art often.

OpenMP & MPI

OpenMP can be used for automated parallelisation on an SMP machine, and the code looks pretty normal. MPI requires coding to an API, but it can still run on a phone, although it would be pretty useless on a single-core one. Plenty of phones are quad-core, though, and could run stuff written with MPI; in any case you code the algorithm once and provide the processor and host configuration at runtime. This would work fine on any of the really big shared-nothing Linux supercomputer clusters.

I don't think a reference to MPI on supercomputers is necessary. Here is one for phones:
http://www.scientificbulletin.upb.ro/rev_docs_arhiva/fullffc_583765.pdf

business models

This assumes developers exist to create SharePoint applications. This is wrong. Developers (independent or otherwise) exist to service the business needs of their clients.

Clients have already chosen a platform. If their workflow is SharePoint oriented then absolutely the developers (small ISVs) who are engaging with them exist to extend that SharePoint workflow. You are part of an ecosystem not the entire thing. So if you are developing for a particular client(s) then you develop to whatever platform they want. If you are developing as an ISV then you develop to an ecosystem where there is demand. If your product does really well on that one platform you move to others. And for each and every single one of them you write a quality application.

Abstract where you can, but don't where you can't or shouldn't.

You seem to have no answer for these real world problems. Talk of market theory, does not help my bottom line. Do you have any practical suggestions to help businesses in this market?

My answer stays the same for the whole thread.
a) If the application would work well as lowest-common-denominator, then use a lowest-common-denominator approach, for example PhoneGap or HTML5. All platforms support this.

b) If the application benefits from platform-specific features, write to the platform, in a platform-dependent way. If the platform dependencies can be abstracted, maintain multiple versions. However, huge chunks of the application will often be cross-platform, which is where LtU-type discussion comes in. For mobility, hybrid solutions like Xamarin offer a nice way to achieve mostly platform independence.

There is no problem. You are just conflating two entirely different problem domains. The reason generic mostly doesn't work is that end users want the advantages of platform-specific software. And that's going to raise your cost, so you pass the cost on. If you aren't being platform-specific you aren't solving their problems. Those are the practical suggestions. Do what the platform vendors and your end users want you to do. Do it the right way, not the fast, cheap way. Stop being lazy.

The PBX point seems inaccurate, we run VOIP linux PBXs here with no special hardware at all in the server they are just regular linux VMs in our rack.

The specialized hardware is then in the phones, which are doing the SIP conversion, and at the carrier, which is doing the handoff. You have just outsourced the hardware that does the PSTN handoff. That, BTW, is a good example of how well platform-specific is working. You were using a platform-specific approach and it worked so seamlessly you aren't even realizing it. The carriers are doing exactly what I'm advising above.

Giving apple the freedom to restrict peoples software choice

Apple is not restricting anyone's software choice any more than Coke does when they won't sell you Pepsi in their bottles. Apple customers want Apple hardware, Apple OSes, and vendors that are willing to support the platform. You obviously aren't willing to support that platform. And by supporting the platform I mean immersing yourself in the culture and writing to the platform. Apple offers interfaces for generic software that isn't platform-specific; they do exactly what you want.

Your problem probably is that customers prefer the platform specific code and you lose the sale when you write generically. Well good. Apple created a market for mobile software that's high quality. And by high quality one of the components is fitting the platform the users are engaging with your application on.

Getting Closer

So let's get back to the original point about Swift and vendor-specific languages, as I have answered most of the other points in my response to your other post below.

It would appear you agree with me. You say, "However, huge chunks of the application will often be cross-platform, which is where LtU-type discussion comes in. For mobility, hybrid solutions like Xamarin offer a nice way to achieve mostly platform independence." Great, that's exactly my point, and it is the reason why a non-vendor-specific language is better. Xamarin will not offer as good integration with iOS as Swift, so it is a less optimal solution.

I think a non-vendor-controlled language would be better for Apple, better for the developer, and better for the end user. Apple has a good record with LLVM and Clang, so let's see what they do with Swift. I would settle for a benevolent dictator if they don't want to standardise the language, but I still think standardisation would be even better.

Swift licensing

If Apple does not own Swift, who controls the standard, and who ensures compatibility between Apple's version and independent implementations (if they are even allowed)?

Anyone who wants to can create a version. And if they want to, they can create a standard. That standard can either match Apple's implementation or not. If it matches, there is compatibility; if not, there isn't. In practice there likely won't be compatibility, because the reason to use Swift is to use Cocoa.

In practice there is only going to be one version of Swift that anyone really cares about. I would imagine GNUStep will create a version that allows for porting that will lag Apple by about a decade.

Requires Standardisation

I should clarify that Apple controlling their version of Swift, without any need for compatibility with other versions, would need to be counted as a controlling interest. There needs to be a common independent standard, which Apple does not control.

No high hopes here

I would imagine GNUStep will create a version that allows for porting that will lag Apple by about a decade.

It took both the ML and Haskell teams a gazillion years to get any performance out of reasonably mediocre abstract languages. Not to mention Javascript.

I have no idea about the performance of Swift (what's in a name) but considering the semantic gap between that language and C it might die a pretty rapid silent death if they don't get the performance up in time.

Since the creator of LLVM is

Since the creator of LLVM is the principal on Swift, I think there is a lot of hope that they'll take compilation and performance seriously.

Yah

That's a good sign.

Apparently swift is

Apparently Swift is currently very slow: https://stackoverflow.com/questions/24101718/swift-performance-sorting-arrays

460x slower than C++ with optimization. I'm not sure what's happening here; hopefully it will be fixed in the future.

Look at the thread again

That's an excellent link. Since the original post it became clear what the problem was: the checks for integer overflow. Take a look at the later answers. Compiled without overflow checks, Swift was slightly faster than C. With them, Swift could end up about 1000x slower (though that's likely to improve somewhat between beta and release). I'm amazed that the operation is that expensive, so I gotta assume that there is poor code there and Apple is going to be able to figure out a way to make Swift safe and not too much slower.
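
For anyone who hasn't played with the language yet, a minimal sketch of the two behaviours being discussed (the function names are made up):

// Swift's ordinary operators check for overflow; the &-prefixed ones wrap silently.
func checkedIncrement(_ x: Int32) -> Int32 {
    return x + 1         // ordinary + traps at run time if x == Int32.max
}

func wrappingIncrement(_ x: Int32) -> Int32 {
    return x &+ 1        // &+ wraps around; no overflow check is emitted
}

print(wrappingIncrement(Int32.max))    // -2147483648
// print(checkedIncrement(Int32.max))  // would stop the program with a runtime trap

// Building with `swiftc -Ounchecked` strips these checks (and similar preconditions)
// program-wide, which is essentially the "without overflow checks" configuration above.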

Yes, there is no way that

Yes, there is no way that overflow checks make a program 500x slower. That would mean that 99.8% of the time is spent doing overflow checks. There must be something else going on that causes that much slowdown, so it's likely that performance will improve a lot when that gets fixed.

No too bothered. Ship optimized code

After my hasty remark on the speed of Swift I decided to read up on it and the array performance doesn't bother me that much.

If it's safe, performance degrades remarkably, that's true. But the good thing about it is that they seem to be within a few factors of C speed if compiled with full optimizations.

So I guess most developers will debug, where they can, with safe settings and ship the optimized code.

The problem with that scenario is that I expect that at some point developers will just compile without array bounds checking most of the time, because they'll need to tune and debug the responsiveness of their applications as they will be shipped. Which means array bounds checking may become a nice but dead (as in: hardly used) feature in the language.

I wonder seriously if

I wonder seriously if overflow checks are feasible in non-debug-mode code without hardware support. If the check was added to the CPU adders, then it would be much more feasible.

Overflow checks can't be the

Overflow checks can't be the problem. There is no way that overflow checks cause 500x slowdown. Even 2x would be high overhead. CPU adders already have overflow checks btw, e.g. x86 has the overflow flag bit.

It depends at what level

It depends at what level they are being applied. There are a lot of integer operations in a typical program, most of them incidental, not written by the programmers themselves. However, it is difficult to reason about where an overflow can occur, so eliminating checks on these operations might be very difficult. Perhaps this is something where they will get better quickly.

I'm not sure how the overflow flag would be used in Swift's assembly code (or even if, since ARM is in the spotlight here). It's worth studying for those who are interested in the problem.

I don't think there is any

I don't think there is any level at which they can cause a 500x slowdown. Even if what the program is doing is 100% math operations that need overflow checks, if you look at any individual operation, how can the overflow check take 500x longer than the operation itself?

No register + branch prediction + cache miss

Maybe the overflow check doesn't use a register but checks a bit somewhere in costly memory? Then there is the extra cost of the extra instructions, messing with the branch prediction, and a possible cache miss on prefetching the handler, wherever it is located. (And an extra possible cache miss for getting the array size in the case of array bounds checking.)

It depends on the translation and hardware. No idea about their translation; no idea how hardware works these days.

If they cannot get rid of the 500x slowdown I think they should get rid of these checks and move them to a debugging switch. I.e., opt in instead of opt out.

I tried to think of a way to

I tried to think of a way to make an overflow check cache miss but I couldn't come up with anything. It's probably something else, like, they are recording the history of the execution for their timeline debugger.

Shouldn't happen too often

If the branch predictor assumes it may overflow and you have a jump, or call, to a location which isn't in the cache it might happen (since the routine being referred to needs to be loaded.) That shouldn't happen too much but it depends on the predictor and the various cache sizes and behavior.

But it really depends on the code they generate.

For a first compiler they might have opted to implement the checks, or even operators, with subroutines so then my argument is moot anyway. Maybe I assume far more optimized code than they have at the moment.

Loop unrolling?

There's loop unrolling too. Forgot about that one. Usually worth it but may mess with everything too unfortunately.

Peephole optimization?

Yet another one I forgot. I wouldn't know how you'd mess with that one, but it may be that if everything is in subroutines in their normal compilation they can't do the optimizations they'd normally do on a simple loop.

The timeline debugger is

The timeline debugger is only active in Playground, it seems. Also, they seem to have frame-by-frame recording/checkpointing.

Someone debugged it: Retain Release

It's their reference-counting GC, it seems. Someone debugged it; it's somewhere in the answers; we should all read less sloppily. Looks like one of the example programs spends most of its time doing updates to the reference counters.

If I got it right, this is the code they toyed with:

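// Swift beta-era code: fills an array of n ones, then does n*n element copies.
// (`Int[]` is now `[Int]`, `0..n` is now `0..<n`, and a `let` array no longer allows element assignment.)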
let n = 10000
let x = Int[](count: n, repeatedValue: 1)
for i in 0..n {
    for j in 0..n {
        x[i] = x[j]
    }
}

The code above is, of course, assuming boxed values, a nightmare for a naive refcounting GC. All it does is retaining and releasing, I imagine, the same value. (It may also release and retain the counters, no idea.)

Wouldn't know

I wouldn't know. The minimum cost is extra machine instructions, which you could get rid of with your proposal of extra hardware.

But where are you going to store the array size? The beginning of the array seems the most logical, and what size should it then be? The cost goes down if bounds checking is much smaller than swapping in pages; but accessing the size implies extra paging.

Looks like there will always be some scenarios, like DSP or imaging, where the extra cost of array bounds checking will be prohibitive.

Array bounds checking is not

Array bounds checking is not the same as integer overflow checking. The conventional wisdom is that array bounds checking rarely leads to unacceptable overhead and is a good default, while software overflow checking is completely infeasible. I don't have any data of my own, so I won't make a claim. But honestly a couple of orders of magnitude would not surprise me. CPUs do an enormous amount of integer arithmetic.

Respond to Jules

You seem to have overflown yourself here. This is a response to Jules' post above, not mine. I didn't discuss overflowing.

No, I was responding to you.

No, I was responding to you. You were the first one to mention array bounds checking, two comments above. Everybody else was talking about integer overflow checking. It seemed to me that you changed the subject.

The Stackoverflow post

The Stack Overflow post attributes the 500x performance degradation to array bounds checks, though they were discussing integer overflow checks also/before.

It seems to have led to some confusion.

I see. I'm sorry to have

I see. I'm sorry to have contributed to the confusion!

Same deal somewhere though array bounds checking is worse

I expect both features to be implemented with a test, or comparison, a branch, and a jump to a subroutine if stuff goes wrong.

So the overflow check is less costly, since I imagine that's a check on a register for whether a bit is set, whereas the array indexing check is a load from memory and a compare instruction. The first messes with branch prediction, whereas the second also messes with caching behavior.

So, in the examples shown, you find exactly that behavior, but the examples are also written as worst-case scenarios.

Swift looks good from the perspective that you can get close to C, and it also looks like you cannot get rid of the slowdown (unless they did something really stupid).

So we learned the same lesson again: Overflow and array bounds checks are costly in certain scenarios.
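
A small sketch of the array side of that lesson, assuming current Swift (the helper names are made up): the safe subscript bounds-checks every access, while dropping to an unsafe buffer pointer avoids the per-element check in optimized builds.

func sumChecked(_ xs: [Int]) -> Int {
    var total = 0
    for i in 0..<xs.count {
        total += xs[i]               // bounds check on every subscript
    }
    return total
}

func sumUnchecked(_ xs: [Int]) -> Int {
    return xs.withUnsafeBufferPointer { (buf: UnsafeBufferPointer<Int>) -> Int in
        var total = 0
        for i in 0..<buf.count {
            total += buf[i]          // no per-element bounds check in optimized builds
        }
        return total
    }
}

print(sumChecked([1, 2, 3]), sumUnchecked([1, 2, 3]))   // 6 6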

Triviality for a C compiler writer

It's unlikely that a team stemming from LLVM doesn't know about the trivialities of generating fast optimized code. Now programmers are upset but I doubt they are.

They opted to put these features in but I doubt the team wasn't aware of the potential cost. They may have been surprised by the actual cost of these checks though.

o_O

But honestly a couple of orders of magnitude would not surprise me.

It wouldn't surprise you if turning on overflow checks increased the overall run-time by two orders of magnitude? I would expect a non-naive implementation to be in the 2%-20% increase range.

Non-naive?

I didn't say "non-naive." I had in mind the most naive possible strategy that literally checks and branches around every single integer operation. I could imagine that degrading performance very, very badly.

Proven track record

As Sean mentioned below, Apple is the principal on LLVM. Moreover, they were the ones who took GCC's PPC performance up to the point that game consoles could use the PPC CPU. I'd be pretty confident in their abilities. Further, a lot of the craziness in Swift comes from starting with Obj-C / Cocoa and working backwards. Swift is designed for mid-level performance (on their platform) from the ground up.

Even better, Chris Lattner IS

Even better, Chris Lattner IS the LLVM guy, so he should know something about optimization. There is probably a larger team behind this, but technical leadership is also important. I don't think Apple will mess this up; they have the execution focus to make Swift a good programming experience, just like iPhones, iPads, and MacBooks are good end-user computing experiences.

Chris Lattner?

Do you mean?

Ya, corrected. I knew it was

Ya, corrected. I knew it was a "Chris", it will take me awhile to get used to the name :)

Initial problem with the language

Like C, Swift uses variables to store and refer to values by an identifying name. Swift also makes extensive use of variables whose values cannot be changed. These are known as constants, and are much more powerful than constants in C. Constants are used throughout Swift to make code safer and clearer in intent when you work with values that do not need to change.

When they start conflating variables (associated value can be changed) with constants (associated value cannot be changed), then I have doubts about any part of the language. I am already averse to reading any further. I think I would rather stick with a language like COBOL or FORTRAN (both of which I haven't programmed in for many, many years) for application development.

Wrong thread. There's a

Wrong thread.
There's a thread for discussing the language. This one is for the politics.

My comment was about the politics

I didn't think I was being that obtuse with my comment.

Let me put it another way. A company as large as this should well have the resources to have someone (who has a clue about language design and implementation) vet the documentation of a new language that they are trying to get developers interested in. Making what is, to me, a fairly substantial mess-up in the quoted statement on the introductory page of their documentation suggests they haven't created a product that will bode well for the future.

Companies that go down this path are essentially trying to achieve vendor lock-in. This is no different from Oracle and the Java API legal shenanigans, or Microsoft and C# and the potential for interfering with Mono under Linux.

I am sorry to say that when simple things are screwed up like this, the most likely explanation is that the marketing or legal groups within the organisation have got involved somehow.

My ultimate point, I suppose, is to leave bad alone. I haven't bothered to investigate the merits of the language, its available features, its uses or anything else about the language specifics. All of my distaste came about simply from reading the quoted statement on their introduction page.

Over many years, I have had to deal with many vendor-specific languages; some have been good, some bad. The bad have been characterised by a lack of clarity, particularly in their documentation, or, in the case of one particular vendor, the removal of usability for the addition of "pretty".

Fair enough, I understand

Fair enough, I understand now what you had in mind.

Variables vs variables

Be careful there, you've fallen for a common confusion in the world of imperative programming. There, and only there, do many (but not all) people equate 'variable' with 'mutable'. In mathematics, a variable always has been a place-holder name, not a mutable entity, and so it is the case in other programming communities, e.g. functional or logic programming, and even in parts of the imperative world.

So you should probably reconsider brushing off the entire language just because you're unfamiliar with the author's use of (common) terminology.

Meta

That's an interesting point. It's a bit unfair, since obviously the fact that variables range over something is right there in the name. Why call it a variable if it's not, you know, variable? But in mathematics, I suppose a variable is usually actually more like a kind of meta-variable, ranging over possible values but in any particular case taking only a single one.

It's not so common in mathematics to use variables that implicitly vary over time, and in domains where it is common, such as physics, people commonly complain about confusing notation (e.g., using 'v' rather than 'v(t)' for velocity, even when it really is a function of time).

Anyway, like I said, it's an interesting point.

So variables in math are not

So variables in math are not variable, but constant?

"variable"

In math the terms of a formula which are meant to be (essentially) beta-reduced, conceptually or actually, are sometimes called "variables" because they stand for an indefinite value. Even though they are called "variable" the value they denote does not vary within the scope of their meaning.

I haven't looked at Swift's details seriously. Is it the case that the value of a Swift (constant) variable can be undetermined until run-time and also unvarying within the dynamic scope of its binding?
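
Edit: from a quick look, the answer for Swift appears to be yes; a `let` can be initialized from a run-time value and is then fixed for its scope. A minimal sketch:

import Foundation

let launchTime = Date()      // value unknown until run time, binding fixed afterwards
// launchTime = Date()       // rejected: cannot assign to a `let` constant

var retries = 0              // a `var`, by contrast, may be reassigned
retries += 1
print(launchTime, retries)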

math variables

They're variable.

Math does use symbolic constants, like tau (circumference of unit circle) or c (speed of light in a vacuum) or k (Boltzmann's constant) or e (Euler's number, related to natural logarithm).

The error was in the transition to imperative programming, which unfortunately conflates variables with mutables. That's another billion dollar mistake, IMO.

C (math variables)

C has const variables and people talk about them that way.

const variables

The 'const' keyword is useful, and I make heavy use of it in C and C++ code. When was it introduced to C? Was it C99? I know it was backported from C++, and that some ad-hoc conventions existed for it before then.

But adding const vs. mutable properties to variables is still coupling too many concepts, IMO. I'd argue the only languages that do 'mutables' right are those that cleanly distinguish mutable structures as a special kind of allocation orthogonal to variables and structured values, e.g. Haskell's `newIORef`.

const is old school too

I'm pretty sure C code like this was idiomatic in my 1983 copy of K&R:

static const char* my_names[] = {
    "foo",
    /* ... */
    "bar"
};

Some runtimes advertised the intent to put such constants in read-only memory, so you'd SEGV trying to write to them as lvalues. But absent such protection, const is an advisory status observed cooperatively, with compiler static checks catching any write through a const lvalue unless you cast away const with a suitable cast expression.

Along with proper use of namespace and code organization conventions, const is a way to inform other developers which parts of code might be able to influence a computation you want to understand. The fewer places that can affect something, the easier it is to understand, because only those places need be studied to grasp original developer intent. Epistemologically speaking, smaller scope is better style.

It was ANSI C, via early C++

Const was added by "ANSI/ISO C" (a.k.a. C89/90), and accordingly only appeared in K&R's 2nd edition (1988). It was available in some C compilers before that, though, which imported it from early C++. I believe C++ introduced const as early as 1983.

Const has always been an unfortunate misnomer in C/C++. It does not mean constant at all -- it means read-only. The difference being that e.g. you cannot write to an address through a const pointer, but it can still be mutated legally via other alias references.

Or via a cast

There are languages IIRC that have both "readonly" and "immutable" references. Only immutable objects can be passed to immutable references, and immutable means "nobody can change it, ever". Conversely, both mutable and immutable objects can be passed to readonly references, which mean "It can't be changed through this reference but someone else might hold a mutable reference to it".

BitC tried this and stumbled

Scott: We wrestled with this distinction between an immutable object and a read-only view in BitC and stumbled badly. If you know of a language that has managed to pull it off, I'd really appreciate a pointer to any paper or writeup that may have emerged.

Rust does this (n/t)

I said n/t.

SQL

I'd say SQL would be the best example. The language has the concept of views, which are quite often readonly and do quite complex things. The underlying objects are mutable.

A read-only view is then the return value of an expression which just reads, while an immutable object is a table (or table space) which cannot be written to.

re sql

I think it is more natural to say that, in SQL, all query results, whether from views or tables, are immutable values. Views are references to (usually mutable) queries. Used in a query, a view produces an immutable value in an unremarkable way. The query "underlying" a view may reference tables which cannot be updated using the view itself -- but this is not the same thing as saying a view is a read-only reference to those tables.

Permissions

You are forgetting the whole permissions system and the splits. Views (particularly materialized views) often run on different servers than the ones with the mutable data; they just get the updates. Moreover, the people with view access don't have access to change the structures. IMHO, going to the underlying query and saying that's mutable is like saying an immutable variable is mutable because the value is being stored in mutable RAM.

Hmm

the people with view access

I have no idea what that could mean in this context. I thought we were talking about programs.

In any event a client may or may not have permission to perform a certain mutation but this does not mean that SQL contains two kinds of reference to a view or that a view is a constant reference to any tables.

permissions

GRANT privileges ON object TO user;
REVOKE privileges ON object FROM user;

is part of the language.

re permissions

I don't see any way that supports your original claim and you haven't tried to explain. The relevant part of the thread starts with the comment titled "Or via a cast".

Permissions and SQL

It is pretty simple. SQL has a clear, effective, heavily used and useful distinction between immutable (for an individual user / program) and readonly. It serves as an example of precisely what was being asked for.

not "precisely"

I think you'll find that Scott Johnson was talking about two kinds of references to a common object, where "object" means something like an entity within the memory model of a single process. The question is about how to have these two kinds of references at the language level. Shap is talking about (unspecified; perhaps typing-related?) difficulties trying to do this reasonably in BitC.

It's a stretch to begin with to try to say that there are even "references" in that same sense in SQL or to assume too much of a simple relation between database transactions and, say, the memory model in BitC. Moreover you've wandered in your point from saying something about "views" in SQL to now saying something about "clients" and their differing permissions.

But, perhaps I misunderstand you. Everyone can make mistakes and it isn't always easy to recognize or acknowledge one's own, no?

We wrestled with this distinction between an immutable object and a read-only view in BitC and stumbled badly. If you know of a language that has managed to pull it off, I'd really appreciate a pointer to any paper or writeup that may have emerged.

Is there an SQL paper or writeup you'd recommend in this context?

D2 has const and immutable

They are both transitive. All objects reachable through a const reference are read only. (See: Type Constructors, const(FAQ).)

Modifying an immutable is considered undefined behavior.

Template code has const and immutable attributes inferred, allowing single implementations of functions that work on mutable, const, and immutable data.

log of stumbling?

What went wrong? Details would be nice to hear, to learn about this garden-path-paved-with-good-intentions, and to see if it sparks any ideas in other folks here.

I assume you're correct

That I misremember seems likely then. If I run across my tattered 1983 edition in a box soon, I'll follow up, but I may no longer have it. Perhaps the addition of keyword void from C++ at that time hogged my attention at the expense of more obvious const. Edit: My online searches reveal nothing about the history of the const keyword, but it gets mentioned in the same breath as other C89 changes along with void. Now I'm starting to recall an idea const was already in use, but in fewer places, and had new uses added to match C++ in some ways.

they vary until bound

So variables in math are not variable, but constant?

(Not an expert.) Variables are names bound to different things depending on context. In math the distinction is usually between free and bound variables (wikipedia: Free_variables_and_bound_variables).

Imperative languages bind to a storage location that's mutable unless declared immutable (e.g. using const in C). Functional languages aim to bind to the value inside, so storage location is an implementation detail, presumed not to matter as long as never mutated when still reachable from a past binding.

Variable vs Constant

Thanks for all the comments. The political commentary continues.

For those who argue against my choice of phrasing, let's look at the following definitions:

variable
adjective
1. apt or liable to vary or change; changeable: variable weather; variable moods.
2. capable of being varied or changed; alterable: a variable time limit for completion of a book.
3. inconstant; fickle: a variable lover.
4. having much variation or diversity.
5. Biology. deviating from the usual type, as a species or a specific character.
6. Astronomy. (of a star) changing in brightness.
7. Meteorology. (of wind) tending to change in direction.
8. Mathematics. having the nature or characteristics of a variable.

noun
9. something that may or does vary; a variable feature or factor.
10. Mathematics, Computers.
a. a quantity or function that may assume any given value or set of values.
b. a symbol that represents this.
11. Logic. (in the functional calculus) a symbol for an unspecified member of a class of things or statements. Compare bound variable, free variable.
12. Astronomy, variable star.
13. Meteorology.
a. a shifting wind, especially as distinguished from a trade wind.
b. variables, doldrums ( def 2a ).

constant
adjective
1. not changing or varying; uniform; regular; invariable: All conditions during the three experiments were constant.
2. continuing without pause or letup; unceasing: constant noise.
3. regularly recurrent; continual; persistent: He found it impossible to work with constant interruption.
4. faithful; unswerving in love, devotion, etc.: a constant lover.
5. steadfast; firm in mind or purpose; resolute.
6. Obsolete . certain; confident.

noun
7. something that does not or cannot change or vary.
8. Physics. a number expressing a property, quantity, or relation that remains unchanged under specified conditions.
9. Mathematics. a quantity assumed to be unchanged throughout a given discussion.

So irrespective of one's viewpoint, variable has the sense of changeability and constant has the sense of fixity.

When language designers then conflate the two concepts, as if one were a special case of the other, one must question what they are saying and what they are thinking. Over the decades, I have seen quite intelligent (if not brilliant) people creating new languages and still messing up such concepts.

As I sit here, I am reminded of two things. The first is KISS and the second is BBB from a science fiction story by Eric Frank Russell called "Next of Kin". Too often, I find the papers written on language design, etc., ignore the first and follow the second. There are a number of authors who manage to do the first and avoid the second. Papers of this quality are a pleasure to read and, even if the concepts are difficult, well worth the effort.

I have a grandson who is not yet 5 and has his own little electrical workbench, loves electric motors, and in his own simple way can explain what is going on with them. His little sister (just turned 3) has quite a grasp on technical matters as well. The point: if you say something that is not quite logical, they pick you up on it and say you are being silly. Getting back to my original point, the fact that such documentation was let loose on the world by an organisation that should have the resources to vet such things is a fairly substantial mess-up, and other things are in play that do not bode well.

Andreas, getting back to your comment, I am more than happy to brush off the entire language, since your own usage of variable still indicates changeability, not fixity. If the terminology has got to the point of conflating what are conceptually different things, variables and constants, as one being a subset of the other, then it is no wonder much of the industry is right royally messed up. In that regard, I'm glad I am now able to program and study at my pleasure and not have to deal with it as my working industry.

Hence, constants always represent the same thing, whereas variables are used to represent something that can change (irrespective of whether it is just a place holder or represents a mutable entity within the relevant context).

Since functions can be

Since functions can be called multiple times, there is variability in how their arguments are bound on each call. I'm sure the mathematical notion of variability is related to something like that.

I don't know. I think

I don't know. I think informal mathematical language supports Bruce's position. For example, I think it would be odd to hear a mathematician refer to pi as a variable. It is a constant, not a variable. The problem with Bruce's preferred terminology is that he'd have to introduce a new name for 'symbol representing a value', which is really the more fundamental concept and what we take 'variable' to mean, or forever repeat 'variable or constant'.

The point is that pi's value

The point is that pi's value is fixed in our reality, while x, y, and t are not. In f(x), x is variable, and we often graph f(x) against a range of values for x.

Variability is simply about which symbols have fixed values (pi) and which do not. So take an arbitrary ID in your code, and if you can ascribe one interpretation to that ID (say PI, a non-variable type, or a direct procedure name with one implementation), it is a constant...otherwise it is technically variable.

I have no problem with

I have no problem with symbols representing different things. What I have a problem with is conflating conceptually different things as being the same kind of things, in this case variables and constants.

If a concept is "orthogonal" to another concept then let's keep them separate. If there is a cross over, let's try to find what are the real differences and deal with those. As in above, the concept of associating a symbol with something is separate from the something that we are dealing with. I hope that is clear, it's late and I'm off to the land of nod.

Beware the general case

Note that constant bindings generally are variable, too:

function f(x) {
  const y = x + 1
  // ...
}

In what sense is y a "constant"? The problem with the distinction you are trying to make is that you are only considering a special case. In general, the differences are far less clear.

In any case, you are up against a century of CS tradition. The reason that constant bindings are not usually considered a separate thing is that they are simply a degenerate case of the general concept. And for the semantics aficionados, any bound identifier is just a short-hand for a lambda parameter anyway -- recall the name of this blog. ;)
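
A small OCaml sketch of that last point (f and f_desugared are toy functions of my own): the "constant" binding is just shorthand for a lambda parameter, which is bound to a fresh value on every application.

(* the "constant" binding y ... *)
let f x =
  let y = x + 1 in
  y * y

(* ... is just shorthand for applying a lambda; its parameter varies per call *)
let f_desugared x = (fun y -> y * y) (x + 1)

let () = assert (f 3 = f_desugared 3 && f 7 = f_desugared 7)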

Presumably y wouldn't be

Presumably y wouldn't be constant at all under Bruce's definition. In C++ lingo, I think he's talking about the constexpr specifier rather than the const specifier.

Here is where I would argue

Here is where I would argue that we have a situation where we have partially merged two concepts. In this case we have a variable y to which we have added the additional restriction of having no means to update the value after it has been assigned. The language has a keyword indicating "no update", and it may be used both to define an association between a symbol and a constant and to define an association between a symbol and a "variable" that we are given no ability to later update.

There are conceptually different semantics involved, but the same syntactic form is being used to represent both.

You make my point though in that the concept of binding a symbol to a value is separate to whether or not that value represents something that is permanently fixed (a constant) or can change (a variable).

Semantics are implied or explicit.

I agree it is confusing. In functional languages we have symbols that are one-time-bound to values (constant) and references (mutable). In C symbols can be bound to values, and rebound. In C the type of the symbol is statically determined and in functional and logic languages the type is dynamic. The semantics of variables is obviously dependent on the language, so should either be obvious from the context or stated specifically.

It seems a similar situation to types, where the arrow in a function type has different semantics depending on the language, pure in Haskell, with side effects in ML.

So for a new language where we can't imply the semantics yet, they need to be explicitly stated.

Exactly, but...

You make my point though in that the concept of binding a symbol to a value is separate to whether or not that value represents something that is permanently fixed (a constant) or can change (a variable).

Yes, exactly, and in PL theory circles this distinction is standard. However, a name introduced by a (any) binding is consistently called a variable (and is subject to standard meta-operations like substitution), and an object that is mutable is called a reference cell, or assignable, or other names. Variables can be bound to reference objects, but don't have to be.

As has already been hinted at by others in this thread, some (mostly modern) programming languages also make this distinction, notably the ones in the ML tradition. There, bindings just name values, while reference values are their own type. In fact, Algol 68 already made this distinction.
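
A minimal OCaml sketch of that distinction (nothing beyond the standard library): a binding simply names a value, while a mutable cell is an ordinary value of reference type, accessed and updated explicitly.

(* a binding just names a value; there is nothing here to assign to *)
let x = 41

(* a reference cell is an ordinary value of type int ref, itself bound to a name *)
let r = ref x

let () =
  r := !r + 1;                 (* mutation and dereference are explicit: := and ! *)
  assert (x = 41 && !r = 42)   (* x is untouched; only the cell's contents changed *)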

Agreed

The nice treatment of variables in ML simplifies both programming and reasoning about programs. François Pottier remarked that the mutable variables used in the common presentations of (concurrent) separation logic are an extra source of complexity that goes away if we go back to the ML tradition. We should kill the mutable variables.

(In practice, adoption-oriented language such as Swift or Scala rather go in the opposite direction of having a class of mutable variables, the only difference with a restricted use of references being that there is no explicit mark on variable access. Interestingly, the Swift syntax to access option types, foo!, is very close to the ML notation to access a reference, !foo. In a few generations, programmers may come to think that being explicit about mutable access is a decent default choice.)

Implicit mutability useful

These transformations from mutable variable use to immutable variables can lose information. Say I have a mutable variable X and use SSA to transform it into XA and XB. If I run a dependency tracker, I lose the fact that XA and XB were the same X before, and am now tracking separate XA and XB variables! That I was reading and writing just X is semantically useful information, and we've lost that by using immutable variables. There is more to life than basic code execution!

There are cases where a mutable local variable is usefully meaningful; then there are just haphazard cases of using a variable v for a multitude of unrelated things, but I don't think the latter is very common.
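
A tiny OCaml sketch of the transformation being described (the names xa and xb are made up for illustration): after the rewrite, nothing in the program records that the two bindings were once the same x.

(* before: one mutable x, read and written across steps *)
let before input =
  let x = ref input in
  x := !x + 1;            (* step A *)
  x := !x * 2;            (* step B *)
  !x

(* after an SSA-style rewrite: xa and xb are separate immutable bindings,
   and a dependency tracker now sees two unrelated names *)
let after input =
  let xa = input + 1 in   (* step A *)
  let xb = xa * 2 in      (* step B *)
  xb

let () = assert (before 3 = after 3)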

There is more to life than basic code execution!

No there isn't.

Right, I get that functional

Right, I get that functional programmers hate debuggers.

Who? What? Where?

I am not a functional programmer. Just making a joke. People who get that there's more to life than code wouldn't frequent LtU a lot, I imagine.

Fair enough. I should have

Fair enough. I should have said "we do more with code than just basic execution" or something like that. Functional PL is supposed to make life easier for everyone, but often makes things much much harder.

code transform

Whenever we transform code (e.g. compilation, inlining, optimization), we can lose useful information (e.g. types, location, comments). But that's just a concern for transformations in general. I don't see how an argument against transformation serves as an argument in favor of implicit mutability.

As far as debugging goes, even in imperative languages I find it much easier to debug if I write code in an SSA style. Why would I ever need mutable X when I can have XA and XB for different steps A and B?

It helps when you are trying

It helps when you are trying to reason about how X changes. Without an explicit notion of X, then you have to do that processing all in your head.

reasoning about change

There are, of course, many cases where reasoning about state and change is essential. But your specific example - i.e. where an SSA transform from mutable X to immutable XA,XB is feasible - does not seem to be such a case. Rather, trying to reason about how X changes is accidental complexity due to unnecessarily expressing the program in terms of mutation.

If I can avoid reasoning about change, that's easier on my head. If mutability is explicit, I can avoid reasoning about change except for a few explicit elements.

Say you are writing a parser

Say you are writing a parser by hand (it happens a lot in industry). You have a cursor variable for a parse tree method; it's an argument and an output. Cursor itself is an immutable value, allowing us to use it safely to identify landmarks (e.g. other parse trees). Whenever a tree is parsed recursively, it takes a cursor and returns a new one.

Now, let's just assume no loops to make it simple. Would you prefer to manually convert cursor so that we have cursor0, cursor1, cursor2...cursorN for however complex our parse method is? The code not only becomes very obfuscated, but is difficult to refactor and debug. Oh, and what if I accidentally use cursor4 when I really should have used cursor5? Oops, that's a bug.

My old parser actually "returned" new cursors rather than using C#'s ref keyword to just update it in place, but this was very error prone, because I would forget to reassign the new value and what not. Perhaps linear types would help prevent errors, but barring that...ya, FP is quite difficult without using at least some kind of monadic style.
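
Roughly the shape being described, as a hedged OCaml sketch (the cursor and result types here are made up for illustration, not any particular parser library):

type cursor = { input : string; pos : int }
type 'a result = Ok of 'a * cursor | Fail

(* each sub-parser takes a cursor and returns a value plus a *new* cursor *)
let char c cur =
  if cur.pos < String.length cur.input && cur.input.[cur.pos] = c
  then Ok (c, { cur with pos = cur.pos + 1 })
  else Fail

(* manual threading: every intermediate cursor gets its own name, and using
   cur1 where cur2 was meant is exactly the kind of bug described above *)
let parse_ab cur0 =
  match char 'a' cur0 with
  | Fail -> Fail
  | Ok (_, cur1) ->
    match char 'b' cur1 with
    | Fail -> Fail
    | Ok (_, cur2) -> Ok ("ab", cur2)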

Use GUIDs

Use cursor_aa7fa526_b02a_4253_b1e1_cb2e670a4b51, cursor_d2357b02_61f1_45b6_b6d5_5c75025632b2, etc. instead of cursor1, cursor2, etc. That way when you add a new cursor in between two existing cursors, you only have to change references to one cursor.

Uhm...no :)

Uhm...no :)

re GUID identifiers and Uhm ...no :)

Well, with a suitably sophisticated IDE this should be no problem.

(This is a political topic post and sarcasm is allowed, right?)

re: cursor example

If we have a lot of repetitive code that we're likely to get wrong, we should try to abstract the repetitive elements. In this case, we might abstract glue code by using a parser combinator or state monad.

This example does not seem to involve any "reasoning about change". The main proposed benefit seems to be removing names or values from scope, i.e. such that you cannot use `cursor4` twice (by accident). It seems to me there are many other ways to approach that particular concern.

I'm not particularly fond of named local variables when I can avoid them. But if I did use them, I'd probably use names like `cursorAfterPhoneNumber` instead of `cursor4`. Among other things, such names are much more likely to be stable across changes in schema, and also easy to debug, refine, and refactor (no pressure to renumber or rename).
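
As a rough sketch of the "abstract the glue" suggestion (the toy cursor type from the sketch above is repeated so this fragment stands alone; it is one possible shape, not any particular library): a state-threading bind hides the cursor plumbing, so there are no numbered cursors left to misuse.

type cursor = { input : string; pos : int }
type 'a result = Ok of 'a * cursor | Fail
type 'a parser = cursor -> 'a result

(* thread the cursor in exactly one place *)
let ( >>= ) (p : 'a parser) (f : 'a -> 'b parser) : 'b parser =
  fun cur -> match p cur with
    | Fail -> Fail
    | Ok (x, cur') -> f x cur'

let return x : 'a parser = fun cur -> Ok (x, cur)

let char c : char parser = fun cur ->
  if cur.pos < String.length cur.input && cur.input.[cur.pos] = c
  then Ok (c, { cur with pos = cur.pos + 1 })
  else Fail

(* no named intermediate cursors at all *)
let parse_ab : string parser =
  char 'a' >>= fun _ ->
  char 'b' >>= fun _ ->
  return "ab"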

If we have a lot of

If we have a lot of repetitive code that we're likely to get wrong, we should try to abstract the repetitive elements.

Should we? Although I'm inclined to think powerful abstraction facilities are desirable, this is because I don't want the programming language to limit my choice of abstraction — not because I "never met an abstraction I didn't like". On the contrary, I've met quite a lot of abstractions I didn't like, and often felt it was the best that could be done in that programming language but the language should have been able to do better.

Verily, the elementary case in favor of arbitrary abstractions is to reduce the chance of making mistakes when doing the same thing again and again... but surely there are drawbacks as well. How about greater difficulty of understanding, together with greater difficulty of dealing with things that don't quite fit the pattern? Which amounts to: harder to use and less flexible. Presumably it'd be easier to mitigate these problems in a language with more versatile facilities for building abstractions (i.e., "more abstractively powerful"); but it doesn't follow that all the things thus facilitated are specifically desirable.

awkward abstractions

As you mention, languages often limit abstraction. But awkward abstractions in languages often become DSLs, scripts, or frameworks. I believe that a balancing point for repetition between 'should abstract' and 'shouldn't bother' can vary a fair amount based on the language and context, and how easy, natural, and smoothly integrated the necessary abstraction would be.

We would do well to develop languages or paradigms more effective at abstracting & integrating frameworks.

Regarding 'understanding' - one must be careful to weigh the overheads for comprehending a new abstraction vs. the aggregate overheads and semantic noise from repeating a poorly abstracted structure and trying to see the important differences. Either can contribute to misunderstanding. I believe this rolls into the aforementioned balancing act.

Why?

Why do this, what is wrong with symbol rebinding? References are a problem due to potential side-effects, but let the compiler do the loop unrolling. I would also just plain return the cursors.

The cursor object can be immutable, but you can allow rebinding, so that you cannot change any properties on the cursor, but you can swap cursors.

f(cursor x) {
   do {
       result = parse(x)
       x = result.x
   } while (result.success)
}

With move semantics, returning local objects from functions is very efficient; it avoids new/delete, avoids references, etc.

re symbol rebinding

Rebinding is nicer than mutation in contexts involving lexical closures, side effects, parallelism, or lazy evaluation. But it can still be relatively difficult to reason about, e.g. in cases involving conditional rebinding within a loop.

To the extent we use it at all, I think there is value in favoring tail calls or loop variables (e.g. `foreach foo in fooList`) to express rebinding in a structured manner. Equational reasoning is easier with a clear lexical scope for each binding.
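
A small OCaml sketch of that structured style: a fold stands in for the foreach, and the accumulator is rebound exactly once per iteration, each time within a clear lexical scope.

(* sum a list: the accumulator is "rebound" once per element, but each
   binding has an obvious scope, so equational reasoning still applies *)
let sum_fold xs = List.fold_left (fun acc x -> acc + x) 0 xs

(* the same structure expressed as a tail call *)
let sum_tail xs =
  let rec go acc = function
    | [] -> acc
    | x :: rest -> go (acc + x) rest
  in
  go 0 xs

let () = assert (sum_fold [1; 2; 3] = 6 && sum_tail [1; 2; 3] = 6)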

Foreach and Algorithms

Yes, 'foreach' is a nice way to do rebinding; it's clearly structural, and it will terminate with data (but not co-data).

If reasoning about code is easier, why are algorithms written using symbol rebinding? If I pick up any book on algorithms, for example Tarjan's little book on network algorithms, they are all expressed in a style using value semantics and symbol rebinding.

network effects

If reasoning about code is easier, why are algorithms written using symbol rebinding?

Context and history shape our actions. Algorithm books are written in context of PLs and tools familiar to authors and their audiences. People can push an alternative notation, but it is difficult to teach or learn two new things (both algorithm and notation) at the same time. Authors and audiences are understandably reluctant to try.

Most questions of the form "If foo is better, why isn't it more popular?" - in ANY domain - are remarkably naive to network effects, path dependence, and various social forces. Quality is a minor factor in success.

Natural Understanding

Is it possible that it is easier to reason about and construct algorithms using symbol rebinding? It certainly leads to more compact notation, and it may be easier to intuitively grasp the meaning. One should consider that the simpler and more natural notation is the one humans first thought of, and as humans are the primary consumers of these things, perhaps the more natural notation is somehow easier. The first algorithms were written before PLs were even (officially) thought of.

I like Stepanov's view on this, it is mathematics that has to improve to cope with memory, not programming that has to give it up (apologies for the bad paraphrasing).

natural misunderstanding

To clarify: I don't consider all forms of 'thinking' to be 'reasoning'. While some forms of thinking may seem easier or more 'natural', many are biased, fallacious, imprecise, inaccurate, or otherwise ineffective. Mathematics and logics provide a basis for effective thinking, i.e. 'reasoning'. And equational reasoning is among the easier approaches of reasoning for humans to grasp.

Programming today is consistently buggy and overbudget. Even algorithms found in books are often flawed. I disagree with your opinion that it's mathematics, not programming, that should change. But I would agree with a weaker statement: that it would be nice if mathematics can find effective expression using metaphors we humans find more natural or intuitive. (e.g. I'm experimenting with leveraging models of hands and zippers to model a game-like programming environment, related to math in terms of category theory. There seems to be a useful similarity between editing gestures and groupoids.)

Whose Opinion?

It wasn't my opinion, but that of Alex Stepanov, who is a mathematician and a programmer and so is probably qualified to make such a comment. I tend to agree with him.

Knuth's work is written using assignments too, and he also is a mathematician.

It would seem the mathematicians have less problems with rebinding symbols than the computer people.

Knuth latest remark

It was something about how all geeks know that at some point you'll need to add one to a counter.

I take that to mean that he has assumed a position in the latest debates about where to go with PLs.

Somewhere, I thought it was also due to a debate sprung from the latest remarks of Barbara Liskov on PLs. But I guess that's connecting dots which aren't there.

Absolutely not. People

Absolutely not. People actually do think in terms of time and change, and symbol rebinding is just an extension of that. To get people to think timelessly (i.e. more mathematically) is actually quite difficult, and not as natural to us.

So ya, if 100 thousand years of evolution vs. 2000 years of math should be considered a "network effect" then maybe you are right.

People think in terms of

People think in terms of time and change, I agree. But they aren't very good at it, not even at the small scale of estimating where two trains will pass each other or dodging a punch, much less the medium scale of a block puzzle or larger scales. I think one could present a good argument that humans are better adapted to static and timeless thinking.

Beyond that, treating symbol rebinding as sufficiently analogous to the motions and changes people grok seems a stretch.

Besides, those 2000 years of math have seen the greatest advances in human technology. This suggests that reasoning timelessly is much easier for us. (Again, reasoning isn't merely thinking or reacting or intuiting. It's thinking effectively - cogent forms of thought and argument.) As I mention above, I distinguish thinking from reasoning. Easier thinking is only useful insofar as it's effective. Those 100 thousand years of prior history aren't exactly a monument to effectiveness of natural human thinking.

Humans aren't adapted to

Humans aren't adapted to static/timeless thinking at all. That we are capable of it through lots of training is more due to our flexibility (an accident that was not subject to natural selection), not some innate ability that was useful for hunting mastodons. That programming is the way it is today is really no historical accident at all.

Symbol rebinding is "change" to the person even if it isn't really change to the computer (via SSA transformation). Take away this ability and lots of code becomes harder to write, or has to be completely re-architected to become elegant again (e.g. using parser combinators rather than recursive descent).

Hunting mastodons and theorems

I don't think we know nearly enough, yet, about the strengths of stateful thinking, to be able to make an informed judgement on its merits and demerits. Indeed, what we know about the merits and demerits of different ways of thinking may be more random than what we've evolved to be good at; we may have evolved to think a certain way because that approach has some advantages that our meta-reasoning, still in its infancy, hasn't caught up with. My intuition tells me there are deeper things to say in favor of stateful thinking than what we grok atm; and until we know what those things are, we won't be able to make an informed judgement, one way or another. Stateful thinking is apparently good for lots of things we needed to do during our evolution; stateless thinking is apparently good for applying our current formal tools to various kinds of reasoning. The latter may be more coincidental than the former (or not; but I don't think we're ready to make that call).

loop invariant

Humans aren't adapted to static/timeless thinking at all.

I find this assertion utterly ludicrous. The vast majority of human experience and thinking involves places and things that don't move about on us, or conditions and behavior that are predictable primarily in their similarity from day to day or year to year.

It is my observation that most human thinking favors static systems, and further that humans are relatively uncomfortable with significant changes of any sort. But I'll grant that some forms of 'timeless' thinking are more of the 'loop invariant' nature - e.g. day/night, work schedules, habits, traditions, seasons, holidays and weekends - and thus may not seem timeless unless you step back a little.

And other forms of timeless thinking do involve moving parts. E.g. we like to think of roads and plumbing in 'timeless' ways, even though they serve as platforms for flowing elements. When this breaks down - e.g. with traffic jams - we have a difficult time understanding why... we're better at understanding the static parts and stable behaviors than the dynamic ones.

not some innate ability that was useful for hunting mastodons

Hunting effectively also takes a lot of training or practice. It isn't some natural ability we're born into. And hitting a moving target with a thrown spear involves only a few 'moving' things - hardly analogous to the scale of a modern software application.

Take away this ability [to change] and lots of code becomes harder to write

I do not suggest taking away the ability to express programs in terms of change. But change does easily hinder reasoning. I would suggest making it easy to control, e.g. by avoiding it as an implicit or global feature, favoring determinism and repeatability, supporting confinement or isolation.

I find this assertion

I find this assertion utterly ludicrous. The vast majority of human experience and thinking involves places and things that don't move about on us, or conditions and behavior that are predictable primarily in their similarity from day to day or year to year.

That is not stateless, however, or even formal. We've been able to replicate this behavior in the lab relatively easily (with machine learning), and it didn't involve time-free thinking on the computer's part. Rather, it is our ability to abstract over multiple events that is key, not that we exist in static systems, which we don't.

Hunting effectively also takes a lot of training or practice. It isn't some natural ability we're born into.

Are you sure about that? Obviously, most of us don't do it anymore, but back when we had to, it didn't require "a lot of training or practice."

favoring determinism and repeatability, supporting confinement or isolation.

Of course, but we can do that without taking away convenient local variable reassignment.

Regardless of evolving in a

Regardless of evolving in a stateful system, we humans tend to think about things as stable or static. We have difficulty reasoning about more than one thing changing at a time, much less emergent behavior.

I've not claimed we exist in static systems or that we are unable to abstract over events or motion. Rather, I've claimed we aren't very effective in our thinking about dynamic systems, that we are much more effective in thinking about stable and invariant behaviors.

This is perhaps because, in a static system, there is simply less to think about. But it seems ridiculous to deny that most things in our experience really are stable, and we think about even changing things (like human faces) as stable, and that this is often good enough.

As a trivial example of how even dynamic systems are both mostly static and framed statically by humans: your toothbrush doesn't move on its own, and it tends to have a stable shape from day to day by which you can recognize it, and most likely you put it in the same approximate place every time you're done with it, and you'd be upset if other people moved it on you.

Are you sure about that? Obviously, most of us don't do it anymore, but back when we had to, it didn't require "a lot of training or practice."

Training and practice are a big part of the cultures where hunting was important. But yes, I am sure we require training and practice to hunt effectively. Study some historical cultures, esp. regarding how children are raised, if you don't believe me.

we can do that without taking away convenient local variable reassignment

Depends on what you mean by 'convenient' and 'local'. In any case, such constraints only mitigate; it's still more difficult to reason about a program when symbols have different values or behaviors from step to step.

It isn't as though I'm completely against expressing programs with change. Stack shuffling functions, even in a purely functional concatenative language, have similar issues for reasoning vs. convenience or intuition (which is not the same as reasoning).

Referential Transparency

Aren't referential transparency and call-by-value much more important for reasoning about programs than symbol rebinding? In other words, we can allow symbol rebinding and keep the other two; isn't that a good compromise?

referential transparency

Referential transparency assumes a referential context - e.g. a syntactic scope in which a fragment of code that references its environment has a stable meaning. This context allows us to reorganize expressions, insert or remove statements, and otherwise manipulate subprograms without altering their meaning. There is a tradeoff between how flexible is your symbol rebinding vs. how difficult it is to reason about this referential context and thus leverage referential transparency. As I mentioned earlier, rebinding in structured ways is more amenable to equational reasoning - of which referential transparency is an example.
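
A small illustration of that referential-context point (pure_f and impure_f are toy examples of my own, with OCaml syntax used only for concreteness): with a stable context, the let-binding and the repeated call are interchangeable; once the environment can change between uses, they are not.

(* pure: binding the result once or repeating the call means the same thing *)
let pure_f x = x * x
let a1 = let a = pure_f 3 in a + a
let a2 = pure_f 3 + pure_f 3            (* a1 = a2 = 18 *)

(* impure: the same rewrite changes the meaning *)
let counter = ref 0
let impure_f _x = incr counter; !counter
let b1 = let b = impure_f 3 in b + b    (* one call: counter = 1, so b1 = 2 *)
let b2 = impure_f 3 + impure_f 3        (* two calls: 2 + 3 = 5 *)

let () = assert (a1 = a2 && b1 <> b2)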

Call-by-value seems like a separate issue entirely. I suppose it might be important for reasoning about certain side-effects, divergence, and exceptions models. In the last few years, however, I've been favoring language designs where CBV is just an implementation/performance concern.

Ease of Reasoning

My interpretation of Stepanov's comments is that it's okay to make reasoning about code harder, if it makes coding easier. We spend a lot more time coding than we do reasoning about it, so that would be the correct optimisation.

Hen and egg?

Maybe people reason so little because languages are optimised for making it hard?

We spend a lot more time

We spend a lot more time coding than we do reasoning about it

Do we really?

The average number of lines of code manipulated per programmer day is quite low, albeit heavily front-loaded with a long tail for maintenance and enhancement. I've seen estimates in the 10 LoC to 25 LoC range, though I'd be more interested in some real empirical stats from github or sourceforge. Assuming an average 4 hour workday in front of code (the rest spent on meetings, e-mail, travel, powerpoint, etc.) that's perhaps a line of code every ten minutes.

Meanwhile, time spent reasoning about code broadly includes isolating bugs, figuring out where to add an extension, reviewing APIs and documentation, exploring ideas, determining how to achieve some goal without breaking the system.

In my own ideal, the labor barrier to actual coding would be as small as possible... i.e. such that new ideas are readily expressed in an unceremonious, parsimonious manner without any need for boiler-plate (such as imports, public static void main, initializing subsystems, etc..). Similar features are also useful for scripting.

Of course, some people have different ideals. E.g. in Why I Like Java, Mark Dominus describes Java:

Java is neither a good nor a bad language. It is a mediocre language, and there is no struggle. In Haskell or even in Perl you are always worrying about whether you are doing something in the cleanest and the best way. In Java, you can forget about doing it in the cleanest or the best way, because that is impossible. Whatever you do, however hard you try, the code will come out mediocre, verbose, redundant, and bloated, and the only thing you can do is relax and keep turning the crank until the necessary amount of code has come out of the spout. If it takes ten times as much code as it would to program in Haskell, that is all right, because the IDE will generate half of it for you, and you are still being paid to write the other half.

I wonder if Java developers might agree with your broad characterization that "we spend a lot more time coding".

Sounds Awful

That is completely the opposite to what I intended. Algorithms should be published, shared, and refined. They are never done, they can always be improved.

As I pointed out the mathematicians who actually do a lot of reasoning about code, like Knuth, seem to have no problem with rebinding symbols.

Stepanov identifies the key property for reasoning about code as regularity. This even allows mutable data and arguments to be passed by reference. The key property is that a regular procedure returns the same result if called with the same values (or objects passed by reference in the same state). A regular datatype has value semantics when assigned, etc.
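
A minimal sketch of that regularity property (toy OCaml examples of my own, not Stepanov's definitions): the regular procedure depends only on its arguments, the irregular one also on hidden state.

(* regular: the same arguments always give the same result *)
let regular_sum xs = List.fold_left ( + ) 0 xs

(* irregular: the result also depends on state that is not among the arguments *)
let hidden = ref 0
let irregular_sum xs = incr hidden; !hidden + List.fold_left ( + ) 0 xs

let () =
  assert (regular_sum [1; 2] = regular_sum [1; 2]);
  assert (irregular_sum [1; 2] <> irregular_sum [1; 2])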

the mathematicians who

the mathematicians who actually do a lot of reasoning about code, like Knuth, seem to have no problem with rebinding symbols

To point out that Knuth uses rebinding of symbols (which seems rather necessary for his target audience) does not imply it causes him no extra challenge with reasoning. Knuth tends to keep algorithms small and isolated, which helps resist and mitigate any resulting aggregation of complexity.

Algorithms should be published, shared, and refined. They are never done, they can always be improved.

Refinements require thinking about code. Reusing code means less time writing it. Thus, publishing, sharing, and refining algorithms, e.g. expressed as code, tends to favor the "thinking" side of the thinking-vs-writing spectrum. If you think the opposite, I'd be interested in your reasoning.

Stepanov identifies the key property for reasoning about code as regularity.

Stepanov should try debugging Cow or Whitespace or perhaps a program expressed as a cellular automaton. He'll soon reject his hypothesis that 'regularity' is the key concern for reasoning about code. ;)

Unlambda

Okay, I walked into that one. Clearly regularity is not the only property, but it is an important one. That's clearly my mistake not Stepanov's. However I take your CoW and raise you an Unlambda.

Most reasoning I can see being done about programs happens with their types, and as long as symbols keep the same types, changing their values does not affect that.

Clearly dynamic languages like Python and JavaScript are moving further from this, yet people seem to find them easier to use than statically typed languages.

Ten years ago I would have agreed completely that functional programming with immutable variables was the way forward. The problem is it does not make things easier. Trying to implement graph algorithms in functional languages is not easy, and our languages combine ad-hoc sets of features that make it difficult to determine which features contribute to making things better or worse.

My experiences lead me to believe that a quantified polymorphic type system with something like type classes is a good thing, and that attempts to add polymorphism in an ad-hoc way to languages results in a mess (C++ templates, Ada generics, Java generics). But I have also found that squeezing everything through a topological sieve to keep everything immutable makes some very simple and elegant algorithms hard to understand and harder to debug.

There are many papers on implementing efficient union-find in functional languages and none of them are really satisfactory. I would rather take Tarjan's definition straight from the book, have access to all his reasoning, and use that. The code is smaller, more readable, easier to maintain, and I find it easier to reason about than all the work on persistent and semi-persistent data structures. I find that compactness and simplicity are more important properties for me personally to reason about code than whether symbols get rebound. I find "x = x + 1" to be pretty easy to think about. Stepanov also says that he thinks the theory of iterators is as fundamental and important to programming as rings are to number theory.

The efficient functional union-find using semi-persistent data structures actually cheats and uses an internally mutable data structure which is carefully constructed so it looks read-only from the outside.

My personal view at the moment is that I need a language with a low-level imperative sub-language, and a declarative (immutable) high level language for abstractions.

lots of comments

Assuming 'regularity' refers to something like 'avoids special corner-cases' then I agree it contributes to reasoning.

Types are useful for reasoning, but often fail to address reasoning about performance, concurrency, deadlock, divergence, partial failure, physical safety (e.g. in robotics), battery/power consumption, and many other properties. A more sophisticated type system might help, but today we need to reason about values in many ways.

Dynamic languages are easier to use because we're free to use them incorrectly. A few statically typed languages have taken notice, e.g. GHC's `-fdefer-type-errors` and the recent interest in gradual typing.

I grant that mutability is convenient for implementing graph algorithms, and I often use a State monad to express graph algorithms.

Regarding performance of union-find and other algorithms, I find it amusing that advocates of imperative languages claim to do better than O(sqrt(N)*lg(N)), which is the physical limit for random access to uniquely identified resources distributed in 2D space (or cube root for 3D space).

In practice, most complexity in programming is of the aggregate sort - lots of simple parts are combined into a rather complicated mess. While an example like `x=x+1` is small and trivial to reason about by itself, it contributes more than immutable variables to complexity in a larger context that uses x.

IMO, zippers and streams (depending on use case) are a fine replacement for conventional iterators, and are easy to use in a purely functional manner.

If you like the borderline between functional and imperative, give linear types a try. :)
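
As a rough sketch of the zipper alternative mentioned above (a toy purely functional cursor over a list, not any particular library):

(* a zipper: the elements already visited (reversed) plus those still ahead *)
type 'a zipper = { before : 'a list; after : 'a list }

let of_list xs = { before = []; after = xs }

let focus z = match z.after with [] -> None | x :: _ -> Some x

(* moving is just rearranging the two lists - no mutation, old zippers stay valid *)
let next z = match z.after with
  | [] -> z
  | x :: rest -> { before = x :: z.before; after = rest }

(* "writing through the iterator" yields a new zipper instead of updating in place *)
let replace x z = match z.after with
  | [] -> z
  | _ :: rest -> { before = z.before; after = x :: rest }

let to_list z = List.rev_append z.before z.after

let () =
  let z = of_list [1; 2; 3] |> next |> replace 9 in
  assert (focus z = Some 9 && to_list z = [1; 9; 3])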

Linear Types

Tarjan's union-find is O(m * inverse-Ackermann(m, n)), which is better than O(m * lg*(n)), and is the best I am aware of. It seems difficult to get close to this with a functional language, yet the imperative code is just 4 or 5 lines for each function.

I don't see how any amount of streams and zippers can beat that elegant simplicity and efficiency :-)

So I am looking at something that allows small containerised imperative components to be plumbed together generically using a declarative logic language that integrates with the type system. The imperative parts may be monadically typed, I'm not sure yet, but the idea is that type inference deals with all that: you just write a little imperative program and let the compiler worry about the types.

I am looking at linear types and linear logic at the moment, but I don't know if it's what I want yet.

Tarjan's union-find is O(m *

Tarjan's union-find is O(m * inverse-Ackermann(m, n)) [..] It seems difficult to get close to this with a functional language

When big-O terms ignore speed of light, size of integer/pointer representation, cache or locality, and other physical concerns, imperative tends to do better than functional. Imperative analysis cheats, primarily based on a 'simplifying' assumption of O(1) array/pointer access.

If you do account for physical concerns, both FP and imperative will have the same asymptotic limits (though occasionally FP requires a more awkward construction, e.g. use of zippers to model locality). Coefficients matter, of course, and that's where FP tends to lag behind. Modern hardware does a decent (albeit imperfect) job of supporting the O(1) access illusion for imperative code up to the limits of RAM.

I don't see how any amount of streams and zippers can beat that elegant simplicity and efficiency :-)

I mentioned streams and zippers as alternatives to iterators, not union-find. In context, I'm not sure what you mean by "that" in the above quoted sentence.

So I am looking at something that allows small containerised imperative components to be plumbed together generically using a declarative logic language

Your vision isn't very clear to me... something like flow-based programming with wiring expressed in Prolog? In any case, your vision would make a good subject for your own blog articles.

That...

I mean the elegant simplicity, and high performance of:

element function find (element x);
   do p(p(x)) /= p(x) -> x := p(x) := p(p(x)) od;
   return p(x);
end find;

Hm

Isn't that implementation wrong in the sense that it only updates the ancestor pointer of the current node, instead of (as it should) having each node in the ancestry chain retargeted to the canonical ancestor?

(That would say a few things about the "simplicity" of packing a loop, a test and an assignment all in a non-idiomatic single line.)

A correct ML implementation would be (but I don't find it "simple"):

let rec find x =
  match x.up with
    | None -> x
    | Some above_x ->
      let top = find above_x in
      x.up <- Some top; top

My Mistake

I can't cut and paste correctly, and I don't think any language can help me there. I missed the middle " -> x := ". Presumably without the '->' it would have been a syntax error.

It would be interesting to compare the machine code generated from the ML version and a C version.

Hm

I still don't see how your code would be correct, but it is so "elegant and simple" that I am unable to tell for sure.

If you have an ancestry chain of the form

A -> B
B -> C
C -> D
D -> E

you want find(A) to return E with an updated relation

A -> E
B -> E
C -> E
D -> E

It is clear to me why the ML version has this behavior: any find(...) call returns E and assigns the "up" pointer to this return value. The only way I see how to reason about your code is to execute it, and it gives something like

step (x=A)
p(A) is B
p(p(A)) is C
p(A) <- C
x <- C
graph:
A -> C
B -> C -> D -> E

step (x=C)
p(C) is D
p(p(C)) is E
p(C) <- E
x <- E
graph:
A -> C
B -> C
C -> E
D -> E

step (x=E):
p(E) = p(p(E)), stop

Path Halving

It's called path halving; see Tarjan, "Data Structures and Network Algorithms", page 29.

"When traversing a find path, make every other node on the path point to its grandparent". It has the same performance bound as the more common version, but without the nested loop or recursive call. If you repeated the analysis by running find a second time you would see the result you were looking for.

This is the more common version:

element function find (element x);
    if x /= p(x) -> p(x) := find(p(x)) fi;
    return p(x)
end find;

But I prefer the path halving algorithm. The ML version you give is exactly this, complete with mutable reference, so I don't see the difference here? I think perhaps we actually want the same thing, but are approaching from different sides, and in any case this algorithm is an example of mutable references rather than symbol rebinding, so doesn't really support my case.

Let's see if I can state my case more convincingly: we can say that a language needs support for both mutable references and immutable variables, but I still think there is a case for symbol rebinding with what Stepanov calls value semantics. This would be equivalent to an ML reference that cannot have multiple references to the same value, so it is mutable, but assignment creates a copy. This seems to be useful in that it is more restrictive than a reference, but more permissive than an immutable value.

Oh

So it's just a different algorithm. Problem solved, thanks.

Tarjan's union-find is not the best

Tarjan's union-find is O(m * inverse-Ackermann(m, n)), which is better than O(m * lg*(n)), and is the best I am aware of.

Despite big-O advantages, Tarjan's is not the fastest in practice because of cache effects. In fact, a long neglected version published by Rem in 1976 turns out to stomp all over its supposedly superior competition in both time and space.

For a simple persistent union-find, this one is just as simple as the imperative version, and performs comparably to an imperative implementation given their benchmarks. It's dependent on a fast persistent array, which the paper also describes.

interesting.

I'll have to give it a try, although in my application path-collapsing is faster as find dominates union. I agree that worst case time/space behaviour is not a very useful measure. I generally benchmark against a realistic data set for evaluation.

I don't see how performance of the persistent version can be as good as the destructive in-place algorithm. Anything involving memory allocation is going to be a lot slower doing the same amount of work. (Were they using some kind of memory allocation in the case of mutation, rather than destructive in-place update?).

I think the key is that they are looking at Prolog style systems with backtracking. For my Prolog implementation, I am just using a simple undo list, where we store the nodes to unlink. If we think about the restricted operations needed in the Prolog implementation, I think these two are the same. Effectively the persistent array operates on the current version and 'undoes' the changes on backtracking, so all it does is package the 'trail' and the disjoint set together. The persistent array used is actually a mutable object contained within an API that makes it look immutable. In the simple case where we don't want a history, this reduces to a simple mutable array. You could use this same persistent array structure in an imperative language to give value semantics, effectively optimising the copy-on-assignment.

This is an interesting point, nothing prevents you using immutable data structures in an imperative language, so an imperative language is 'superior' in the mathematical sense to a functional language, in that you can implement any functional (immutable data) algorithm in an imperative language, but not necessarily the other way around.

Anything involving memory

Anything involving memory allocation is going to be a lot slower doing the same amount of work.

Not much. It's just a pointer bump after all. Their persistent array basically performs ~5 writes instead of 1 write (4 to initialize diff, 1 to write the actual array slot). However, the write cost to the array may not be in cache but the bump pointer is. Thus, the array cache miss dominates everything and the 4 writes are practically invisible for non-trivial arrays, on average. This is why naive complexity metrics can be so misleading!
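
A hedged sketch of that diff-array idea, along the lines of the persistent arrays in the paper linked above (illustrative, not their exact code): the newest version holds the real array, and set turns the previous version into a small diff record pointing forward.

(* the current version holds a real array; older versions hold a diff
   (index, old value) pointing at a newer version *)
type 'a t = 'a data ref
and 'a data = Arr of 'a array | Diff of int * 'a * 'a t

let create n v = ref (Arr (Array.make n v))

(* make t the version holding the real array, reversing diffs along the way *)
let rec reroot t = match !t with
  | Arr a -> a
  | Diff (i, v, t') ->
      let a = reroot t' in
      let old = a.(i) in
      a.(i) <- v;
      t := Arr a;
      t' := Diff (i, old, t);
      a

let get t i = (reroot t).(i)

(* one write to the array slot plus one small Diff record -
   roughly the handful of writes discussed above *)
let set t i v =
  let a = reroot t in
  let old = a.(i) in
  a.(i) <- v;
  let fresh = ref (Arr a) in
  t := Diff (i, old, fresh);
  fresh

let () =
  let v0 = create 4 0 in
  let v1 = set v0 1 7 in
  assert (get v1 1 = 7 && get v0 1 = 0)   (* both versions remain usable *)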

This is an interesting point, nothing prevents you using immutable data structures in an imperative language, so an imperative language is 'superior' in the mathematical sense to a functional language

Yes, mutability increases expressiveness. That doesn't imply that mutability ought to be the default though, which is what you seem to be saying. Obviously pervasive mutation has its downsides, as would any pervasive sort of side-effect. That said, I think imperative languages are poorly served by the examples thus far, so I'm looking forward to seeing what you come up with!

Not using Rem's algorithm

I just noticed the persistent-arrays-based union-find is not using Rem's algorithm, just union-by-rank. It's also using path compression rather than path halving; I wonder if it would perform better with path halving? In any case there may be room for improvement.

I don't agree with your assessment of the memory allocation. Extending an array is just a pointer bump, providing you have already reserved enough memory. Malloc is much slower, as it has to follow the free block list to find a block that is just big enough. As we don't want to encourage fragmentation, most mallocs start from the smallest block and work up. Faster ones use hash tables of block sizes, or exponentially sized block pools (so you never waste more memory than the size of the allocation), but not normally just a pointer bump. Yes, with GC you may be able to bump the top-of-heap pointer, but you still pay the cost when the GC compacts the heap. It is interesting to consider that GC allows a lower call overhead for memory allocation initially, but recovers the 'cost' plus 'interest' asynchronously.

This could be another place it is losing out to potential imperative/mutable implementations, as a C++ vector can be extended in place, which reduces the malloc cost from N allocations to log2(N) allocations because it doubles the reserved size of the vector every time it allocates. Obviously this kind of optimisation is only possible when you can explain to the language that you have an extendable array of the same object. Although it may be possible to perform the same optimisation on inductive data-types, it will not always be beneficial, so there would need to be some way to indicate to the language the best optimisation strategy.

This is really my point, if you hide the memory, you will need to provide optimisation hints to the compiler so it knows the runtime use patterns of the memory. You cannot infer this statically at compile time, because ultimately that would involve solving the halting problem. My preference is for direct control, rather than indirect control through hints/pragmas.

As for what I will come up with - who knows, it could end up being ML with (the good parts of) JavaScript syntax for all I know...

Hm

Direct control is good if you have stopgaps to (statically) ensure that what you write is correct. So far none of the mainstream languages without a GC has been able to provide that safety -- except by using reference counting instead (C++), which has dubious performance implications.

I think efficiency is an important topic, but to me correctness is always rated first -- because it is what lets you be productive instead of wasting time fixing bugs. Of course for some specific purposes you have hard constraints, and that's a good reason to develop safe low-level languages. I think they should be presented as extensions of general-purpose GC-ed languages, rather than insular languages, so that we can not bother with memory for the 80% of code that don't need to.

GC implemented as library/module

I think GC should be implementable in the language, so that different libraries can use different optimised memory management. So generic GC would be in the standard library, and you pay the cost only if you want to use it. I think as much as possible should be expressible in the language itself, to remove as much magic hidden in the compiler as possible.

I agree

But I think the better solution is exposing compiler behavior to customization (or getting rid of the black box compiler) by the programmer rather than polluting the high level language semantics with (necessarily ad hoc) low level operational concerns.

Ad Hoc?

I am not sure I agree the semantics would be ad-hoc? You can start from a machine model like the random-access machine, and have a linear type system. Turing introduced memory to mathematics to enable new classes of problems to be solved - there is nothing ad-hoc about the semantics of memory. If you consider the semantics of memory ad-hoc, then the whole tradition of mathematics is ad-hoc. You certainly could not write a garbage collector without memory, and I am sure there are many algorithms that cannot be implemented without memory.

I think getting rid of as much of the black-box of the compiler as possible is a good idea, although I think the idea of customising the compiler sounds more complex than simply exposing enough to the programmer to let them write efficient code in the first place.

How are you going to

How are you going to implement garbage collection without modeling fixed width pointers that don't fit into the random-access machine model?

what's the problem?

The Schonhage model has one unbounded base address register, everything else can be bounded. The heap base address can be loaded into the address register at program start. Now we can operate only using fixed width (offset) addresses. You might fix the base address at zero and just ignore it too - the second method seems preferable to me at the moment.

Everything else can be bounded

If you work with a fixed size heap, then I don't see how you can faithfully encode algorithms that deal with objects of unbounded size. Any translation will always introduce a new possibility: out of memory. If you abstract away from real memory and give yourself an unbounded heap, then I don't see how you're going to able to assume fixed width pointers.

Out of Memory

I don't see how this is any worse than current garbage collectors, and out-of-memory is a possibility in all functional languages (and they don't handle it very well). For example, I have had Haskell crash my laptop rather than just exiting with out-of-memory, and ML could quit at any point due to memory exhaustion (the side effects are 'hidden' in the function 'arrow' type).

Right

Current languages maintain a distinction between the platonic ideal, exposed in the language semantics, and the compiled implementation, which can run out of memory. If you go to a random access machine, you get to continue pretending you have infinite memory in your high level semantics, but you don't get to work with fixed width pointers and therefore might miss out on some implementation tricks. If you move to fixed width pointers, you lose out on the clean high level semantics.

My thinking is that we want to keep the explicit representation selection step that currently happens in compilers, but clean it up and expose it in a nice way as part of the programming process. i.e. we start with clean high level semantics that don't know about memory at all and infer a representation at a lower level. Or maybe we infer several lower-level representations (CPU, GPU, network distributed, etc.), and get to use different garbage collection policies in each setting (since no single policy will probably make sense for all of those settings).

Random Access Machine

As far as I understand it you can work with fixed width pointers in the random access machine. You are allowed to bound all the registers except the address register. However if the address register is pre-loaded with zero, and never allowed to change, you are effectively limited to addressing the range of the bounded registers which can be used for offset addressing. So if we have 1 address + N 64bit integer registers, we get 64 bit addressing, and 64bit data and are still within the formal model of the random-access-machine.

Yes, but if your e.g. linked

Yes, but if your e.g. linked list implementation only uses 64bit registers, then it isn't a faithful linked list implementation. If your language supports formal proof, then this is a problem because you'd like to establish that your linked list implementation works and it doesn't.

I don't see the problem.

I don't see a problem with doing the proof in the additive group of integers mod 2^n. In general, if you assume the list 'link' operation takes two nodes and links them, you can give the proof assuming all the nodes are already allocated, and no memory allocation occurs during link or unlink operations. List operations obviously don't need to allocate memory, so the only issue is with insert, which might have to create a new node pointing to the data.

Then you implement memory transactions, so that you pre-allocate all memory required for the link operation (matching the requirements of the proof) in the begin-transaction.

transaction
    list reserve x
begin
    foreach y in x 
        list insert y
    end for
fail
    :
    :
    :
end

Maybe functions could be used as transactions to save creating more syntax? Then not having enough memory would just be an exception, and you wouldn't need pre-allocation, you would still need a method of rolling-back incomplete operations however.

The problem is that 64 bits

The problem is that 64 bits is not large enough to address an arbitrary set of nodes. If you assume that it is, you can prove false.

Pre-allocate memory

If we assume all nodes already exist in memory, so that values are singleton list nodes, then we never have to do any memory allocation in any of the list primitives. By simple case analysis we can prove the handling of lists is complete, and hence there is no problem. Effectively we prove all the list operations under the assumption that all the values and lists to be operated on already fit in memory. I don't see any problems with this proof sketch, do you?

My point is just that it is

My point is just that it is inconsistent to assume that all pointers are 64 bits when there are more than 2^64 distinct nodes. The problem with inconsistency is not that you can't prove the true thing you want but rather that you can prove the false things you don't want.

What can you prove false?

What can you prove false? How is it problematic to limit the number of nodes? If a linked list consists of a 64bit pointer, and a 64bit integer value for example, then you can have 2^63 nodes in memory max. So you have N nodes where N is in the group mod 2^63.

If you assume that 1) all

If you assume that 1) all pointers are 64 bits and 2) that you have as many unique nodes as you need for any purpose already allocated, then you can reach contradiction (prove false) formally by the pigeonhole principle. This is a shallow observation, and I have no idea what you mean by "you have N nodes where N is the group mod 2^63."

Pigeon Holes mod 2^63

The pigeonhole principle states that if you have N items and M containers and N > M, then there must be more than one item in at least one container.

Perhaps the bit you are missing is that you state "you have as many unique nodes as you need for any purpose" and I state "you have N nodes where N is a member of the finite group mod 2^63". What I am saying is that N (the number of items) is not a natural number or an integer, but a member of something like the discrete Archimedean ring of numbers mod 2^63. So we have limited N to be a member of a finite group, and just like in number theory, we can have proofs that are about numbers and functions in that finite group. We prove by induction, so we can say that for all M < N < 2^63 we can prove our list operations correct, without having to test every possibility directly, given that each of the N < 2^63 nodes occupies two 64-bit words (I am assuming 64-bit word addressing, but you can adjust for byte addressing with N < 2^60) and pointers are 64 bits.

In fact I suspect we could prove our list operations correct for all M < N < P, and then by induction on P prove that the list operations are correct for all memory sizes. So we can prove that if all the nodes fit in memory, the operations will be correct.
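Roughly, the claim would be something like the following (a sketch only, still assuming two words per node, so that N nodes fit in a memory of P words whenever 2N ≤ P):

$$\forall P.\; \forall N.\; 2N \le P \;\Rightarrow\; \text{insert, link and unlink meet their abstract specifications on every list of at most } N \text{ nodes.}$$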
______________________________

back to the start

Re: "we can prove that if all the nodes fit in memory, the operations will be correct"

This seems functionally equivalent to an infinite memory assumption.

Re: reasoning with fixed memory

Now your program has two lists and a map. How will your proposed proofs compose, if each of them has been proven correct assuming the full address space?

Assuming sufficient memory

I think you are trying to prove something else, like that it's a closed group - which it isn't, so you obviously cannot prove that. You can prove each is correct with sufficient memory, so now all you need to do is check the assumption before you run the computation. It's just a proof under an assumption - not infinite memory, but finite memory P.

Assuming sufficient memory

Assuming sufficient memory to complete the operation seems functionally equivalent to assuming infinite memory. I cannot think of any example program that would distinguish 'sufficient memory' from 'infinite memory' and halt.

It is unlikely you can always predict memory usage ahead of time. For a simple example, perhaps your operation involves adding a list of strings to a trie. The number of nodes added to the trie per insert is a function of string length. The number of strings added to the trie is a function of list length. How would you go about checking this before running your computation? For a more sophisticated example, what if your operation was interpreting a script?

If you use lots of smaller checks, how is that an improvement?

Upper Bounds

I guess. There are two separate problems: is the operation correct for all possible inputs, and will this operation fit in memory? For the second it is sufficient to show that an upper bound on memory usage will fit. If, whilst proving the operation correct, you can statically calculate an upper bound on memory use (dependent on dynamic argument data like string length and trie rank, for example), then you can know in advance whether there is sufficient memory to complete the operation.

There is a simple case for in-place destructive update, where no additional memory is needed, which can therefore be proved correct with a static upper bound of zero additional memory. This tells you quite a lot more than assuming infinite memory.
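For the trie example above, a sketch of what such a statically derived bound might look like, in Haskell; the node size and the one-node-per-character worst case are assumptions made up for illustration, not properties of any particular trie:

-- Assumed worst-case cost of a single trie node, in bytes.
trieNodeBytes :: Int
trieNodeBytes = 64

-- Worst case: every character of every string allocates a fresh node,
-- so the bound depends only on the dynamic argument data (the strings).
trieInsertBound :: [String] -> Int
trieInsertBound strs = trieNodeBytes * sum (map length strs)

-- Check the assumption before running the computation, as suggested above.
enoughMemoryFor :: Int -> [String] -> Bool
enoughMemoryFor freeBytes strs = trieInsertBound strs <= freeBytes

The point is only that the bound is a cheap function of the dynamic arguments, so it can be checked before the insertion runs.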
________________________________________

For arbitrary values of 64?

When you write 'for all memory sizes', are you talking about making the pointer width a variable? You can certainly prove results of the form 'if all nodes fit in memory, then ...' everywhere, but the whole point of my objection is that this is annoying and an unwanted complication.

I still don't understand the significance of modular arithmetic to any of this.

Modular Arithmetic

Because you have to deal with pointer overflow. Add two pointers greater than half the address space and you get an overflow. Computer 'ints' do not behave like mathematical integers or natural numbers; they behave like the integers mod 2^n (groups, or rather rings).
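For example, a small Haskell illustration of the wrap-around (the particular values are arbitrary):

import Data.Word (Word64)

-- Fixed-width machine words behave like integers mod 2^64, not like the
-- mathematical naturals: adding two values in the upper half of the range
-- wraps around instead of growing.
main :: IO ()
main = do
  let p, q :: Word64
      p = 2 ^ 63 + 1
      q = 2 ^ 63 + 5
  print (p + q)   -- prints 6, i.e. (p + q) mod 2^64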

out-of-memory is a

out-of-memory is a possibility in all functional languages

Not all of them. Funloft and SAFL would serve as counter-examples. :)

In any case, out-of-memory isn't any more elegant in non-functional languages. The last attempted allocation gets blamed, even if it's tiny or the real problem lies somewhere else. Most of the time, there isn't any sensible action but to fail.

Turing introduced memory to

Turing introduced memory to mathematics to enable new classes of problems to be solved

And Church demonstrated it wasn't necessary.

If you consider the semantics of memory ad-hoc, then the whole tradition of mathematics is ad-hoc.

Mathematics and computations do require space. They don't require mutation. How you use 'memory' seems to conflate the two concepts.

You certainly could not write a garbage collector without memory

You can model a garbage collector that does not use mutation, i.e. that instead computes the 'next' state of a machine. Doing so may even be useful. Of course, you would need a garbage collector to run the resulting machine in bounded space. There is some bootstrapping involved.
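For instance, something like the following toy Haskell sketch, where the heap representation and all the names are invented for illustration rather than taken from any real collector:

import qualified Data.Map as Map
import qualified Data.Set as Set

type Addr = Int
data Node = Node { payload :: String, refs :: [Addr] }

-- An immutable heap: collection does not mutate it, it just computes the
-- next heap containing only the nodes reachable from the roots.
type Heap = Map.Map Addr Node

reachable :: Heap -> [Addr] -> Set.Set Addr
reachable heap = go Set.empty
  where
    go seen []     = seen
    go seen (a:as)
      | a `Set.member` seen = go seen as
      | otherwise           =
          let next = maybe [] refs (Map.lookup a heap)
          in  go (Set.insert a seen) (next ++ as)

collect :: [Addr] -> Heap -> Heap
collect roots heap = Map.filterWithKey (\a _ -> a `Set.member` live) heap
  where live = reachable heap roots

Running such a collector in bounded space is, of course, exactly where the bootstrapping comes in.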

I am sure there are many algorithms that cannot be implemented without memory.

This seems like a trivial claim. As I understand it, algorithms essentially ARE the implementation. Bubblesort is an algorithm, an implementation of a sort function, and assumes a mutable array. QED.

Yet, there are other algorithms that implement sort functions, and not all of them require mutable memory. Indeed, we can infer from the Church-Turing thesis that there is no computable function that requires mutable memory.

Turing and Church

Yet Church himself said he found Turing's method more satisfying.

Further, it seems somewhat convenient not to consider construction a form of mutation. Lambda calculus is mutating the stack all the time. Unless you have infinite memory, you cannot actually implement a lambda calculus without mutation.

Not to put too fine a point on it

Evidently, unless you have infinite memory, you cannot implement a lambda calculus. Or any other Turing-powerful model of computation.

Easier to reason with imperative code?

I see this as the difference between natural numbers and groups. Just like in number theory, where we can operate in a group, I think one could define a "Finite Turing Machine". I am not sure you could do the same thing for lambda calculus; the maths would seem much harder.

Could you say that, for memory usage, lambda calculus is harder to reason about than the imperative model (the FTM)? In which case one could conclude that people who claim functional programming is easier to reason about are being selective in their definition of 'reasoning'?

reasoning about memory usage

Modulo lazy evaluation, and assuming predictable tail-call optimizations, reasoning about space usage in lambda calculus is not especially difficult. It's just a matter of knowing which values are sticking around. All those asymptotic notations for space and time still apply.

Even if you evaluate lambda calculus on a 'finite machine', I don't believe it would be more difficult to reason about than an imperative program on a similarly sized machine.

For comparison, reasoning about space/time requirements for logic programming is typically much more difficult, i.e. since search is implicit.
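To illustrate the "modulo lazy evaluation" caveat with a standard Haskell example: both definitions below are tail recursive, but only the strict fold runs in constant space.

import Data.List (foldl')

-- Builds a chain of suspended (+) thunks proportional to the list length,
-- so it can exhaust memory on large inputs.
sumLazy :: [Integer] -> Integer
sumLazy = foldl (+) 0

-- Forces the accumulator at every step, so it runs in constant space.
sumStrict :: [Integer] -> Integer
sumStrict = foldl' (+) 0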

hop

Only if you don't care about cost

I think GC should be implementable in the language, so that different libraries can use different optimised memory management.

A good memory manager needs to be very tightly coupled with the compiler and runtime system. I don't think you are aware of the degree of interaction and co-tuning going on between all those parts in modern, high-performance language implementations, and of how mandatory that is for achieving competitive performance.

You won't get even close with a self-hosted GC implementation. In practice, it is merely a way to ensure that automatic memory management is dog slow, libraries don't compose, and the language has to be inherently unsafe. Boehm's collector is a good example of that, although it is an impressive achievement, as far as C++ goes.

I assume you're responding

I assume you're responding at least partially to the bit about different libraries using different memory management. I'm skeptical of that idea, too. But allowing large applications (browsers, game engines, etc.) to tune their own memory policies would surely provide opportunities to increase performance.

better abstractions?

what would magically fix this? or make it at least easier to see, manage, reify, wrangle, sketch on napkins? would some new language-ecosystem architecture make it such that we can frob the knobs at a higher level? or are we inherently doomed to have people who can keep all aspects in their head at the same time, and implement it in some low level language?

No magic fix

Nothing can "magically" fix this. You have to think of GC primitives (e.g., allocation) as an integral part of the instruction set of the abstract machine a GC'ed language is compiled to. And as with all other instructions, the compiler can try to optimise certain sequences and patterns iff it knows what these instructions do, and what invariants it can rely on. One recent example from V8 is allocation folding, which is only possible if the compiler knows exactly how the GC lays out and manages its memory. (In principle, of course, you could always try to keep broadening an "abstract" GC interface to support all the optimisations you come up with. But the resulting interface won't be very abstract anymore, nor stable enough for libraries to follow.)

Memory Allocation Reflection

In this particular example (allocation folding) it may be easier, as it could be decided at compile time through some kind of memory-allocation reflection. This would be a necessary part of allowing GC to be implemented in libraries anyway.

If that isn't sufficient, optimisations could be specified in the language. So for example, you write a garbage-collector library that includes optimisations for any code that 'uses' the garbage collector. The optimisations might be tree-rewrites on the abstract syntax tree of the language for example. I would prefer to avoid making this necessary though.

Don't see it

I don't see what sort of reflection you have in mind, or how you want to be able to use it at compile time?? In any case, the presence of reflection usually only helps to make optimisations much harder, not easier.

I also don't see how you want to provide compiler optimisations through a library, while still staying sane or safe. For most languages and domains that would be a complete non-starter. I'm also pretty sure it wouldn't compose. Getting optimisation phases into the right order and have them interact in the best way is already a ridiculously hard and subtle problem in a closed-world compiler.

Reflection and Optimisation.

The reflection would be, for example, being able to search for all memory allocations under an AST node so that they could be grouped together.

If it wasn't a hard problem it wouldn't be interesting :-)
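As a toy Haskell sketch of the kind of reflection I mean (the AST here is invented purely for illustration):

-- A made-up expression type in which allocations are explicit nodes.
data Expr
  = Alloc Int        -- allocate this many bytes
  | Seq [Expr]       -- a sequence of sub-expressions
  | Other String     -- anything that does not allocate
  deriving Show

-- "Reflection" over the tree: collect every allocation under a node.
allocationsUnder :: Expr -> [Int]
allocationsUnder (Alloc n) = [n]
allocationsUnder (Seq es)  = concatMap allocationsUnder es
allocationsUnder (Other _) = []

-- The size that a single grouped (folded) allocation would need.
foldedSize :: Expr -> Int
foldedSize = sum . allocationsUnder

A grouping optimisation would then replace the individual allocations under a node e with one allocation of foldedSize e bytes.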

Reflection?

"Reflection" means to execute user code. In most languages, you cannot do that at compile time, nor do you want that. What you maybe have in mind is a form of staging, but that clearly isn't a widely applicable solution (ignoring the question whether it can work at all in this case).

Reflection

Reflection means to be able to examine the code itself or metadata about it at runtime. Compile time reflection would be an extension of that concept, and I'm not sure if there is a more widely accepted term for it.

From an OOPSLA '87 paper

From an OOPSLA '87 paper (Pattie Maes):

A reflective system is a computational system which is about itself in a causally connected way. In order to substantiate this definition, we next discuss relevant concepts such as computational system, about-ness and causal connection.

(Quote saved in my file partly as an illustration of what happens when definitions are written top-down instead of bottom-up.)

Haskell Type Classes

Haskell type classes allow compile-time reflection (of a sort), and they do effectively allow a (logic) program to execute at compile time. Obviously it is better if this is a terminating computation. I wonder if applicative functors could be used to the same end in ML?
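A minimal example of the kind of compile-time computation I mean, in the classic HList style; the addition below is performed entirely by the type checker:

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Type-level Peano numerals.
data Zero
data Succ n

-- Addition as a type-class "logic program", resolved entirely at compile time.
class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- Value-level stand-ins, only so we can ask GHCi for the result type.
add :: Add a b c => a -> b -> c
add _ _ = undefined

two :: Succ (Succ Zero)
two = undefined

three :: Succ (Succ (Succ Zero))
three = undefined

-- In GHCi, :t add two three reveals Succ (Succ (Succ (Succ (Succ Zero)))),
-- i.e. five, computed by the type checker.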

hand-wavingly like graphics apis' feature bits

makes me think of how opengl/directx have/had "feature bits" to offer a way to decouple the application from the graphics card. can't say it is the best idea...

OpenGL/DirectX

I don't find it a problem using feature detection in OpenGL. I prefer the pipeline approach where the driver simulates the missing features, so you just program everything the same, but these days nobody has incomplete implementations any more; it's all about extra features. A lot of extra features require an API change, and so you need some way of telling whether the driver supports the new API before you start to use it.

This is different from the memory issue: there won't be new/incompatible APIs, so there would be no need for feature bits.

Instead this would be like Haskell type classes where the type of some code depends on the memory allocations within it. This probably doesn't work, but it gives a flavour: imagine a monad that exposes all the allocations in a block of code as phantom types in the type signature. Now a type class (think about how the HList code can perform Peano arithmetic at compile time) operates on those allocations by changing the type of the code block. It could rearrange the order, coalesce, or group the allocations by changing the type. It could satisfy the memory requirements by removing the allocation from the type, or replace with a new requirement representing the free memory upper bounds required.

To support this the type system would need to provide row-types and type-indexed-types, so that orthogonal requirements could be managed independently by different libraries.
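A very rough Haskell sketch of that flavour, using a single type-level Nat as the "phantom allocation count" instead of full row types; every name below is invented for illustration and this is not a complete design:

{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}

import GHC.TypeLits

-- A block of code whose type records an upper bound on the list nodes it
-- allocates. The index is a phantom: nothing at runtime depends on it.
newtype Alloc (n :: Nat) a = Alloc a

pureA :: a -> Alloc 0 a
pureA = Alloc

-- Sequencing two blocks adds their allocation counts in the type,
-- which is the compile-time "arithmetic on allocations" described above.
bindA :: Alloc m a -> (a -> Alloc n b) -> Alloc (m + n) b
bindA (Alloc a) f = let Alloc b = f a in Alloc b

-- A primitive that allocates exactly one cons cell.
consA :: a -> [a] -> Alloc 1 [a]
consA x xs = Alloc (x : xs)

-- The checker computes the total: twoCells has type Alloc 2 [Int].
twoCells :: Alloc 2 [Int]
twoCells = consA 1 [] `bindA` consA 2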

Interesting.

Seems an interesting claim. Where is the bottleneck?

What is "natural" is irrelevant

People actually do think in terms of time and change, and symbol rebinding is just an extension of that.

Whether this is wrong or right is actually utterly irrelevant. You wouldn't argue that math shouldn't be applied to bridge construction on the grounds that so many people are bad at math, would you?

Instead, you would hope that people building bridges are sufficiently trained in math, because it's essential to 'bridge correctness'. People would never have been able to build bridges if they had stuck to just sculpting mud, even though that is a much more tangible activity.

never bring Python to a Coq fight

I agree with what you're saying.

We shouldn't conflate intuition or "natural" thinking with useful reasoning. It would be nice to design our systems such that intuitions and natural thinking lead us to correct conclusions and good results. But this can be difficult to achieve, and sometimes we must accept that challenging math is the easiest and most effective way (that we know of) to meet requirements.

What you think ideal is

What you think is ideal is irrelevant. It's what people choose and prefer to use that matters. Saying that it's a network effect or "good marketing" is just being an ostrich sticking its head in the sand.

Humans also choose superstitious

Humans also choose superstitious and religious explanations for many phenomena. Whether people favor a form of thinking seems a VERY different question and concern than whether said form of thinking is effective and useful for reasoning.

In the current thread, I've been mostly concerned with the latter question, and thus what people choose and prefer seems quite irrelevant (as Andreas mentioned).

You've described your own ideals before, and you're certainly on the 'code is for humans' end of the human-machine scale. You say: "What you think is ideal is irrelevant." The same is true of you. Your agenda is not served by refusing to acknowledge the weaknesses of untrained human thinking.

(To me, it seems very natural to utilize the growth and adaptability of the human.)

Pointing at network effects isn't about hiding one's head in the sand. It's just the opposite - acknowledging that the world isn't ideal, doesn't prioritize what I prioritize. It also allows me to recognize realities that it seems you'd prefer to ignore, e.g. "what people choose and prefer to use" is not merely a matter of what is 'natural' to them (biologically), but also of what is familiar. People cannot try new or old ways of thinking without exposure to them. It is difficult to distinguish intuition from familiarity.

We aren't vulcans yet

I'm under no illusions about why we program the way we do today, and there is no way my work will succeed unless I can include what people find usable (imperative effects) and remove what's harmful to them (sequential semantics for state update). And if I screw it up, then I'm wrong, not just unpopular and bad at marketing.

As long as we keep programming, language should fit the way we think, not the other way around. We understand change and are equipped to deal with it, more so than we are at dealing with static abstractions of change. It's the reason we designed computer hardware the way we did, and the reason we designed our languages likewise. Von Neumann wasn't a slick salesman who hated mathematicians. Registers and memory were just natural.

You might be right in 100,000 years (about the amount of time we've used spoken language, arguably what sets us apart from all other animals) but 2000 year old math right now is a tool that is in no way intuitive and not how we understand problems directly. It takes effort to avoid describing things with change. So much so that entire papers are written on how to solve simple problems (for us to talk and think about) in Haskell.

When the computers take over most programming tasks in 20 or so years, then they'll probably transcend inefficient flawed human reasoning.

As long as we keep

As long as we keep programming, language should fit the way we think, not the other way around.

That only seems justifiable if the results of matching programming languages to human thinking exceed the results from the contrary. I don't think the outcome is so obvious as you seem to imply, namely, that no language requiring training in non-human thinking can "outperform" all possible languages that fit the way humans think.

"Outperforms" can be an arbitrary metric, like bug count, not necessarily program performance.

Red herring

Hm, so let's see what humanity has achieved technologically in the last 2-3000 years vs the 100 000 before that. Considering that, what is your point?

PLs should fit the problem, as should our way of thinking about it. That requires training. Assumed notions of naturalness are just as much a dubious argument as in certain socio-political debates.

There are many ways to fit a

There are many ways to fit a problem. It makes sense to select a way that is more natural for humans. Perhaps natural isn't the right word for it, maybe 'easy' or 'closer to something we already know' is better. As a silly example, programming in Java is much more "natural" than programming in ROT13 Java, even though it's just a permutation of the alphabet.

On the other hand there are universals shared across cultures. For example: Bouba/kiki effect. So maybe the term 'natural' isn't so bad after all. That experiment suggests that if you asked a person who is not familiar with our alphabet to match D and T to their sounds, they would choose the correct answer most of the time. So maybe even ROT13 Java is truly less natural, rather than something we're simply not familiar with.

the part that fits the person

Bret Victor described a hammer simply: the head fits the problem, the handle fits the human. Our computational tools similarly need to meet us half way.

Of course, hammers are sometimes problematic and awkward for humans. One can miss, strike one's thumb, throw the hammer accidentally. It takes training or practice to use one effectively and safely (especially if you have a team of hammer wielding humans in close proximity)... and a hammer is one of the least sophisticated tools (ever).

Sean's philosophy holds that our PLs should fit how we naturally think - complete with cognitive biases, fallacies, and so on. The weaknesses of human cognition can make it difficult to fit the problem: making a machine behave in useful ways, typically as part of a larger system that exists outside one's delusions of control.

I consider my own position - meeting the machine (and larger system) half way - to be reasonably balanced. Rejecting Sean's extreme does not imply advocating the other extreme.

It is one thing to say "I

It is one thing to say "I disagree" and another to blame failure of an idea to take hold on irrational human behavior, bad marketing, or immovable critical mass (network effects).

Most people, programmers even, aren't mathematicians and won't ever become them; so it is quite reasonable for them to favor non-formal ways of doing things... symbol rebinding makes perfect sense in that regard (x was 0 here, now it's 12, OK, no problem!). Flawed human beings? Perhaps. But it is ridiculous to ignore them or brush off their biases as being unreasonable.

not ignoring bias

it is ridiculous to ignore them or brush off their biases as being unreasonable

You've got that half right. It's a bad idea to ignore or brush off human biases, prejudices, fallacies, etc. Where I disagree with you is how to address those concerns, e.g. to create an environment that helps humans recognize their errors in reasoning, and that helps them gain sound intuitions and understandings useful in avoiding errors. I don't see utility in hiding from humans the fact that machines don't think like humans.

I don't expect people to become mathematicians. You and I both have a goal of lowering the bar for programming, making it more widely accessible starting at younger ages. However, I do expect humans to learn and grow - to pick up machine- and system-oriented concepts such as composition, concurrency, confinement, extension, security, partitioning, resilience, latency, and so on. My impression, from your exclusive focus on 'natural' thinking, is that you expect users to stagnate.

Anyhow, pointing at network effects isn't about 'blame'. Just about recognizing forces. Consider: your own philosophy - that languages should match how humans think - is not realized by any mainstream PL. Most modern PLs force humans to think like a machine. Worse, they often have many irregularities and discontinuities that make this challenge more difficult than necessary. Are you blaming anyone for this? I doubt it. Why would you bother? But it can be useful to recognize the reasons - such as network effects, the historical requirement for performance, etc. - for use in a future adoption strategy.

And I maintain that symbol rebinding isn't more 'natural'; it's just unnatural in a way you're personally more comfortable with than immutable symbols. If you want natural notions of movement, consider modeling a character picking stuff up (cf. toontalk).

I once had the pleasure of

I once had the pleasure of attending a WGFP meeting with all the very smart Haskell people (SPJ and many others), and I realized that they thought very differently from me and probably most other people. They had "transcended" major limitations in human reasoning. Their vocabulary, their designs, what they saw as elegant, were all very foreign to me. I thought "this is what it is like to live on Vulcan, I guess."

But I also thought that this kind of enlightenment was unachievable for most of us; perhaps it is not even a desirable goal: our passions often lie elsewhere, and computer programming and formalisms are just tools to us, not a way of life. The computer should accommodate how we think, not push us towards evolving into something else. There isn't that much time (on a human-evolution scale, anyway) till the computers take over programming, so why bother adapting?

Rather, we should focus on leveraging our built in linguistic capabilities, which is why I'm big on objects, identity (naming, not f(x) = x), encapsulated state. And like it or not, humans are very proficient at describing how to do things step-by-step (cake recipes) and not very good at being declarative (here is a function from ingredients to cake). Finding the nice elegant declarative solution to many problems requires...being a Vulcan and pondering the problem for a long time (e.g. so use parser combinators rather than recursive descent), but often "worse is better," we just need to get things done.

So you are correct: I've decided that the way things are done isn't that far off from the ideal for most programmers. So rather than invent crazily different computational paradigms, perhaps I can sneak something in by changing their programming models a bit (e.g. via Glitch). Such solutions seem to get a lot of mileage (also consider React.js), the very essence of worse is better, I guess.

Both Machine and Human Prefer Mutability

How is immutability half way between a machine, which implements things using mutable memory and registers, and a human, who may find such things a more natural way of thinking? In this case both the head and the handle of the hammer prefer mutability.

misapplied intuitions; needs vs. wants

I've not suggested that humans are uncomfortable with mutation. Rather, I've said we're bad at reasoning about state... especially as we scale to having lots of stateful elements with a combinatorial number of states and conditions for achieving them. Therefore, we should avoid unnecessary use of state. Our comfort with state is mistaken. As a hypothesis for why, perhaps we have an intuition (from physical experience) that we can intervene when things progress in an unexpected direction, and this intuition is simply misapplied in context of automation.

In a more general sense, the whole premise of your argument is bad. Meeting humans half way isn't primarily about preferences.

Humans often want things that aren't so good for them in excess - e.g. french fries, beer. Meeting the human half way involves meeting their needs, not just indulging their preferences. Discouraging non-essential use of state by the program is part of that. Of course, there are contexts where humans could have all the state they want... e.g. manipulating a document or live program, where they really can intervene if things progress unexpectedly.

On the machine side, state can hinder optimizations and other features, but in general we can work with it. Avoiding state is more about meeting human needs than machine needs. I would suggest seeking alternatives to imperative state models, which do not generalize nicely for open systems, concurrency, networks, computational fabrics, etc..

We're not bad, it's hard

we're bad at reasoning about state

That's not even the core of the problem. It's not that "we" are bad at reasoning about state, it is that reasoning about state is inherently hard -- for humans as well as for machines. Any adequate model becomes an order of magnitude more complex once you introduce state. Some humans may perceive state as easy, but that's nothing but a fallacy. In reality, it's totally not.

I look forward to reading

I look forward to reading "How to Bake a Cake, Declaratively" by Andreas Rossberg. The book shows how baking is much easier explained when step by step instructions are avoided and everything is described via compositions instead.

You jest, but...

I'd love that format:

cake = bake 300deg (insert 12" round tin (mix [2 eggs, ...]))

It should of course be visually laid out. Text expressions are usually only preferable in the presence of variables and non-literal expressions. Each node could have a time estimate. To provide ordering hints the whole graph could be laid out so that time is the vertical axis.

Let me know when this book is available, Andreas.

postmix

It should of course be visually laid out.

Or at least, if presented using text it should use reverse Polish rather than Polish notation.

Still missing the point

Your underlying assumption that baking a cake is in the same problem class as creating software explains a lot. ;)

We were talking about how

We were talking about how humans naturally think about and solve problems. You made the argument that static reasoning was the norm for us, but everything we do is saturated in juicy mutable state.

No

I made the argument that reasoning is needed for any serious engineering, and that in computing, mutable state makes that substantially harder. It is a (dangerous) fiction that software engineering has any noteworthy resemblance to everyday life.

Harder is not an argument

A C/C++ programmer is paid to solve problems, not to avoid them.

Shared mutable state is hard. Hard but necessary. So what? Get an education, read wikipedia, buy a book, use a library, buy an OS.

I, too, heard the legends

I, too, heard the legends of this one mythical überprogrammer who was actually able to reason -- almost correctly -- about the memory model of the C++ code he was writing. At least during full moon.

Out of arguments?

Yeah well. If we're out of arguments and the gloves come off, I know a few of those too:

Abstraction exposes the mind of the narcissist simpleton.

You want it neat and simple. There is no problem with state, it's a convenient manner of programming. There is hardly a problem with mutable state, you can learn or buy solutions.

It lives in your head. 99% of programmers don't care. Or they Google a solution.

(Ah well. I could have read your comment positively too. Given the popularity of C/C++, the question never was why C is that bad, the question was why C is that good. And part of the answer is that it makes it possible to break every abstraction; sometimes by default. And academics have a hard time understanding that.)

Ah, now I see

And all my coworkers will be relieved to hear that our C++ code breaking left and right every day was a problem only in our heads, or that we have just been too incompetent to search for solutions. And gaping security holes are just an academic problem anyway, we should stop caring.

You're not going to solve it with applied philosophy either

So what. A thousand applied philosophy professors scoffing at underpaid and undereducated device driver engineers about programs they couldn't have written themselves aren't going to solve the problem either.

And part of the answer is

And part of the answer is that it makes it possible to break every abstraction; sometimes by default.

I'm very skeptical that breaking abstractions is the reason C is still used. Fine control over memory layout and allocation are the only reasons I still drop down to C, i.e. writing interpreters that use tagged pointers, or the default inline allocation of structs vs. the typical heap behaviour of structures in every safe language (Haskell can do this too, but it's not as simple as in C).

There's no other compelling reason to use C that I can see.

Breaking abstraction allows for amenable machine code

I nowadays assume that one of the reasons the Algol/Pascal/Ada languages didn't overtake C is that C exposes the internals of modules, therefore allowing programmers to code against an informal specification of the module implementation.

I'll never be able to prove that assumption, but it is what I believe. In the end even the Algol languages resulted in applications which died of too much abstraction, i.e., applications that were too slow.

skeptical

I'm skeptical of claims C's success stems from breaking abstractions or speed considerations. I figure it's got the same two basic advantages as most populist languages: it's got a simple lucid model and it allows programmers to "just do" what they want to do without a lot of red tape. BASIC and Lisp are/were like that too.

C is a populist language?

Are you kidding me? C is the only language which hardly needs to advertise.

Comparing C to BASIC and Lisp doesn't make much sense either. Why it lost to Ada on the OS front is more interesting, but that's an endless debate I am not that much interested in.

Lisp is an operational model. Not a language.

C is the only language which

C is the only language which hardly needs to advertise.

Not the only one; and yes, this is what I mean by a populist language.

Comparing C to BASIC and Lisp doesn't make much sense either.

On the contrary; these are languages capable (in context, which for BASIC includes the culture of a historical era) of attracting communities of fans who use them because they like them. Most (though not all) languages manage to attract some fans, but not like these. Turbo Pascal had its day, too.

Lisp is an operational model. Not a language.

As you wish. Though for the purpose I was addressing, operational model is the key feature of language. How narrowly that defines a language depends on the language; the magic for Turbo Pascal was narrower than the broad "language" Pascal, the magic for Lisp cuts a much broader swath.

Popular vs Populist

C deservedly takes the lead in the PL field, together with Java and C++. It is a popular language since it solves the problems in its wide-scoped problem domain rather well, without real contenders. It is therefore popular and doesn't need populist arguments.

Functional languages often fall in the populist category since they are partly supported by the thousands of straw man arguments sold amongst academics, partly to students, without anybody in the room knowledgeable enough to come up with a proper defense against those arguments.

Liking a language isn't an argument in industry ultimately. I like Haskell, well, its pure core, but I still feel I should kick one of my students' butts for deploying it in a small industrial setting. It's a waste of his customers' money.

Populism

Liking a language isn't an argument in industry ultimately.

Yup. Which is sort of what I was aiming for with the term "populist". The "popularity" of some languages is not driven by what we're told to use; it's a grass-roots thing.

Driven by necessity

Lisp is in constant decline. It is mostly sustained by its history, its existing library and development base, and the lack of competition in its niche.

A grass-roots argument doesn't take industry very seriously; you must be an academic.

Industry employs almost all students, even PhDs. Almost all intellectual capacity is there, and most people don't give a hoot about an academic argument after ten years.

Grass-roots... Yeah, some budget may go into the direction of an experiment but ultimately, it'll be whether you can sell a product on time.

Haskell in an industrial setting will probably deflate to slow, badly written, Bash-like code under most circumstances. Lisp's time is gone, apart from some data-processing applications. If either language makes more inroads, it's because of a lack of proper languages.

Google trends

populism and industry

A grass-root argument doesn't take industry very seriously, you must be an academic.

I did suggest C's success might stem from two things that make it what I called a "populist" language — a simple lucid model and freedom from red tape. I did say populist languages get their fan base from people liking them. I said BASIC, Lisp, and Turbo Pascal are other examples of such languages. Pretty sure we're all aware BASIC and Turbo Pascal had heydays now past; I don't think I remarked on Lisp's history. I haven't so far addressed the relationships, in general or in these particular cases, between populism/fan base and industry.

What do you figure I'm arguing, that you figure doesn't take industry very seriously?

You are using different dictionaries and metrics than I am

Populism according to Wikipedia

Populism is a political doctrine that appeals to the interests and conceptions (such as fears) of the general people, especially contrasting those interests with the interests of the elite.

Or according to dictionary.com

Pop·u·lism [pop-yuh-liz-uhm]
noun

  1. the political philosophy of the People's party.
  2. ( lowercase ) any of various, often antiestablishment or anti-intellectual political movements or philosophies that offer unorthodox solutions or policies and appeal to the common person rather than according with traditional party or partisan ideologies.
  3. ( lowercase ) grass-roots democracy; working-class activism; egalitarianism.
  4. ( lowercase ) representation or extolling of the common person, the working class, the underdog, etc.: populism in the arts.

I have already stated that C isn't populist, which is according to most people a derogatory term, but popular simply because it's the best language for a wide-scope solution domain according to a large number of metrics.

I did suggest C's success might stem from two things that make it what I called a "populist" language — a simple lucid model and freedom from red tape. I did say populist languages get their fan base from people liking them.

Populism? Fan base? Speed not important? Comparison to Basic and Lisp? Lucid model? Are you ******* kidding me? There are a thousand reasons to choose C without being a fan, which I am not.

Why in God's name do you think your phone works? Because of 'populism', the 'fan base' of the language, and because 'speed isn't important'? The maturity of Basic and Lisp? Do you think people put phones together as a hobby?

I am sorry but you live in a completely different universe from me.

There is no alternative to C, or Java/C++, for a large number of application domains and the academically pushed populist champion Haskell is by a large number of metrics one of the absolutely worst languages ever devised. Not to mention ML. Basic is a toy and Lisp probably shouldn't have been invented.

Actually (though the lexicon

Actually (though the lexicon stuff seems a distraction), my sense of populism is rather opposed to elitism, and is thus closer to the Wiktionary sense —

populism 1. (philosophy) A political doctrine or philosophy that proposes that the rights and powers of ordinary people are exploited by a privileged elite, and supports their struggle to overcome this.

I'd tend to describe Haskell as an elitist language rather than a populist one.

It seems you're misunderstanding me on a number of points, but I'm guessing there's a common cause to all of them, some underlying mismatch of perspective. I wouldn't say different metrics exactly, because the difference appears to go beyond merely what measure is being used, to what is being measured. As best I can tell, you're focused exclusively on what is successful in industry, and having chosen to ignore everything except industry you reach the conclusion that industry is paramount (a conclusion rendered trivial by the exclusive focus). I'm no less interested in industry, but I'm also interested in some other aspects of the situation whose interaction with industry may be enlightening. One can't be enlightened by meaningful interaction of something with industry unless one first admits that something exists that isn't industry.

My sense is that my attitude toward speed is probably pretty much consistent with the worse-is-better essay (which I do think had some valid points here and there, even though I feel it was also fundamentally flawed). Excessive slowness (whatever that means in context) seems like it ought to retard the popular success of a language, but I'm not so sure blinding speed is in itself a good predictor of success.

Opprobrious academic claims and simpleton arguments

Some underlying mismatch of perspective

Yeah, well. I've had it with many opprobrious academic claims regarding the state of industry. This is mostly about Haskell; on G+ I sometimes see a "why are we still stuck with buffer overruns and null pointers" posted by Haskell proponents. Well, I know why, and it looks idiotic if a proponent doesn't. And then when I look at Haskell I can only conclude it's just a bloody bad language, even in comparison to C.

As best I can tell, you're focused exclusively on what is successful in industry

I am not. I am interested in what works and why. You seem to reduce the argument to simpleton academic statements regarding some popularity contest which doesn't exist; another straw man. A language is evaluated by its merits according to metrics; the ultimate metric where all other metrics coalesce being whether you can make money with it.

blinding speed is in itself a good predictor of success

You don't seem to read very well. It depends on the application domain, but speed tends to creep in and affect most applications. You wouldn't want a machine with device drivers, or the OS, written in Haskell, Basic, or Lisp.

If speed is less important in an application domain then you might use it. You end up with applications like www.checkpad.de

Yeah well. As much as I like the effort, it is highly unlikely they are building a better product because of Haskell. I expect they are working around user interface, database, and network connection issues with cumbersome code, and the little piece where Haskell shines (abstract imaging code) doesn't even make up for the expense they pay for using the language. They'll be having a blast getting it to work despite the language, but from a software engineering POV the code will likely be a complete mess. (I could have the same conversation about Ocaml; as a language it is a lot better, but I doubt the operational model and, come on, a syntax which belongs in the 70's?)

What I didn't say

It seems you're ascribing positions to me that I haven't taken and aren't mine. Stuff you have a chip on your shoulder about, seemingly. Just for example: you go on and on about Haskell, but I've said next to nothing about Haskell, and what I have said has been negative. I've also said little about industry, really, and what I have said doesn't support your strong accusations of simplisticness.

Then use the right metrics

Then use the right metrics. And populism isn't one of them. And performance is usually a key issue.

This topic could afford a really interesting discussion...

It doesn't even make sense to talk about whether or not populism is an appropriate metric, since it's obviously not a metric. Nor have I taken any position, thus far, on whether or not populism is, in itself, a factor in commercial success of a language. As for performance being usually a key issue — there are lots of nuances to this point, but likely the first thing to keep in mind is that the significance, and even meaning, of "performance" depends on context. I recall someone remarking to me a while back, regarding efficiency of some javascript code, that if you're that worried about efficiency, and you're using javascript, you're doing something wrong.

LtU fails to load on iOS with MathJax

Well. You're forced to use Javascript on the Web. But yeah, read the title; that's the problem I have with LtU nowadays.

If you go straight to

If you go straight to /tracker, it doesn't hang.

Mathjax, javascript, Lisp...

First I've heard of this problem with mathjax. I don't myself own anything with iOS; does it not load the page at all, or does it semi-gracefully fail to format the mathjax?

Yeah, you're forced to use javascript... which says something interesting. (There's a deep attitudinal commonality between javascript and Lisp, which I suspect many in both communities wouldn't want to admit; but that's yet another tangent.)

A C/C++ programmer is paid

A C/C++ programmer is paid to solve problems, not to avoid them.

They also get paid to create problematic (e.g. buggy or unmaintainable) solutions, and yet again to solve those problems they just created, and eventually yet again to scrap a failing project and rewrite it. Programmers don't have much economic incentive to avoid creating problems.

This is hardly a unique circumstance in human society. Something similar could be said of industries that pollute the air and waters, damage the ecosystems, and exploit their laborers in the name of efficiency and progress. They're not paid to avoid creating problems. Market forces typically don't favor what is good for most of the people participating in them.

Shared mutable state is hard. Hard but necessary.

Some state is essential, necessary - i.e. when it is part of the problem domain. We can't implement a text editor without state, nor an IRC, nor an MMORPG.

But we can also use state where it is unnecessary, and some languages - like C and C++ - encourage (via path of least resistance) use of significantly more state than is necessary. Many projects suffer a slow, suffocating death by accidental complexity and bugs that can be traced to non-essential uses of state... especially together with concurrency, distribution, or other exacerbating factors.

Get an education, read wikipedia, buy a book, use a library, buy an OS.

Been there. Done that. Are you suggesting these behaviors significantly simplify reasoning about combinatorial states and eliminate state bugs in automated systems? If so, how?

There are probably around 30,000 synchronization points on my OS

And it works. Rather flawlessly.

It's an engineering problem, not a language problem. It doesn't even look to be a hard engineering problem, despite the academic myth. You use an idiom and solve it.

There are some reasons to use more abstract languages. But avoiding state isn't one of them.

You must have a short memory

You must have a short memory. The only reason the OS you're using today is as good as it is, but not flawless, is all the static analysis work that goes into it, both in-house (at Microsoft) and by external parties (Linux). The days of Windows 95, 98, and blue screens should not be so easily forgotten (nor Mac OS 9 and earlier, for the Apple fans).

Even Linux has plenty of examples of deadlocks, corruptions and security vulnerabilities in its history, and they continue to this day.

Idioms help, but there are no assurances that those idioms are being used correctly, and whenever an idiom changes, as it has in Linux with the more widespread adoption of read-copy-update, there are plenty of places for mismatched assumptions to creep in.

You live in the past

OS writers need to handle complexity and sometimes implement primitives themselves. No doubt people will even make the same mistakes over and over again.

Avoiding the management of, in my view necessary, complexity is not an argument to propose a non-solution. The whole problem of shared mutable state seems to have deflated to one for some engineers who do the heavy lifting. Again, it's not an argument to develop, or support, another language.

That is a very strong

That is a very strong statement; do you have any citations for it? Sure, Coverity and other static analysis tools have found bugs in Linux and Windows, but to claim this is the only or even the main reason is weird. Infamously self-promoting academics notwithstanding.

Win95 and Mac OS 9 and earlier didn't even leverage protected mode very well. Win NT and OS X were modern kernels. Also, kernels haven't changed THAT much since the 90s, so they reap the benefits of stability (and have even become low-value commodities to be shared and given away...).

Evidence: there is very little kernel research in the systems field today. Anybody wanting to write their own OS would be laughed out of the grant process. Why? Because, unlike the 90s, the OS kernel field is extremely mature and stable; people have moved on mostly to distributed computing.

I thought it was common

I thought it was common knowledge that Microsoft employs significant static analysis, including providing analysis tools to hardware vendors to verify their drivers. This began after the Windows ME debacle IIRC.

Win NT and OS X were modern kernels.

Debatable. There wasn't anything significant in either kernel that wasn't already 20 years old.

Also, kernels haven't changed THAT much since the 90s, so they reap the benefits of stability (and have even become low-value commodities to be shared and given away...).

The 90s saw the advent of ultra-fast, low-TCB microkernels like L3, L4 and EROS, and the invention of the exokernel, all of which ultimately culminated in hypervisors like Xen. I hope you agree hypervisors had quite a significant impact on industry, even going so far as to influence the design of the x86 instruction set with virtualization extensions.

But I agree that kernel commoditization has ancillary benefits for industry. And certainly old kernels are more stable given the exposure they've had. But as we all know, tests can prove the presence of bugs, not their absence. The L4 and EROS work has shown that these kernels have intrinsic vulnerabilities by design.

I'm not sure what my claims have to do with systems research or the grant process. My claim was only about scalability of manual composition of abstractions with no automated verification of resulting integrity.

Edit: it's also amusing that it took less than 5 years to show up Rob Pike's claim that systems research is irrelevant (hypervisors and virtualization adopted industry-wide). The classic OS structure running unmodified, arbitrarily unsafe code has been pretty thoroughly investigated, hence your skepticism, but there are many alternative OS designs that have yet to be fully explored. For instance, operating systems that can only run restricted types of code are still an open arena, of which MS's Singularity was one of the first examples, and Google's NaCL is perhaps the latest incarnation. Proof-carrying code will be another approach in this vein when it matures. OS safety and language safety are converging in many ways, so systems research can overlap PLT research.

Even if they do it isn't an argument to get rid of state

The machine loves state and so do programmers. Except for Clean - and even that relies on uniqueness typing - I don't know of any language which tries to get rid of it.

Again, the fact that you need to manage state, sometimes with a static analyzer, isn't a good reason to develop languages which annihilate it. (You can do it as an academic experiment, but in my view these experiments are mostly negative results.)

No one in this thread

No one in this thread suggested eliminating state.

Varied reasoning and moving goalposts

There was some varied reasoning about what to do with state.

The end result is that it's there (and that probably OO handles it best.)

Static analysis has helped,

Static analysis has helped, but it by no means is/was a silver bullet! Bug counts are easy to publicize, however, which magnifies their impact a bit, but there are plenty of other things happening at the same time that are equally influential, if not more so. The OS field just became kind of boring as kernels stabilized drastically, and people moved on to virtualization and/or distributed systems. My only point with the grants and stuff was to demonstrate how mature it all became.

I went to that talk, and I got into an argument with Rob Pike over Java afterwards (completely unrelated to this topic). Anyways, I think systems research as it was done did die, but systems researchers are pretty versatile at moving on to the next thing.

Check this out

Nice, though there are still

Nice, though there are still many imperatives in there.

It's hard to talk to your partner

Therefore you should dump him, or her.

Straw man argument

By your reasoning complexity analysis isn't math.

But I mostly find this a humbug discussion. Program variables were introduced because people needed to name memory locations of the machine; only bad programmers have a real problem with that abstraction.

Today I had a loaf of bread in my shopping bag; yesterday it was five oranges. You really have to be entirely stupid not to understand how that works.

Thanks for properly

Thanks for properly categorizing your straw man arguments.

Well. At least I present arguments.

People think about changing stateful things, mostly the people around them, all the time. It's where all the mental effort goes.

Stones are taken for granted.

So far your attempt to shoehorn programming into mathematical static notations has only resulted in arguments bordering on delusional.

your attempt to shoehorn

your attempt to shoehorn programming into mathematical static notations

To what are you referring? (I'm not aware of any such attempt.)

arguments bordering on delusional

Any argument for change, as though people would change their behavior based on arguments, is more than a little delusional. OTOH, pointing at what we've got today and claiming there are no significant problems would be similarly delusional. ;)

Well. It ain't delusional to ask for more money

your attempt to shoehorn programming into mathematical static notations

This was a referral to your comments on logics, math, and equational reasoning.

Any argument for change, as though people change based on arguments, is more than a little delusional. OTOH, pointing at what we've got today and claiming there are no significant problems would be similarly delusional. ;)

There are always significant problems. But at the same time, my iPad works fine, my old cellphone works, my TV is digital and never blacks out, Fedora runs okay on my computer and Google Chrome hasn't crashed in over a year. That while buildings and bridges collapse on a regular basis. (And also, snarkily, Ocaml runs out of stack space and Haskell doesn't perform.)

My philosophy professor once pointed out that the western world is in a "philosophical crisis." That translates to: I want more (research) money.

The quest for better abstract languages, or formal correctness, by pointing out problems is often, sometimes mostly, a proposal for research funding. That made sense to decision makers when Windows 3.1 was crashing all the time but makes less sense these days.

We need better languages but imperative is here to stay. And they'll probably come up with other narratives for research money since Dijkstra's punch card problem is gone, as is Windows 3.1.

This was a referral to your

This was a referral to your comments on logics, math, and equational reasoning.

Those are useful forms of reasoning, and we'd be wise to leverage them where we can. But I have not suggested forcing all subprograms into that mold. And there are other useful forms of reasoning (e.g. compositional).

my iPad works fine, my old cellphone works, my TV is digital and never blacks out, Fedora runs okay on my computer and Google Chrome hasn't crashed in over a year.

What does 'works fine' mean to you? Are you suggesting your hardware is working near its potential? Or that you have no complaints as a user? Or merely that you can eke some useful work out of it, and you've accepted the effort involved?

To me, it seems my hardware is trudging along, working around awkward artificial walls between applications. As a user, I have great difficulty moving data between applications and services and policies and alerts. The system is inefficient, inflexible, fragile, and has lots of jagged edges to cut oneself upon.

I would like to leverage multiple applications to achieve some ad-hoc purpose, then abstract the combination in a robust way for reuse, refinement, and sharing. But that ideal is very far from what we have today, far from what is feasible with modern mainstream PLs.

imperative is here to stay

Of course. The relevant question is whether it will be pervasive or marginalized, primary or secondary. Imperative has many weaknesses other than just being difficult to reason about.

The quest for better abstract languages, or formal correctness, by pointing out problems is often, sometimes mostly, a proposal for research funding.

Perhaps a few people are seeking funding and using PLs as the excuse. But it seems to me that money was not a significant motivation for developing projects like Agda, Coq, Idris, BOBJ, ATS, Elm, and so on. Is there a particular project you're thinking about?

Depends on your perspective

Everything industrial or academic is driven by money. They all go through the same route, from reviewed proposal to project to end-result to evaluation.

I have a former student whom I failed to tell that I should have kicked his butt, as an engineer, for not acknowledging the agendas around him and for using an academic language in an ill-suited industrial setting.

Money is often the fuel, but

Money is often the fuel, but is not always the destination or purpose (not even in industry). I acknowledge that progress is often constrained by money. But I don't believe money is the goal for most PL research, nor that pointing out problems is primarily a plea for funding.

The only money-oriented programming language I'm aware of is Java. :)

There's money involved everywhere

The only languages I'm aware of that aren't money-oriented are esoteric hobby languages like Brainf*ck. C# needs to make Microsoft money somewhere. GHC needs to be a success or the PhD funding dries up, the scientific programmers get reassigned, and salaries stop going up.

If you want a research grant you'll need to fight for it. If you want people to take your research, or your field, seriously in the long run, you'll need to convince policy makers it is a serious field.

There's always politics and money involved in keeping any language, or research field, alive. Hence you'll hardly see any truly acknowledged negative results or open academic discussions.

"The world has a philosophical crisis" and "We'll fix software engineering with mathematical languages" is the same sales-talk to me.

I don't adopt any academic view unless I understand the agenda behind it. I do support most research somewhat, but it messes too much with the students. They need to unlearn too much in industry.

supporting research

I do support most research somewhat

Now you've got me curious. After all this talk of money, how do you support research? Your usual brand of armchair cynicism? or actual cash?

I don't count but I would appreciate an open debate sometimes

Well. I do support most research morally, although my vote doesn't matter.

But what I do think is that there is a discrepancy between what people on LtU are being told from either teachers or rather vocal PL evangelists, the papers they read, and internal debates by those in charge.

People in academia can hardly be called stupid and they'll have their own private discussions we don't have access to. It's a shame really because I am a bit tired of reading between the lines.

Always be critical and never

Always be critical and never trust authority outright. If you can think on your own, then it doesn't matter what the evangelists say; you just have to learn to distinguish between "evidence" and "anecdote."

I was lucky to grow up in a pragmatic CS program that didn't spew much dogma. We learned functional programming, we learned pure OO programming (Smalltalk style), but no judgments were made in anything being "the way." It was just pure enrichment.

Human factors

I agree with you, but what is natural is quite important to human factors. A bridge that is physically sound but scares the hell out of people is not useful.

Abstracting from Gravity

Abstracting from gravity will only create bridges which crumble under the mass of their load.

scary bridges

Just the opposite. Scary bridges are scary, but still useful. Whereas unsound bridges that look nice are more deceptive and dangerous than useful. If you're the type to get this backwards, please do so on your own property and use plenty of warning signs to keep guests and trespassers safe.

An interesting phenomenon is human adaptability. What is 'scary' depends on one's experiences. One can become comfortable with a scary bridge, knowing that it is sound. Similar applies for human factors in other aspects. What is "natural" is significantly a function of what is "familiar".

I believe we should leverage human adaptability, have humans meet the machine half way - even if they're a little scared or uncomfortable at first. If we find some absolute limits on human adaptability, we can address those as we find them.

It was a metaphor

Don't take it too literally. Obviously I wasn't suggesting that unsound bridges are ok.

Hm

I was not suggesting to transform

var x = e1 in
x = e2(x);
x = e3(x);

into

let x1 = e1 in
let x2 = e2(x1) in
let x3 = e3(x2) in

which I agree is often painful (even though a monadic abstraction can help), but rather

let x = ref e1 in
x := e2(!x);
x := e3(!x);

(which can also be interpreted monadically using "Lightweight Monadic Programming" techniques, if that's your cup of tea, or typed in a reasonable effect-typing system.)
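
For concreteness, here is the same contrast rendered in Swift (the language this thread is about); the expressions e1, e2, e3 are placeholder functions I made up, so this is only a sketch of the two styles, not anyone's real code. The ref-cell version corresponds to an ordinary Swift var.

func e1() -> Int { 1 }              // placeholder expressions, invented for the sketch
func e2(_ x: Int) -> Int { x * 2 }
func e3(_ x: Int) -> Int { x + 3 }

// Single-assignment style: every intermediate result gets a fresh name.
let x1 = e1()
let x2 = e2(x1)
let x3 = e3(x2)

// Mutable-cell style: one var threaded through the same steps.
var x = e1()
x = e2(x)
x = e3(x)

print(x3, x)    // both print 5; only the way the steps are written differs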

I'm assuming the last

I'm assuming the last assignment is to x, not x3 right?

Loss prevention

Say I have a mutable variable X and use SSA to transform it into XA and XB. If I run a dependency tracker, I lose the fact that XA and XB were the same X before, and am now tracking separate XA and XB variables! That I was reading and writing just X is semantically useful information, and we've lost that by using immutable variables.

You haven't "lost that by using immutable variables", you've lost it by performing a lossy transformation. Nothing stops you from using immutable variables but retaining the information about the relationship between them.

What's the half-life of

What's the half-life of company sponsored languages versus independent languages? (C & C++ are, in this context, independent languages; they weren't designed to achieve developer lock in).

Apple seeks to raise development costs

Apple maintains itself as the monopoly retailer of applications on some of its devices, notably phones and pads.

As retail businesses go, the app business has some weird properties:

1. It operates on consignment. Apple acquires its inventory of apps at roughly no cost.

2. From Apple's perspective, there is no difference among apps other than price. For the app store, all apps at a given price comprise a uniform commodity of undifferentiated units. Apple makes just as much money no matter which one you buy.

3. Apple's cost per sale is approximately invariant regardless of application price. For example, Apple does not have to pay salesmen to spend a lot of time closing sales on high-end apps. It costs Apple just as much to sell an app for $1.50 as it costs to sell an app for $15.00.

4. Apple's goal is to try to maximize its total commission on app sales and to minimize its total cost of sales. This can contradict the goals of app developers. You can see this by asking how much $1M in app sales earns Apple. The answer depends in part on the price of the app. $1M in sales for a $5 app is more profitable for Apple than $1M in sales for a $0.75 app. In fact, $1M sales of a $5 app may be worth more to Apple than, say, $1.1M in sales for a $0.75 app.

Thus, Apple can in some situations have incentive to reduce the overall volume of sales -- against the interest of developers taken as a group -- if that reduction in sales volume nets Apple higher profit.
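
A back-of-the-envelope Swift sketch of that arithmetic: the 30% commission is Apple's published rate, but the fixed $0.10 handling cost per sale is a number I assumed purely for illustration.

// Net to Apple from a given gross revenue at a given app price, assuming a
// 30% commission and an assumed fixed per-sale handling cost of $0.10.
func netToApple(gross: Double, appPrice: Double, perSaleCost: Double = 0.10) -> Double {
    let sales = gross / appPrice
    return gross * 0.30 - sales * perSaleCost
}

print(netToApple(gross: 1_000_000, appPrice: 5.00))   // ~$280,000
print(netToApple(gross: 1_000_000, appPrice: 0.75))   // ~$166,700
print(netToApple(gross: 1_100_000, appPrice: 0.75))   // ~$183,300

On those assumed numbers, $1M of sales of a $5 app nets Apple more than $1.1M of sales of a $0.75 app, which is the effect described in point 4 above.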

You might ask: Why wouldn't Apple sell both? $1M from the $5 app and $1M from the $0.75 app? Apple, in fact, does this to some extent. Doesn't it always work this way?

I'd argue no: Apple needs to actively limit the sales of low-end apps.

Here is the problem for Apple: Suppose that the marginal cost of porting a multi-device application to Apple products was reliably close to $0. Then Apple would be (even more) flooded with very low priced apps from developers just seizing, catch-as-catch-can, any trickle of revenue they can. The rate of Apple's return on expenditures for the app store would plummet.

So then what can we glean from this about Apple's insistence on non-standard programming environments and programming languages?

Apple's idiosyncratic platforms raise the cost of developing for its platforms somewhat artificially. Apple's charges for developer kits, non-standard APIs, and non-standard languages all raise the cost barrier to entry for new apps.

Consequently, Apple's approach has the net effect of raising the average price of apps in its app store.

As we've seen: up to a point, raising the average price of apps can boost Apple's profit even if that means holding back the app market from its full potential size.

Rounding errors

The cost to Apple for distributing an app is effectively a rounding error. Likewise, for businesses, the cost of the SDK / developer program is also a rounding error.

Occam's razor applies here: Companies want to sell their apps on iOS because that's where the most money is to be made. Developers dislike Objective-C, for well understood reasons. Apple themselves write a large amount of software in Objective-C. Google and Microsoft have more productive languages to work with. Apple wants added productivity for themselves and their developers. No existing language meets their needs for platform interop and performance.

Designing a new language was a no brainer on those grounds alone. Frankly, I'm surprised that it took this long, although there's evidence that this has been the plan for years (adding blocks to ObjC runtime, etc).

dispute: "The cost to Apple for distributing an app ..."

I dispute this claim:

The cost to Apple for distributing an app is effectively a rounding error.

The global payments system imposes non-trivial transaction costs on app distribution.

Likewise, for businesses, the cost of the SDK / developer program is also a rounding error.

What do you think the average "company valuation" is of entities selling apps? I think it is quite low. Eliminating developer fees would lower that average valuation even further. The cost of developer fees (and other Apple-specific costs) is expensive to many app developers.

I also question this claim:

Apple wants added productivity for themselves and their developers.

Really? Is Apple experiencing some app supply problem I haven't heard about? Every indication I know says they are limited by the demand side.

Further, productivity for developers does nothing for Apple's profit but harm it. Again, because Apple is the retailer here, not the producer of apps.

In this conclusion:

Designing a new language was a no brainer on those grounds alone.

You have had to assume that Apple is acting against its own interests to reach this supposed "no brainer".

p.s.: Back in the 1980s, competing personal computer platforms really did "want higher productivity for their developers". Back then there was a supply problem with applications and, back then, the platform vendors were not monopoly app retailers.

I still think your analysis

I still think your analysis is way off. The cost for distributing apps looks to me to be a second order effect. The first order effects are total revenue per iDevice owner and number of iDevices sold. Driving the cost of apps down helps with the latter, for sure. I'm not sure what the optimum price/app would be for the former, but I expect that's a more important consideration than overhead on app sales.

Note: it would be very easy for Apple to add a fixed cost per app (e.g. 10 cents + 30% instead of just 30%). But the main thing they could do to raise average app price is to change their discoverability formulas. Right now the emphasis is on popularity defined by number of sales. The cheapest apps make it to the top of those lists. The fact that they aren't trying to fix either of these things says to me that your analysis is way off.

The language is there to help boost productivity which helps create lock in. It's just about making iPhone apps easier to build than Android apps.

app sale transaction costs

You are mistaken about transaction costs.

A random article on ZDNET offers a breakdown of app sale transaction costs as estimated by Piper Jaffray.

Ca. 2010 or 2011 the average app sold for $1.44.

A sale of $1.55 broke down to:

$1.09 to the developer.
$0.23 financial transaction fees.
$0.02 internal cost of sales ("processing")
$0.21 Apple's margin

Thus, you can see that financial transaction costs are not only significant but amount to roughly a 21% mark-up over wholesale ($0.23 on $1.09) and comprise half of the total mark-up over wholesale. (Note that as a percentage of a sale, financial transaction fees are smaller for higher sale prices.)

http://www.zdnet.com/blog/btl/apples-app-store-economics-average-paid-app-sale-price-goes-for-1-44/52154

And again, about this:

The language is there to help boost productivity which helps create lock in. It's just about making iPhone apps easier to build than Android apps.

What problem that Apple actually has in the real world would be solved by this "productivity boost" and "lock in"?

Thanks for the break down on

Thanks for the break down on app price. That 23 cents is considerably higher than I was imagining.

What problem that Apple actually has in the real world would be solved by this "productivity boost" and "lock in"?

The main problem they have is maintaining market share vs. their main competitor, Android. If they can make it easier for developers to put out high quality apps on their platform, that will be a meaningful advantage. Even if it's just the difference between developers targeting iPhone first and letting Android be a port some number of months, that's huge.

re thanks and "vs. android"

You're welcome and thanks for the stimulating conversation. That said, sorry but...

I dispute this (specifically the part to which I added emphasis):

The main problem they have is maintaining market share vs. their main competitor, Android. If they can make it easier for developers to put out high quality apps on their platform, that will be a meaningful advantage. Even if it's just the difference between developers targeting iPhone first and letting Android be a port some number of months, that's huge.

I don't think comparative availability of apps much drives sales.

I think that probably the qualities that get advertised are what drive sales.

So android device sales are driven by price, large screens, nicer telecom plans, and by Not Being Apple.

And apple device sales are driven because apple products will make you young and pretty, energetic, rich, and distracted from the gritty reality of the near apocalypse in which you lead your wretched life.

Sometimes apple ads show off obscure apps like the current ads featuring a Pixies song and a composer of orchestral music. The suggestion of these ads doesn't seem to be "Only on an Apple do you find such apps." Rather, it seems to be "an iphone will make you a much more interesting person than that loser you are today compared to these attractive people on your tv screen".

Yeah, I don't find that

Yeah, I don't find that point (that I made) particularly convincing either. Things are changing quickly. Five years ago, I think quality of app ecosystem was a big driver of preferring the iPhone, but it's not as much today.

I still think Apple has reason to want their development environment to be attractive. Maybe it's not about competition with Android, but they still increase total revenue by increasing the number of useful apps available. I do not think it's the case that people are all going to buy a fixed number of dollars of apps, with relative app quality just deciding which ones, as your original analysis implied. For example, I think there are many people who are buying all of the high quality games available in certain genres and are willing to buy more. So I think there is some unmet demand. And then you have niche apps and small business apps where they're only going to write them for a single type of tablet.

And I also think you're wrong that Apple wants app prices higher for the simple reason that it would be very easy for them to drive the prices higher, but they aren't. Maybe it just boils down to total revenue (per customer, not per app) being higher at lower app price points. Because even if they did want to raise app price, the idea that they would add deliberate impediments to their app development system to achieve it seems crazy to me. Why not try to capture some of that value instead?

optimizing app prices

it would be very easy for them to drive the prices higher, but they aren't.

My claim is that the optimal price for apps (for the sake of Apple's profit) is higher than what the price would be if it was a standard platform with no "tax" on developers. That is not the same thing as saying Apple wants to make the prices as high as can be.

Because even if they did want to raise app price, the idea that they would add deliberate impediments to their app development system to achieve it seems crazy to me. Why not try to capture some of that value instead?

Up until sometime in the 20th century it was common practice for U.S. farmers to burn crops to prevent deflationary over-production. Today they have more sophisticated arrangements to accomplish the same net effect of keeping food production below where it could be, in order to keep the price high enough to make it worth growing and selling at all.

Commodities are weird like that and "productivity enhancements" make the problem of overproduction worse.

(The problem of overproduction is, incidentally, why certain heterodox economists argue that technological increases in productivity must eventually -- or may even already have -- bring about an end to an economy based on the capitalist exchange of commodities. A symptom, it could be argued, is Apple's need to monopolize app retail.)

I agree

They sell high-end hardware and also need a high-end app store with the most and best applications that run on that hardware.

It looks more like a "we-cannot-sit-still-because-competition-might-suddenly-leap-ahead" strategy language.

If it turns out to be a competitive solution then they'll either port it to other platforms or someone else will port it (as in: write another compiler) for them.

Good. So the point is not

Good. So the point is not developer lock in, more like the opposite: make life easier for developers (though I assume that can also lead to lock in...) in order to sell more hardware. This makes sense. I suppose the C# story could be told this way too.

This means that you need to produce a superior language. Takes some balls to try to do this in this day and age.

re superior language

There is the first problem with this "make life easier for developers" hypothesis:

This means that you need to produce a superior language.

They have very clearly not tried to do so. Swift does not significantly depart from or add to existing comparable languages. They make no claim otherwise.

It's strange watching other people try to analyze the app market as anything but the low-value commodity market it is.

That's the catch, surely:

That's the catch, surely: Why produce a new language? What's the point?

Superior is the Sweet Spot between the Abstract and the Concrete

I think I could explain that but those who have tried to follow my incoherent musings will understand enough that I'll settle for the title.

There are lots of superiors, or local maxima, in the design space.

I didn't mean to imply

I didn't mean to imply a global maximum. Thinking you can achieve a local maximum is hubris enough.

Which is why we need an ecosystem

I find it entirely unlikely that anyone can a priori engineer a language to fit niches, or broad markets, in the programming field.

Darwinian selection justifiably handles the selection criteria best.

Why produce a new language

Why produce a new language? What's the point?

Me, personally? The one I'm working on exists because I have found a programming style that I quite like but that fits awkwardly at best into any algol-type or functional language.

Apple? Another message in this thread linked to the language inventor describing Swift as initially a skunkworks project and then a project of a larger developer tools group.

AFAICT a developer tools group at Apple serves two masters:

1. The people who manage the business as a buyer and seller of commodities and who have a deep understanding of that perspective. These people are concerned with issues like the spectrum of prices of apps sold; the spectrum of prices offered in the app store; and so forth.

2. The marketing people who manage a "brand" that is presented to developers. These people are concerned with maintaining excitement, the appearance of Progress, etc.

So I think Swift started with a guy who was playing around for personal reasons. Swift is consistent with the commodity strategies. Swift is OK as a marketing tool.

Kind of a boring explanation but it's kind of a boring language, if you ask me. :-)

In Apple's case...

It means some political faction within Apple has found it advantageous (to themselves) to create Swift. Just as there was (maybe still is) a political faction that's kept Objective-C alive all this time (it has no redeeming qualities to anybody else) as they built their careers upon it. This is how it works in big software corps. Internal organizational self-interest. This may or may not correspond to larger company goals, customer desires, developer productivity, etc. As long as the parties involved manage to hang on to their piece of the internal pie, this is what you'll get. Often this appears from the outside as entrenched idiocy, but from the inside, from a power-struggle point of view, it makes perfect sense.

superior to what?

It's clearly superior to Objective-C and JavaScript, at least on par with Java, and in the same ballpark as C#.

They very clearly tried to create a language that plays nice with the ObjC runtime and wasn't too alien for their existing developers. Seems like they nailed those goals, to me.

Can you explain why playing

Can you explain why playing nice with the ObjC runtime requires a new language rather than an implementation of an existing one? I know why this can be the case; I am after specifics.

Compare JVM CL ports to Clojure

Ignoring the unique parts of Clojure, the hosted design is fundamental to the language. You can't implement Common Lisp or Scheme without concerning yourself with the representation of primitives such as strings or numerics. A language may depend on a standard library that depends on features (such as continuations) that the platform simply can't provide.

Objective-C's runtime is sufficiently different from most language runtimes to justify a new language. And yes, Apple wants to control it. No nefarious purpose necessary: They want to be able to evolve it at their own pace and without undue influence from standards bodies or existing stakeholders.
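
For what it's worth, this "hosted" flavor is visible in Swift itself: its standard types are bridged to the Objective-C runtime's classes instead of carrying incompatible representations of their own. A small sketch (it assumes Foundation is available, i.e. an Apple toolchain):

import Foundation

// Swift value types bridge to the Objective-C runtime's classes, so both sides
// agree on the representation of primitives such as strings and arrays.
let greeting = "hello"
let cocoaString = greeting as NSString   // bridged to the ObjC NSString class
print(cocoaString.length)                // calling an ObjC API from Swift: 5

let numbers = [1, 2, 3] as NSArray       // Swift Array bridged to NSArray
print(numbers.count)                     // 3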

C# and OLE/COM

Isn't this the same reason for C#? In C/C++ the OLE/COM bindings for Windows are complex boilerplate. The original motivation for C# was to hide the boilerplate. This is all related to runtime dynamic object binding. Each language on the left hides the boilerplate needed in C/C++ to interact with the dynamic-object runtime-plugin system on the right.

C# = OLE/COM
Java = Beans
Objective-C = Cocoa/Objective-C
Swift = Cocoa/Objective-C
JavaScript = XUL/XPCOM (using Mozilla's framework)

I can't think of any language that embeds CORBA.

I'm pretty sure that playing

I'm pretty sure that playing nice with the ObjC runtime DOESN'T require a new language. There are many languages that work reasonably well (I'm told) with the ObjC runtime. Particularly notable is Pragmatic Smalltalk (part of EtoileOS), which links a prototype-based Smalltalk to GNUStep. The reason I mention this example is that it's being written by a group that seems rather perfectionist, in the good sense - they wouldn't be doing the project if it wasn't working well. (It's very incomplete, and seems to be more or less stagnant at the moment, but I think that's because they're hobbyists with other jobs, not because of any technical problem.)

Requires a dynamic language

Playing nice with the ObjC runtime (without masses of boilerplate code) requires a dynamic language with a specific runtime for that platform's dynamic object system. This mainly seems to mean scripting languages like JavaScript and Python.

Presumably Apple wanted a 'C' like language of the Java/C# type, that played well with ObjC runtime. They could have used Java, but presumably did not like being controlled by Oracle's license, or C# but felt the same way about Microsoft.

Swift doesn't have to be better...

...than Java or C# or JS or the Google language of the month, or C++ for that matter.

It just has to be better than Objective-C, not a high hurdle to leap.

Needs to beat C/C++ somewhere though

I've been playing a silly but nice game called Arcane Legends from time to time. It runs seamlessly on iOS, Android, PC and Mac and even in your (Google Chrome) browser. Apart from the game itself, that's a technological marvel.

I have tried to find out how they achieved that but the Internet is low on technical detail. So guesstimating I assume they have a low level OpenGL ES API with C++ on top of that with thin bindings to the several platforms.

Apple might manage to lock in lots of iOS programmers but the smarter ones will go down the C++/OpenGL route and they will simply make more money.

Apple can go two routes: make it an open language so that more app developers make a living, or make it an iOS-locked tool. Since they're primarily a hardware company I guess they'll choose the latter.

It's interesting

Of the three mobile platforms, the one with the crappiest tools (that would be Apple) is generally regarded to produce the best product and best user experience--and the one with the best developer tools (arguably Microsoft) produces phones that nobody wants to buy.

(Android is somewhere in the middle in both counts--I'd rather use Java than ObjC, but C# is a nicer apps language than Java is).

Obviously, much of the above is opinion and speculation--but it is interesting food for thought.

Cross platform mobile

I have worked on cross platform mobile projects that did exactly that. I wrote the OpenGL ES and OpenGL rendering engines. As Android and iOS are both *nix based, network and file IO is fairly easy from the low level too. We wrote a platform specific library to wrap stuff that was different, but all graphics was in OpenGL, and everything else was cross platform C++. It worked on desktop Windows/Linux/Mac too.
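
The shape of that wrapper library is easy to sketch. The posters above describe doing it in C++; the same idea in Swift, with every name invented for illustration, looks roughly like this: everything platform specific hides behind one small protocol and the shared engine depends only on that protocol.

// Made-up sketch of a "thin platform wrapper".
protocol PlatformServices {
    func documentsPath() -> String
    func showAlert(_ message: String)
}

// Shared engine code depends only on the protocol, never on iOS/Android APIs.
final class GameEngine {
    private let platform: PlatformServices
    init(platform: PlatformServices) { self.platform = platform }

    func saveProgress(_ data: String) {
        // shared logic; only the path lookup is platform specific
        let path = platform.documentsPath() + "/save.dat"
        print("writing \(data.utf8.count) bytes to \(path)")
    }
}

// Each platform supplies its own small implementation of the protocol.
struct DesktopPlatform: PlatformServices {
    func documentsPath() -> String { "/tmp" }
    func showAlert(_ message: String) { print("ALERT: \(message)") }
}

GameEngine(platform: DesktopPlatform()).saveProgress("hello")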

OpenGL

Didn't see this comment. Apple has started to place a library on top of OpenGL with GLKit. Right now GLKit just abstracts OpenGL and tweaks it towards Apple standards. I wouldn't be shocked if the next step is reversing this, so that the drivers natively support much less in hardware, target GLKit directly, and full OpenGL renders using GLKit.

A Mistake

Since Jobs has gone, Apple really seems to be losing its direction. Instead of focusing on great products and usability, they seem to be shifting to locking developers into their platform. This will be bad for users. Look at how well it worked for Microsoft. They may do great for a while, they may even dominate the phone platform, like Microsoft dominated the desktop and IBM the mainframe before them, but eventually the next innovative wave of technology will come along, and they will be so busy defending their fortress that they will completely miss it until it's too late.

Open source is our only hope. Actually, open standards are enough really. Mandate open standards for all public contracts; I think this view is gaining some traction in Europe at least.

Microsoft

Look at how well it worked for Microsoft.

Over an entire generation they dominated the desktop environment. They have a desktop userbase about 6x the size of everyone else's combined. They are the largest software company in the world with growing profits. It worked out extremely well for Microsoft regardless of anything that happens next.

but eventually the next innovative wave of technology will come along, and they will be so busy defending their fortress they will completely miss it until its too late.

OK. So what? At 7%, a dollar earned 40 years from now is worth $.06 today; at 12% it is worth $.01 today. Moreover, being open doesn't assure success.

Mandate open standards for all public contracts, I think this view is gaining some traction in europe at least.

Companies like Apple and Microsoft can very easily meet formal criteria for open standards while not being open in any true meaningful sense. Swift is going to be a great example of this: Swift is going to be an open standard, just one that is for all practical purposes closed.

Governments can of course write the standards themselves or appoint quasi-governmental institutions to do so. But then they don't get broad industry buy-in. Standards without much corporate involvement are often very popular with public interest groups and governments everywhere on many topics. In practice it doesn't work well. Government contracting is about satisfying many conflicting interests. Governments can easily cultivate specialized application vendors who satisfy narrow public niches. But those vendors aren't tied to the commercial market, and they often end up being high cost, low quality alternatives whose bids are mostly controlled by their political connections. That is to say, you get Open Standards, but they travel hand in hand with government services that are more expensive to administer than their private alternatives and with massive corruption of public figures. That leads to a privatization push.

Governments reflect the society they operate in. Unless there is broad societal support for Open Standards, governments aren't likely to want to push hard enough to create them. Apple is successful because people want the advantages of tight vertical integration and don't particularly care about open standards. I get that you disagree, but that is irrelevant: you are asking the government to act against the public interest by circumventing choices that consumers are knowingly making.

I have tremendous doubts that Europe will be successful in pushing Apple towards open standards any more than they have been with Microsoft.

Disappointing

It's disappointing, but you may well be right. On the other hand, Google's attempt to commoditise the phone platform, where you can build your own phone from components, may be successful. I can see phone shops liking the ability to differentiate for their customers by building phones with different custom specs, which will of course then run Android. In a way Google is positioning itself to be the Microsoft of the mobile phone (supplying software but not owning the platform). Apple is repeating the IBM play of trying to maintain tight control over everything. Even Apple's attempt to get people to buy into the 'Apple' culture to develop on the platform is reminiscent of the Priesthood from the mainframe days. After all, Microsoft outsold Apple on the desktop despite Apple's tight vertical integration - so people can't value it that much - but there are probably too many factors at play to draw meaningful conclusions.

Disappointing

ART is going to be a fine platform for low service applications, but interesting applications on Android are going to be tightly tied to an entire ecosystem of webservices (Google Play) with Google sitting in the center. Google isn't in the OS or hardware business. They can't push through changes to their operating system, so they are creating a two tier operating system. The bottom tier is a hardware-diverse operating system, likely to lag, aimed more and more at emerging markets; the top tier is a service operating system vertically integrated with their webservices. So that's not going to help your problem any. With Android, in a developed market, you are going to have the same sort of thing as Swift. Instead of having to integrate with Cocoa APIs you will have to integrate with Google's APIs.

After all Microsoft outsold Apple on the desktop despite Apple's tight vertical integration - so people can't value it that much

I'm not sure you can look at the market that way. At the price points Apple's laptops compete in (essentially $1k plus) Apple has 85-91% marketshare. There is just a huge market for lower priced computers into which Apple doesn't sell at all. The average Apple laptop sells for 2.7x what the average Windows computer sells for. JavaVM handsets are going to outsell iOS handsets this year, but that doesn't mean that people, price aside, prefer JavaVM to iOS.

As for the original dominance it happened almost immediately. By 1986 IBM/Microsoft had over 80% share. Apple in its history has briefly broken 10% share. 1986 is a good year for discussion because '87 is when IBM releases MCA and Microsoft starts pushing the Microsoft/Intel/Western Digital standard, making x86 not IBM the platform. In 1986 there isn't really any difference in integration between Apple and IBM. From 1987 forward there is. So certainly Apple lost but they didn't lose as a result of open vs. closed.

Mostly Makes Sense

That mostly makes sense. However it could go a different way. Developers will develop Android apps because it is the phone with the largest market share. This could eventually dominate, and push into the higher end. Betamax lost to VHS despite a superior product because VHS had better availability of products that viewers wanted to watch. Time and time again the mass market offers such economies of scale that they out develop the high end players. For example expensive low volume CPUs used to dominate scientific computing, but the momentum behind x86 drove all the speciality chips out of business. Unix machines now use x86 instead of Alpha, Mips, PA-Risc etc.

Android long term

No question that's a possible threat. Of course Apple has been mostly successfully dealing with cheaper, lower quality, more heavily purchased products for over a generation. Android's spread could be much worse than simply phones. As Google moves down market into all sorts of embedded devices ( http://en.wikipedia.org/wiki/Internet_of_Things ) their marketshare over the next 20 years might not be 5-6x Apple's but more like 50x Apple's share.

But in the end a few hundred million of the most financially influential people in the world can easily keep a product afloat. It certainly could be an interesting race: can Android phones get cheap enough that quality doesn't matter faster than iOS's advantages and dominance at high price points allow it to establish an entrenched, deep ecosystem?

But regardless I don't see how your betting on Google to win the race matters for the discussion about open standards. Apple's objective is to establish that entrenchment not to fight it. Apple has no interest in fighting it out with Asian manufacturers to see who can put generic parts in a box for less money.

Developer Lockin is the focus

I think what you are seeing is the marketing group seeing what is happening in other environments (such as Android and Google language development) and saying that we need to be on the same band wagon.

I don't know how much experience you may have had with marketing groups but all my experience with them is that they think quite differently to normal folk and have no problem with jumping on someone else's band wagon if they think it will give them a boost in the market. They like to keep what is theirs and get more of what belongs to the competition.

Technical superiority is not really their forte, whereas smoke and mirrors are. As one director of a company said to me years ago, it's not about having a superior product, it's about appearing to have a superior product (however that superiority is expressed).

Once you have a customer locked in, it can be quite difficult for them to change to someone else. The customers here being the developers.

If this move is successful for them in gleaning a high developer conversion, when someone then tries to migrate the language to a new environment not controlled by Apple, I would then expect to see the legal teams come out to play.

Apple is not just about hardware, it is about an entire environment, hardware, software and wetware.

irrelevant amounts of money

At Apple's volume, their per transaction costs are significantly lower than you or I could get from some middleman transaction processor. Stripe offers 2.9% + 30 cents per transaction. Meanwhile, Apple takes 30 cents per $1 iTunes song sale. Surely they aren't making zero profit per song...

To a student or one-man shop, a $100 developer fee may be prohibitive, but their apps are effectively irrelevant. If you're not making enough money to recoup the $100 fee, then you're not a business worth caring about. This was true when I watched the Xbox Indie Games store from inside Microsoft. The purpose of the $100 fee was primarily a spam filter, not a significant source of revenue.

http://www.apple.com/about/job-creation/ shows that they have 275,000 registered iOS developers. Apple generates over $1B in revenue per week, so they earn that much money in about 6 hours.

So I still contend that, to Apple, both per-transaction costs and developer fees are as close to rounding errors as you can get.

Now, for developers. An unscientific "I'm feeling lucky" search returns http://www.indeed.com/salary/q-Ios-Developer-l-New-York,-NY.html which shows $130k/yr salary for iOS developers in New York. Therefore, the cost of the developer fee is dwarfed by two man-hours. To say nothing of additional engineers, designers, product management, marketing, etc. I won't mention the cost of hardware, since every one of these developers is going to have a company issued MacBook Pro and an iPhone whether they are writing iOS apps or server-side Linux software.

Is Apple experiencing some app supply problem I haven't heard about?

Straw man. I know for a fact that critical apps, such as Facebook's, require more time and money to develop for iOS than for Android. If no other apps existed at all, Apple would prefer Facebook have new functionality be best and first on iOS.

"make it up in volume"

You sound like the old joke about the car salesman who says he loses money on every sale but he makes it up in volume.

At Apple's volume, their per transaction costs are significantly lower than you or I could get from some middleman transaction processor.

That's somewhat true. Did I say otherwise somewhere? Show me where. Even so, payment processors have no reason to be generous to Apple. They will take the largest share they can take.

Meanwhile, Apple takes 30 cents per $1 iTunes song sale. Surely they aren't making zero profit per song...

Did you look at the figures? The best bet is that Apple has a gross profit of about $0.15 out of those 30 cents (perhaps less).

You have made my point exactly here when you say:

To a student or one-man shop, a $100 developer fee may be prohibitive, but their apps are effectively irrelevant. If you're not making enough money to recoup the $100 fee, then you're not a business worth caring about.

And that is one of the main functions of the $100 fee: to discourage you from flooding them with low price apps to just catch-profit-as-catch-can.

The purpose of the $100 fee was primarily a spam filter, not a significant source of revenue.

Please show me where I claimed that developer fees were a significant source of revenue.

So I still contend that, to Apple, both per-transaction costs and developer fees are as close to rounding errors as you can get.

Yes, but you've only gotten there by ignoring or misunderstanding the reasonably well sourced financial data before your eyes.

App store revenue is almost irrelevant

Your analysis is interesting; unfortunately, I think it is ultimately irrelevant.

People buy iPhones and iPads for the apps, and most of Apple's profit comes from iPhones and iPads. Using numbers for 2013, it looks like Apple had profits of ~$37B, App store revenue of ~$10B, and App store profit share (at 30%) of ~$3B.

As the app store is important for iPhone/iPad sales, I'm sure that Apple would run the App store at a loss if they had to. I do not think they'll manipulate what apps are available against what they think is overall good for the user (including branding) just to receive a small amount of extra profit from app sales. They may be working to encourage more expensive apps, to make sure that developers have time to make good apps (and to try to force differentiation), but not just for their own profit.

app store profit margin is vital

According to you (and I believe you), 8% of Apple's total profit comes from the app store. That's not even close to "irrelevant" to shareholders.

And look here. Again, according to the figures you gave:

Apple's app store profit margin is 30% on sales of $10B.

Let's suppose that Apple found a way to raise its profit margin on apps to 34% instead of 30%.

That would be $400M in extra profit for the company. The company's overall profit would rise by 1%.

Got that? Profit from apps is about 8% of the business. If the profit margin on apps rises or falls by X%, the company's overall profit rises or falls by roughly (X/3)%. This is a big deal.
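
Spelling that arithmetic out with the rough 2013 figures quoted above (a sketch, not audited financials):

// Rough 2013 figures quoted in the parent comments.
let totalProfit     = 37.0e9   // Apple's overall profit
let appStoreRevenue = 10.0e9   // App Store gross sales

let profitAt30 = appStoreRevenue * 0.30    // ~$3.0B
let profitAt34 = appStoreRevenue * 0.34    // ~$3.4B
print(profitAt30 / totalProfit * 100)                  // ~8.1% of overall profit
print((profitAt34 - profitAt30) / totalProfit * 100)   // ~1.1% bump in overall profit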

Somewhere in Apple there are commodities experts who are sweating the pennies on every app sale and who are really determined to try to hit the sweet spot on the price v. demand curve consistent with the constraint of not alienating developers too badly.

I agree there are analysts

The reason I said it is irrelevant is that you're talking about a 1% increase in profit - for something that presents a big risk to the 99%.

The App Store is a big selling point. Increasing the quality of the apps there is important to Apple; it drives the exclusive image of the iPhone/iPad, which again sells to customers.

If they can grab 1% more profit without risking that, I'm sure they'd do it - $400M is nothing to sneeze at. And I'm sure there are analysts looking for these opportunities.

However, Apple will not grab 1% more profit there if it places any substantial risk on their core business; and my argument is that anything that makes the end user experience substantially worse is a substantial risk to the core business, and anything that alienates developers (except to benefit end users) is a substantial risk to the core business.

"anything that alienates developers"

¿like all the cockamamie hoops one has to jump through with certificates and profiles? ha ha oh well. (vs., say, android.)

Wow is most of this wrong

First off, most of this is wrong. Start with point (4). Apple's goal is to sell Apple hardware. Apple is thrilled by good quality $0 applications because good quality free apps sell phones and tablets. Until fairly recently Apple's App store was run at a loss: the cost of vetting the applications and managing the developers exceeded Apple's 30% share of the total revenues from application sales.

Now over the long haul Apple is well aware that their 30% commission on software might become the real profit center for phones. Going into a price war, dropping the price of their phones and raising software prices by a factor of, say, 4x is a card they have in their deck, but they haven't played it yet.

Apple's charges for developer kits, non-standard APIs, and non-standard languages all raise the cost barrier to entry for new apps.

Apple charges for developer kits well below what it costs them to support developers. $99 is a token amount, but a token amount large enough that people don't want to lose their developer ID for misconduct. It's enough for them to do the basic checks and a small identity verification. That's it. It certainly is not a revenue source. There are 6 million Apple developers in total, most of whom are using the free tools. If we assume 1/3rd are paying $99 (a very high estimate BTW) that would be .1% of revenues, about 10% of iPod revenue, and Apple considers even the iPod a dying product.

Non-standard APIs, OTOH, are why people buy Apple products. Those APIs tie into the features of their OS and their hardware; that is the whole reason their products exist. They don't want standard APIs because they oppose the lowest common denominator approach that standard APIs force. If Apple users were forced to use software that worked well on crappy hardware and OSes, the entire Apple experience would be gone. The reason people pay 2.7x as much for the average Apple laptop as the average Windows laptop is that they enjoy the experience.

I'm sitting here enjoying the wonderful fonts at 220 PPI. It's an enjoyable experience because every single piece of software I run is tested at that resolution to render properly. My battery lasts an extra 20-30 minutes because almost all the applications I'm running are designed to function well with coalescing. Whenever I have to run any piece of software that wasn't written by developers enmeshed in Apple culture, I'm immediately reminded of the hundreds of wonderful APIs that Apple makes available, as what are normally (for Apple users) simple, automated operations that happen in a completely instinctual way become complex operations that require focus: "seriously, Windows doesn't have that built in by now? we've had that for 10 years".

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of platform enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms. -- Steve Jobs

As for non-standard languages... non-standard languages keep developers who aren't going to grok Apple culture off the platform. The goal isn't to raise costs, but to drive commitment. Behavior changes belief. Apple wants there to exist "Apple developers" and for end users to want software made by "Apple developers".

This isn't about cost in the sense above.

Hurts Enterprise Products

This is hurting application development. Companies with significant investment in the Apple experience have enterprise apps that specifically target iOS devices. Enterprise customers say they don't want to be tied in to Apple, and have problems with the hardware cost. If the enterprise customer buys into the Apple brand experience it can work, but in general enterprises seem less influenced by Apple brand values than individuals. This leaves developers having to develop a second version of their apps. It would be so much better if developers could have a single source for all platforms that could use native features with an API to detect which features were available on any given platform. In the end developers are more interested in getting their app experience, which is their competitive advantage, consistent across platforms.

Enterprise products

@Keenan

This is hurting application development. Companies with significant investment in the Apple experience have enterprise apps that specifically target iOS devices. Enterprise customers say they don't want to be tied in to Apple, and have problems with the hardware cost.

Well first off I would comment that Apple is not an enterprise vendor; they are semi-hostile to the goals of enterprise IT. Arguably the entire BYOD movement, which is what drove iPhone adoption in the enterprise, was a reaction against the poor user experience that was a direct result of the policies that best fit enterprise IT's goals. The interests of the employees and the interests of the employers often conflict. Employers switched away from BlackBerry because employees were willing to keep the iOS "work phones" with them at all times and were often willing to pay for their own work phones and data plans. This shift created a value far exceeding the higher hardware costs (especially if the employer wasn't paying for the hardware). Apple, unlike the choice Microsoft made in the 1990s, is mostly siding with the employees over the employers in how they target their platform. But the effects of that choice are often quite a bit more beneficial to employers than having their immediate wants met.

So when you say "enterprises want X" I think you need to be a bit more nuanced. The enterprise has conflicting and complex interests.

Secondly, and more importantly, I don't see any evidence at all that Apple's strategy is hurting enterprise application development. Enterprise application development on mobile is growing far faster than the growth in handsets, and as handset growth is leveling off we are seeing no leveling off in enterprise application development spending.

If the enterprise customer buys into the Apple brand experience it can work, but in general enterprises seem less influenced by Apple brand values than individuals. This leaves developers having to develop a second version of their apps.

Well yes. And ultimately as Apple and Android pull further away from one another that cost and complexity is likely to increase. Which is going to lead to less porting, and thus further fragmentation of the user bases.

It would be so much better if developers could have a single source for all platforms that could use native features with an API to detect which features were available on any given platform. In the end developers are more interested in getting their app experience, which is their competitive advantage, consistent across platforms.

That's not true at all. Some app developers are interested in consistency, a lowest common denominator approach. Others are much more interested in best of breed. Microsoft's enterprise software offerings most certainly take advantage of unique advantages of the Microsoft OS ecosystem to lower administrative complexity and thereby TCO. iOS enterprise software offerings most certainly take advantage of the limited range of hardware and OSes to lower testing complexity and thus lower development costs for applications. In both cases the enterprise software benefits from inconsistency across platforms.

What you are asking for are cross platform mobile toolkits. HTML5 / PhoneGap exist. Enterprise software vendors who want consistency across platforms can have it. The reason they are writing in Cocoa, though, is that they want performance more than they want consistency.

Doublethink

You equate too many things that are unconnected. For example, "Some app developers are interested in consistency" and "a lowest common denominator approach", implying that any interest in consistency is somehow inferior to a platform specific approach.

Common code can select platform features based on the available services on a platform. If the device has a 12MP camera, the photos captured are not going to be downgraded to 8MP. If you want a button the user can press, the button can be rendered by the OS on the device as it likes.

You say you see no evidence of this, yet here I am providing you with evidence and you are not listening. It's very easy not to see things with your eyes closed. Effectively your argument presupposes the outcome the vendors desire, as if it were the only way things could be, ignoring the alternatives the vendors do not want.

Perhaps you can persuade some enterprise clients that they really should buy Apple hardware for all their employees so that they can run wonderful software that takes full advantage of the iOS experience? If you can't then the above opinion is just wishful thinking disconnected from the reality on the ground. Too many enterprises seem locked into Microsoft, and to exchange one vendor lock-in for another seems short-sighted (although I understand Apple's reasons for trying).

I might have more sympathy for your argument if iOS devices were clearly superior to the alternatives, but the WiFi network connectivity on iOS devices is seen as inferior in many enterprises due to a series of unfixable problems, which hurts the introduction of iOS devices, and gives corporate IT a point at which they can dig their heels in to oppose introduction. This brings us back to the corp IT people not wanting to be tied to a single hardware platform (although they don't seem to mind being tied to a single software vendor like Microsoft).

Finally, none of this is speculation; I work in this area, and have been out in the field actually talking to people and listening.

Doublethink

@Keenan

Finally none of this is speculation, I work in this area,and have been out in the field actually talking to people and listening.

I go to many of the major mobility conventions; I don't see Fry-IT or you anywhere. Which Telcos know who you are? What Colos are buying from you or even know you? I'm meeting with Sprint at their NY headquarters on Wednesday about the new access points from Allied along Miami to Jacksonville; who should I ask for on your behalf? So don't try the "I'm in the industry and you are not". That may work with the academics; it won't work with me. Because I'm really in the business and I know the others in the business. Looking at your site, you are a bit player in health (socialist in the UK) and education.

And no, I don't listen to what mid level executives say. I listen to what they are willing to pay for. Enterprise IT is lazy like everyone else and would like to do as little work as possible supporting their user base. But that's not the whole enterprise. That unresponsive style of software construction they favor is why BYOD came in, and that's why they are getting pushed out by the business groups. The business groups are the enterprise and they don't like least common denominator.

Now if I were going to listen to opinions I'd be looking at the good quality mobility research that comes from major research firms and business industry groups, not the executives I happen to shoot the breeze with one day. I'm on the executive council for Nemertes and NJTC; those are so-so on this (and still don't agree with you) but far better than just some guy's opinion. IBM and InfoWeek tend to be excellent. Michael Saylor of MicroStrategy, who writes many of the most expensive enterprise mobile applications with or for internal IT groups, doesn't agree with you. And finally the FCC under Obama does a really good job collecting representative information in the public interest. So if I've got to listen to someone, I'm listening to them, not some under-ten-man shop out of the UK.

So moving on from your attempt to take a superior tone.

You say you see no evidence of this, yet here I am providing you with evidence and you are not listening.

You haven't provided evidence. You have provided your opinion not backed by anything other than your claims to what people want. You want to provide evidence go ahead.

Common code can select platform features based on the available services on a platform. If the device has a 12MP camera, the photos captured are not going to be downgraded to 8MP. If you want a button the user can press, the button can be rendered by the OS on the device as it likes.

And that's a perfect example of least-common-denominator thinking: that all cameras are more or less the same except for the pixel count. And that's not the case. The physical camera on the HTC One M8 doesn't handle bright light well, so it is going to wash out colors. On the other hand, the HTC One M8 is going to produce much better depth of field than the iPhone 5S. Or, with the Galaxy S5, I can do better in low light conditions than with either of the other two. If you actually care about doing a good job, depending on the problem domain you have to compensate for the actual specific physical issues with each and every single choice that each and every single manufacturer made when they built their phone. You can either do that by writing good camera software or by using a platform / device specific photo library. But in no case is least-common-denominator going to cut it. That, if it does much of anything, is just going to produce bad photos and, from there, other problems.

Perhaps you can persuade some enterprise clients that they really should buy Apple hardware for all their employees so that they can run wonderful software that takes full advantage of the iOS experience?

Enterprise clients realize huge savings from their employees being willing to buy their own Apple hardware to run the business applications on. And moreover, because it is their own hardware, employees are more careful and deal with the higher replacement / repair costs themselves. That's where the saving is. That's been the whole idea of BYOD. For those enterprises that do have to buy the phone, the average enterprise employee cellular plan (minutes and data) in 2013 was $1044 / yr. The difference in price between Android and Apple is a month or so of that. It is not a huge ticket item on the fully loaded cost of an employee.

This brings us back to the corp IT people not wanting to be tied to a single hardware-platform

Of course, all things being equal, enterprise IT would rather not be tied to a single hardware platform. They would much rather be able to nickel-and-dime the hardware vendors. So what? Apple wants to tie them to a hardware platform. Why is it in Apple's interest to give them what they want? That's how this started. Then you switched to arguing that the government should regulate to force this because there is some great societal interest which you can't really name.

As for the WiFi stuff, I'm going to drop this. I'd like to find any CEO who says that his company can't implement an 8-9 figure change in his HR cost structure because his IT group doesn't want to fix their WiFi for whatever issue you are talking about.

Shoot the messenger

You could at least get my name right :-)

Here I am offering genuine reports from the ground, probably similar to many other SMEs (remember, about 50% of GDP comes from SMEs), and the response, instead of addressing the problem, is to try to attack the credibility of the reporter. I am posting my personal views, not those of the company, so I don't feel comfortable discussing clients here.

Your comments about the camera are completely backwards. I can provide an abstract interface to the camera, and the photos captured will vary in quality independently of my software. In this way each platform can differentiate on camera quality and on its user interface for photography. All I care about is getting the captured image as a JPEG file. Android's Intents are great for this. I just say "get me a photo", the vendor-specific software supplied on the device by the manufacturer takes over (offering the user whatever superior experience it has), and it returns me a string pointing to the captured image file (and if the hardware or software is superior, so will be the quality of the image in the file).
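
To make the pattern concrete, here is a minimal sketch of the kind of abstract capture interface being described. It is written in C++ with entirely hypothetical names and a stub adapter; on a real device the adapter would fire an Android ACTION_IMAGE_CAPTURE Intent or present a UIImagePickerController on iOS. The shared code only ever sees a path to a JPEG.

#include <iostream>
#include <string>

// Hypothetical interface: cross-platform code asks only for the path of a
// captured JPEG and never sees how the photo was taken.
struct PhotoCapture {
    virtual ~PhotoCapture() = default;
    virtual std::string capturePhoto() = 0;   // returns path to a JPEG file
};

// One small adapter per platform. On Android this would fire an
// ACTION_IMAGE_CAPTURE Intent; on iOS it would present a UIImagePickerController.
// A stub stands in here so the sketch compiles on its own.
struct StubCapture : PhotoCapture {
    std::string capturePhoto() override {
        return "/tmp/captured_photo.jpg";     // illustrative path only
    }
};

// Shared business logic: attaches whatever photo the platform produced,
// at whatever quality the device's camera and vendor software can deliver.
std::string attachPhotoToExpenseReport(PhotoCapture& camera) {
    return camera.capturePhoto();
}

int main() {
    StubCapture camera;
    std::cout << attachPhotoToExpenseReport(camera) << "\n";
    return 0;
}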

Here's some comments about the wifi: http://www.net.princeton.edu/announcements/ipad-iphoneos32-stops-renewing-lease-keeps-using-IP-address.html
http://www.net.princeton.edu/apple-ios/ios40-requests-DHCP-too-often.html
http://www.net.princeton.edu/apple-ios/ios41-allows-lease-to-expire-keeps-using-IP-address.html
Note: the last one affects versions up to 6.1 (and maybe newer, but they stopped testing at 6.1)

I think you are reading this slightly wrong. Ideally I think languages should be independent of vendors, and that's a separate argument (that post was not in response to this one). On the whole I like Apple, and have lots of Apple kit. I am invested in Apple and want this to succeed, and would like nothing more than for enterprises (and I use that generically, including both public and private sector) to go for Apple enterprise-wide. It would help me a lot, and help sell enterprise products dependent on Apple devices. My real-world problem is the resistance to this, not the concept itself (as opposed to any philosophical problem with vendor languages).

So are you offering to help, or is it all just talk? I will happily email you details confidentially if you can.

Shoot the messenger

@Keean

and the response instead of addressing the problem is to try and attack the credibility of the reporter.

The reporter was making some claims you are not backing down from. You were introducing your expertise as the key point of evidence. I'm happy to move this away from that realm, but if you want to keep claiming that you are an expert based on your position, then your position matters.

Your comments about the camera are completely backwards. I can provide an abstract interface to the camera, and the photos captured will vary in quality independently of my software. In this way each platform can differentiate on camera quality, and their user-interface for photography. All I care about is getting the captured image as a jpeg file.

Then if you just want the lowest common denominator you don't have a problem. Something like
<input type="file" accept="image/*;capture=camera">
and you can use HTML5. What you are asking for already exists and all the vendors support it.

My real-world problem is the resistance to this, not with this as a concept (as opposed to any philosophical problem with vendor languages). So are you offering to help, or is it all just talk? I will happily email you details confidentially if you can.

If you are talking about an enterprise with 250+ registered devices trying to decide on a mobile strategy and needing help putting together a vendor plan, I do offer that service. If the enterprise is fine with an agency relationship, the carriers, SaaS vendors, etc. will pay my guys, so I can do it free for the enterprise. And as just one of the many benefits of signing with an agent, I'll offer to throw in ripping out their whatever-doesn't-work WiFi and replacing it with a new Cisco system (certified, with Cisco maintenance for the life of the contract) that is tested for iOS, at no charge as part of the agency agreement.

So if there are enterprises which don't know what they want, sure, I'm happy to help. But so would hundreds of other vendors offering similar packages. So if this is really the holdup, sure, I'm happy to do business. But I can't imagine that this was really the holdup, because I'm sure their current agent (if they have one) would say the same thing.

Credibility

Regarding credibility, I am using my real name, and you could, if you wanted, google my LinkedIn profile or look at my company's website and see the client list, which includes mobile operators like Orange/EE, Vodafone and O2. See:

http://www.fry-it.com/case-studies/orange-photo-sharing-platform

http://www.fry-it.com/case-studies/vodafone-mobile-camera-to-postcard-service

I have not made any unjustified claims about myself or my business, and my credibility is based on publicly available information. I don't see you providing similar transparency.

I disagree with that approach being lowest-common-denominator. I have repeatedly explained why this is not a lowest-common-denominator approach. Either you are being deliberately misleading, or you genuinely don't understand. Giving you the benefit of the doubt, I will explain lowest-common-denominator:

LCD is where you code to the 'lowest' specification of all the platforms you want to support. For example, you have two devices, one with a 2MP camera and one with an 8MP camera. In an LCD approach you would have to use only 2MP on both devices (the lower of the two specifications).

If you take a generic approach (handle images as JPEGs) then you do not care what resolution the camera captures; the image will be the highest resolution the camera can capture. If you take a specialised approach you have different software for each platform that takes advantage of unique features and integrates with the rest of the application via an API. Using these two techniques together you can write software that is not lowest-common-denominator but takes advantage of the unique features of all supported platforms.

What you are missing is that currently people write two completely separate implementations (say Java for Android, and Objective-C for iOS), so any common code that could be extracted would be an advantage for the developer's bottom line. If the same language could be used on both platforms, most of the core business logic could be shared, with just platform-specific adapters for each phone. The phones actually share more than you would think underneath: for example, both use the SQLite database, both are based on Unix-derived operating systems, both use the BSD network stack (with the same API), etc. None of this in any way reduces the ability of the developer to exploit the unique features of each platform.
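
As an illustration, here is a rough sketch in C++ with made-up names (not anyone's actual product code) of that shared-core-plus-adapters structure: the business logic is ordinary portable code written against a thin adapter interface, and each platform supplies its own implementation of that interface on top of the SQLite and socket facilities both platforms already have.

#include <cstddef>
#include <string>
#include <vector>

// Thin platform adapter: each platform implements this against its own
// storage facilities (e.g. a SQLite wrapper on Android or on iOS).
struct OrderStore {
    virtual ~OrderStore() = default;
    virtual void save(const std::string& orderJson) = 0;
    virtual std::vector<std::string> pending() const = 0;
};

// Shared business logic: identical source on every platform, with no
// platform-specific types anywhere in it.
class OrderBook {
public:
    explicit OrderBook(OrderStore& store) : store_(store) {}
    void placeOrder(const std::string& orderJson) { store_.save(orderJson); }
    std::size_t pendingCount() const { return store_.pending().size(); }
private:
    OrderStore& store_;
};

// In-memory stand-in so the sketch is self-contained.
struct InMemoryStore : OrderStore {
    std::vector<std::string> orders;
    void save(const std::string& o) override { orders.push_back(o); }
    std::vector<std::string> pending() const override { return orders; }
};

int main() {
    InMemoryStore store;
    OrderBook book(store);
    book.placeOrder("{\"sku\": \"A-1\", \"qty\": 2}");
    return book.pendingCount() == 1 ? 0 : 1;
}

Only the OrderStore implementations differ per phone; everything in OrderBook is reused as-is, which is the bottom-line saving being argued for.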

As it happens there are many WiFi companies that don't seem to be able to supply WiFi that works with large numbers of people. I have been to conferences where you just can't get a WiFi connection when all the delegates come into the exhibition hall. However, my point was more that it's fine for you to tell me all this, but when a client does not want to be locked into Apple kit and that is the blocking point on a sale, all this fine talk and good intention counts for nothing. I am reporting this not because I have a problem with iOS networking, but because this has been raised by clients as an objection to buying an iOS-only application. The best solution would be if everyone wanted to buy iOS devices; then I would only have to develop a single solution. Given that that is not happening, I have to develop multiple versions, in which case I would like to re-use as much code as possible between platforms, whilst still using as many of the unique features of each platform as I can. I need to do this because I know users of each platform do not like to feel they are getting a second-rate service, but want to feel there is perfect integration between their platform and my application. Bearing all this in mind, there is still a significant chunk of common stuff that can be cross-platform, and that would be significantly helped by a single language.

So can we agree I have some relevant real world experience? Can we agree on the definition of lowest-common-denominator? Do you have any practical advice to help the bottom line of B2B developers dealing with fragmentation in the mobile market?

@ Keean The examples you

@ Keean

The examples you give prove some connection with mobility marketing or something like that. They have not much to do with mobility other than operators using them as a gimmick, nor are they bought by small businesses, nor are they enterprise apps at all. They do justify your claim to having worked with photography, but certainly at a lowest-common-denominator level. You are just pulling a picture; you aren't using any platform-specific advantages.

I have not made any unjustified claims about me or my business

You claimed to be knowledgeable about the mobility business because of your ties to it. So that's why I started asking who knows you and why I've never heard of you. Me, I'm one of the scavengers that runs a business eating the scraps the real players aren't interested in. I keep my people employed, but that's it. My level of expertise is listening to the people who have the resources and incentives to do this kind of research properly. Which, if we are going to talk, is where this should start: not your buddy in IT who heard about a WiFi problem in iOS 3-6 and thus doesn't want to buy Apple.

I disagree with that approach being lowest-common-denominator. I have repeatedly explained why this is not a lowest-common-denominator approach. Either you are being deliberately misleading, or genuinely don't understand.

Or maybe another option is that I understand what you are saying and disagree with you, since in particular we are talking about Apple, and Apple uses lowest-common-denominator to mean not taking advantage of Cocoa-specific features. Incidentally, Samsung uses lowest-common-denominator to mean not taking advantage not only of Android-specific features but also of Samsung-specific features like Knox.

As it happens there are many WiFi companies that don't seem to be able to supply wifi that works with large numbers of people. I have been to conferences where you just cant get a wifi connection when all the delegates come into the exhibition hall.

I haven't met a single vendor who isn't able to supply WiFi that can work with large numbers of people. Apple, incidentally, is a member of the enterprise WiFi alliance, as is Samsung. None of the vendors believe their hardware is the problem. Aberdeen, which has actually studied a large number of public deployments and produced best-practices documentation, doesn't believe Apple is the problem. Cisco, which has deployed WiFi to stadiums serving tens of thousands, doesn't believe Apple is the problem.

Now what I have seen are lots of places that do WiFi as an afterthought: they don't apply best practices, they don't hire network engineers to build it, they don't keep their systems updated, and consequently their systems suck.

So can we agree I have some relevant real world experience?

I agree you have relevant real-world experience as a developer of mobile applications. When it comes to having any idea what's in consumers' or producers' best interests, no, we don't agree. You haven't spent the time to realize that your interests and their interests are not the same thing. You just assume what's best for you is what's best for them.

Can we agree on the definition of lowest-common-denominator?

Nope. I'm going with the big players' definition. Besides the appeal to authority, I happen to think they are right. You start by engineering the application to the culture of your end-user base. Those different platforms exist because the end users have different preferences. Applications should not, in general, be seamless across user communities; rather, they should be tailored to the needs and desires of each one. Where it is possible to provide a lowest-common-denominator approach, certainly there is no problem with that.

But the reason developers are having to write more application versions is that the users benefit from having platforms that fit their needs.

Do you have any practical advice to help the bottom line of B2B developers dealing with fragmentation in the mobile market?

I think I've given it multiple times in this thread already.

Unproductive.

Did you read the other case studies on our website? In any case, the question is not my absolute experience but how it compares to yours. You have provided nothing to support your opinion; I have provided plenty. So I leave it up to the reader to decide who has provided the most support for their opinion.

As for my buddy in IT, I don't know where you get that from; I clearly said it was a client, and it was their head of IT. I am not going to name who they are, and I don't believe you would reasonably expect me to. I don't know what your aim is here.

I think you have to separate user-visible features from developer convenience when talking about lowest common denominator. If we can't even agree on something as simple as that, I don't think there is going to be much common ground on which to continue the discussion. I really don't think the app user cares whether it uses Cocoa or not, as long as the experience conforms to their expectations of the platform.

I don't think my best interests and the client's are the same, so I have obviously failed to express myself well enough. My best interest is to develop solely on Apple, using Apple's specific tools and ecosystem. My clients tell me they want support for systems other than Apple.

In summary, it appears your argument is that you cannot write a cross-platform app that is not lowest-common-denominator. You then justify that by defining lowest-common-denominator to be exactly what you need it to be to support your argument. This is a tautology. I think to get any agreement we need to reframe the argument without reference to lowest-common-denominator, as we disagree on its definition.

How about: it is possible to write cross-platform applications that offer as good a user experience as single-platform applications, if sufficient effort is spent on writing specialised sections of code for each platform, although large amounts of the business logic may be shared between all platforms. Can you agree to that?

re "wow"

Sorry, but no.

First things first. You say:

First off most of this is wrong. Start with point (4). Apple's goal is to sell Apple hardware. Apple is thrilled by good quality $0 applications because good quality free apps sell phones and tablets. Until fairly recently Apple's App store was run at a loss. The cost of vetting the applications and managing the developers exceeded the 30% of the total revenues from application sales.

It's not so simple. As Apple's HW share declines and as their products move inevitably to lower costs (hence lower margins), the "services" component of the business grows in relative importance. Apple is working hard to accelerate that growth.

Have a glance here for one Morgan Stanley analyst's take: appleinsider.com

Specifically, she noted that the iTunes Store is estimated to have about 15 percent operating margin, while the App Store has much higher estimated margins around 46 percent. She believes that the growth of the App Store and its strong margins could add 10 basis points to Apple's total company-wide operating margins for calendar 2014.

As for this:

Apple charges for developer kits [...] is not a revenue source.

(The elided context makes it clear you mean "profit", not "revenue".)

Can you show me where I said otherwise?

Non-standard APIs OTOH are why people buy Apple products. [...] The reason people pay 2.7x as much for the average Apple laptop as the average Windows laptop is because they enjoy the experience.

I would guess that the buying decisions are more diverse and complicated than that but it doesn't matter either way to the arguments I've previously made.

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform.

Yes, Apple likes to tell its third party developers that the make-work assigned to them is Very Important to something vaguely called the "user experience". I'm sure there are true believers, too.

In a free market some of those so-called "sub-standard" apps that Apple keeps at bay might deliver greater use value but, surely, they would do less than "standard" apps to further Apple's branding strategy.

Since developers gain nothing by furthering Apple's branding strategy, Apple pretty much has to mystify the make-work requirements by promoting brand fetishism.

The goal isn't to raise costs, but to drive commitment.

I guess in your view Apple is in it for love and passion, money be damned.

Wow thread

@Thomas

It's not so simple. As Apple's HW share declines and as their products move inevitably to lower costs (hence lower margins) the "services" component of the business grows in relative importance.

What evidence is there that Apple's HW share declines? As for moving inevitably to lower costs, computers are an instructive example. In the last decade the relative price difference between an Apple laptop and a Windows laptop has gone from 1.4x to 2.7x. That's a relative increase in price. On desktops it is even more extreme. It has been 170 days since the Mac Pro was released, and Apple is still running 5 days behind on orders; that is, they are still selling every Mac Pro they can make before they finish making it. And that's a desktop computer whose lowest-priced model starts at $3k.

People will pay for quality. The lowest-cost producers don't always win. When I look at the road I see cars other than the Kia Rio, Nissan Versa, and Hyundai Accent. I don't know that it is inevitable that Apple will chase lower prices. It is entirely possible that for many years Apple will be able to maintain their margins by avoiding commodification.

As for services (I'm assuming you mean online services), Apple traditionally bundles the services (mostly) into the cost of the hardware. I can easily see their service costs rising as they offer more free services as a differentiator, but that's not going to be a factor in them trying to make money from application sales.

The elided context makes it clear you mean "profit", not "revenue".

I meant both. Assuming extremely high numbers, it represents 0.1% of Apple's revenues. And that 0.1% of Apple's revenues doesn't cover the costs of providing the services associated with those revenues.

Yes, Apple likes to tell its third party developers that the make-work assigned to them is Very Important to something vaguely called the "user experience". I'm sure there are true believers, too.

The true believers are the public, who are willing to pay substantially higher prices for those user experiences and consistently express higher levels of satisfaction. Again, on computers, recent numbers were 12% YoY growth in Mac sales in an environment of 5% contraction, and that's for a much more expensive product. The reason for that is a high level of user satisfaction. Focusing on a better user experience is not "make-work"; it is what developers should be doing.

In a free market some of those so-called "sub-standard" apps that Apple keeps at bay might deliver greater use value but, surely, they would do less than "standard" apps to further Apple's branding strategy.

It isn't just branding. Apple doesn't dispute that there are costs, especially in the short term. But Apple and Apple's user base understand that there is an element of the tragedy of the commons here. A sub-standard application that takes hold can often, via incumbency effects or network effects, prevent better alternatives from emerging. As a platform, complex tradeoffs have to be made between services undermining the unified total experience of the user on the device via lowest-common-denominator approaches and the short-term harms caused by not allowing something cross-platform to function.

In the case of Google/Android and Microsoft, the choice is clearly in the direction of lowest common denominator in all but the most extreme cases. In the case of, say, Cisco, Avaya or Apple, the choice is clearly in the direction of careful regulation, even at the cost of losing access to some lowest-common-denominator applications. I trust Cisco to act in the best interests of their platform even when that means I don't get to run a particular piece of software. In the case of Apple, the success of their approach by almost any measure, whether it be satisfaction, average number of hours of usage per year, cost per device... the numbers are so extreme and so clear there isn't any doubt.

Since developers gain nothing by furthering Apple's branding strategy

What developers gain by furthering Apple's branding strategy is a highly satisfied customer base willing to pay vastly more per user for software. For April 2014:

Apple: $US870m software sales, $51.1m per percentage point of market share
Google Play: $US530m software sales, $6.4m per percentage point of market share

Having a customer base willing to buy stuff is a huge advantage to people selling stuff.

I guess in your view Apple is in it for love and passion, money be damned.

No. I think Apple uses love and passion to make money. They accomplish both.

Not going there.

What evidence is there that Apple's HW share declines?

It's been headline news for a long time.

If it was headline news,

If it was headline news, wouldn't pulling out one citation be easy?

point of order

If it was headline news, wouldn't pulling out one citation be easy?

Yes. That's one of the reasons I'm not interested in giving JeffB more extensive uptake. What is your point?

conspicuous consumption

"Hey Google, you have your languages, well here's ours!" ?

Pissing contest bit too mundane

If you evaluate Go you end up with the observation that it's primarily a well-engineered datacentre language. It might even replace C as a language of choice for web servers, or replace bash, but I doubt it.

Even at Apple they will have recognized that. Go is no real competition for them; JavaScript makes more sense. But some managers might have the feeling that they should move in the direction of developing their own language, extrapolating from Google's strategy. Well, those managers would be right, even if they didn't understand the argument fully.

Tower of Babel

So every vendor has their own language, and nothing is portable from one platform to another. Any dream of code reuse and a common repository of algorithms is gone... this surely is programming's worst nightmare.

Yet we understand that the semantics of programming is all the same even if the syntax is different. This is where mathematics has it right: although there may be competing notations initially for things like differential calculus, over time the community comes to a consensus and moves towards a standard notation. This is why I think the Haskell way is right.

Edit: I am talking about the process by which the language was created and standardised when saying the Haskell way is right, not the language itself.

Newspeak

Tower of Babel

I find Haskell a dead-on-arrival language. I believe in neither laziness nor purity, and I certainly don't agree with monadic abstractions to enable programming in the large. Neither do I believe in type classes as a software engineering principle.

That is not to say that I don't like lazy pure programming; I just don't think you can define a GPL on top of it. Neither do I mind research or using it as a teaching tool.

We're repeating arguments. I think Haskell will find a niche like, but bigger than, Prolog. And it'll remain a nice research language.

I am not proposing a Tower of Babel. I am proposing, or rather supporting the current, rich ecosystem.

The semantics of programming is all the same even if the syntax is different

Energy, space, time. We already have a lingua franca which circumvents the Tower of Babel: it's called C. Yah. Haskell could replace C, or C++, at some point if it threw away laziness, recognized the concrete machine, and let you inspect the concrete representation of things and use bit twiddling on it to implement very fast hash functions. Maybe they'll add that, or already did, maybe they'll throw away the monadic abstractions, maybe they'll do other stuff. Since they want a GPL they're constantly forced in the direction of the concrete machine anyway.

Whatever. Everybody is constantly evaluating everything and my bet is that most people come to the same conclusion over time. If I am right, we'll see a constant proliferation of languages over the coming years trying to approximate their local optima.

Let's fight it out in another ten years.

(Why are you implementing your stuff in C++ anyway?)

(Hah! Bellyfeel, Blackwhite, Crimethink, Duckspeak, Goodsex, Ownlife, and Unperson. Wonder what I am (not) guilty of? Probably everything.)

I Like Haskell (but not exclusively)

I like Haskell, but that does not mean I don't like other languages, and I am not arguing for Haskell as a language (as this thread is about the politics); I am arguing for the way Haskell as a language was created, and supporting the reason for it. I am not suggesting there should be a single language, and I like domain-specific languages. There is room for both C++ and Haskell. I am not sure we need both Haskell and ML though ;-) I generally prefer Ada to C++; however, I have had (brief) dealings with the standards committees of both C++ and Ada, and neither was interested in the kind of generic programming I wanted. C++ won't break backwards compatibility (my issue was to do with overloading and specialisation), and Ada seems to be focused on getting more OO features and uninterested in improving support for generics (mine was to do with generic dependencies). I have since found workarounds for both these problems.

In answer to your question regarding why C++: because it has the widest compiler support on the most platforms (other than C), there are no license restrictions (the free Ada compilers are all restricted in one way or another), and programming in C without polymorphism and generic containers (like vector, set, map, etc.) is too painful. Using templates I can get Haskell type-class and polymorphic-style behaviour, and using inheritance and virtual functions I can get algebraic datatypes.
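
For what it's worth, here is a minimal sketch (not the poster's actual code; all names are invented) of the two techniques mentioned: a Haskell-style type class emulated by specialising a template per type, and an algebraic datatype emulated with an abstract base class and virtual functions.

#include <iostream>
#include <memory>
#include <string>
#include <vector>

// --- "Type class" via template specialisation (roughly Haskell's Show) ---
template <typename T>
struct Show;                        // no default instance: using a type
                                    // without an instance fails to compile

template <>
struct Show<int> {
    static std::string show(int x) { return std::to_string(x); }
};

template <>
struct Show<std::string> {
    static std::string show(const std::string& s) { return "\"" + s + "\""; }
};

// Generic function constrained only by the existence of a Show instance.
template <typename T>
void print(const T& x) { std::cout << Show<T>::show(x) << "\n"; }

// --- Algebraic datatype via inheritance and virtual functions ---
// Roughly: data Shape = Circle Double | Rect Double Double
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;    // one operation over all cases
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265 * radius * radius; }
};

struct Rect : Shape {
    double w, h;
    Rect(double w_, double h_) : w(w_), h(h_) {}
    double area() const override { return w * h; }
};

int main() {
    print(42);                          // uses Show<int>
    print(std::string("hello"));        // uses Show<std::string>

    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Rect>(2.0, 3.0));
    for (const auto& s : shapes) std::cout << s->area() << "\n";
    return 0;
}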

Your comments are very strange

I find Haskell a dead-on-arrival language. I neither believe in laziness or purity and I certainly don't agree with monadic abstractions to enable programming in the large. Neither do I believe in type classes as a software engineering principle.

I think laziness is a mistake, too, but I have to say I find your rhetorical strategy utterly bizarre. You can't sensibly say Haskell won't work for high performance computing, when there are numerous people building high-performance systems in Haskell. For example, McNettle, the world's fastest SDN implementation, is written in Haskell.

World's fastest X often

World's fastest X often depends on the competition more than it does on the qualities of the language. Is Haskell seriously being pushed for high performance, with benchmarks to back that up?

In this case, seems to be yes

If memory serves, the McNettle work appeared in SIGCOMM, where they surely do not care about Haskell, but do care about SDN.

The key point is that an awful lot of effort has been poured into making GHC scale out to a lot of cores, with a lot of work especially on removing concurrency bottlenecks in both GC and IO.

Energy Bill

It may pan out for GHC, but taking on Go, which is primarily engineered as a closer-to-the-machine language partly aimed at keeping Google's energy bill low, might in the long run mean they'll lose.

The only thing going for them is the observation that if you keep processes pure you can optimize better for multicore at the cost of copying sometimes.

But, my bet: they don't use laziness but make everything strict; the programs are too small to be bitten by the monadic abstraction; instead of abstract data structures they are forced into C-like bit-level representations that are somewhat peculiar and slow to handle in an FP style; the garbage collector needs fine-tuning; and they hardly use (lazy) higher-order programming in this setting.

So it works out. But they have too much against them, since all of the above is either handled better in Go or C, or doesn't need to be handled at all, at the cost of more development time. (Or possibly, the cost of rather silly engineers getting the abstractions wrong since they have more choices.)

Maybe Go doesn't get GC or data migration right in time; that would tip the contest in GHC's favor. But the rest is against them.

Higher-Order Function Migration

The Clean group, around twenty years ago, experimented with seamlessly migrating higher order functions, computations, and data around clusters.

Somewhere, for research purposes, that would make more sense, since it might enable new distributed computational models beyond stream-processing data.

I've got no idea why you'd want to migrate a HOF, but they're smart enough to think of something, I guess. Administration and deployment could be interesting, though.

occam

The occam-π crowd (part of the old Transputer community) also experimented with "mobile processes" that allowed computations to be migrated. I think they found some uses for it (see http://www.cs.kent.ac.uk/research/groups/plas/projects/cosmos.html), although I can't say whether there was a genuinely compelling use case or "killer app".

I think relational databases

I think relational databases are a pretty good killer app. Being able to write a local query and have it be migrated closer to the data is compelling. SQL is just a restricted version of this general model.

Indeed they are

Your comments are very strange

Yah. As an exercise in humor I do try to present my arguments diametrically opposed to academic peer-pressured narcissism.

You can't sensibly say Haskell won't work for high performance computing

I never made that claim. But, now that you mention it, I am willing to support it.

numerous people building high-performance systems in Haskell

Reversal of an implication. Applying A to B doesn't imply that A is the better language. It's interesting as a research experiment but, yah, they're probably building a server despite the language. The higher abstraction probably buys them development time, which is something too. But the same design in Go, or C, likely beats them.

McNettle, the world's fastest SDN implementation, is written in Haskell

Cherry-picking. As I said, it's an okay language for academic exercises.

Apple is notorious for pulling rugs out from under developers

Processor changes from 68k to PowerPC to Intel.

The migration from the pre-Darwin MacOS to OSX.

These changes were beneficial to end users, and probably good for developers in the long run, but in the short term all very disruptive.

(MS does these sorts of things too, though less drastically.)

ObjC to Swift could easily be a similar thing, especially since Swift is a new language that few developers have experience with. ObjC, for all its warts, did have a strong developer community, even if it is nowadays only used for two platforms (iOS and Mac).

ObjC replacement

Could also be that they decided that in the long run they need to ditch ObjC (in MacOS).

Pushing on other languages

Already things like Haxe are talking about how they should better support non-nullable types, etc. Which is good and bad. Bad because it pisses me off that people only talk about it after they think Apple validated it or something.

Conclusions

1. We managed to have a fairly civil discussion on hot button issues.

2. We, as a community, don't have a clear, shared and well-founded picture of the pressures and incentives of language design in industry.

A million vague motives

You're not going to abstract over a million vague motives of thousands of people that well anyway.

There are a lot of notions not yet touched upon in this thread, like money and programmers' aptitude, motivation and skills.

Almost everything is vague, which is a good thing for my argument: we need a rich ecosystem (and let Darwinism take care of the rest).

(Not that I have to do a lot about it. A million vague notions will make sure that there is a rich ecosystem anyway.)

Well...

1) This community (into which I drop from time to time) has consistently enforced policies and standards which keep flamewars from spreading too far.

2) I'm not sure anybody does, other than to note that many of the things that motivate the academic posters here (many of whom are grad students or researchers in PLT) are Greek to most industrial programmers. Most paid programming isn't about solving deep technical problems, but about gluing stuff together.

At any rate, industrial programming languages seldom come from dedicated language shops. Out of all the post-Java "app" languages (languages targeted toward large, high-performance apps such as games, as opposed to things like business logic, websites, numerics, enterprise glue, or other such things), I tend to like Walter Bright's D the best. Despite its technical strengths (IMHO), it doesn't see much adoption.

Most successful (in terms of use) PLs come from system vendors. A few come bundled with applications, and occasionally escape their original purpose to be used elsewhere. The open-source community has done well producing scripting languages, far less well producing languages requiring a more complete toolchain and stack. And all major computing environments support C/C++ as a fallback, with Windows and Unix derivatives comprising the surviving operating systems.

conclusions

3. highly nested comments do not seem to dissuade some kinds of discussions, even when 90% of horizontal space is indentation

stateless reasoning

Sort of popping up a response to that thread. This is a subjective interpretation of programming history:

I think the classical approach had an idea of building programs as pretty straightforward state machines. One would never write, in this style, a side-effectful program bigger than one's head. A program of 20K lines that is one big, hairy, state machine is probably too big.

You would use static reasoning to reason about your state machine (such as, e.g., making time or space complexity arguments; or establishing termination under certain conditions).

If you need something bigger, you look for ways to compose pieces of that sort. There's a constraint on what kinds of compositions are allowed. Ideally, the composition glue must (also) be simple state-machine programs no bigger than your head. And there's an independence requirement: the composition glue doesn't in any way alter the behavior of the subsumed state machines. The subsumed machines know nothing of each other. They only know what they knew to begin with: that they have (simple) internal state, input, output, and transition rules.

The Unix pipeline style is a particular example of this pattern, but by no means the only one. Composed machines don't necessarily have to be separate processes. Interfaces must be based on some abstract sense of state-machine I/O, but they don't have to be byte streams. Compositions don't have to be linear.
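
For concreteness, here is a small sketch of this compositional style in C++ with invented names (the idea itself is not tied to any language): each machine knows only its own state, inputs, outputs and transition rule, and the glue, here a linear pipe, though it need not be linear, adds no behaviour of its own.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// A machine consumes one input token and may emit zero or more outputs.
template <typename In, typename Out>
struct Machine {
    virtual ~Machine() = default;
    virtual std::vector<Out> step(const In& input) = 0;
};

// Example machine: splits a line into words (a stateless transition).
struct Splitter : Machine<std::string, std::string> {
    std::vector<std::string> step(const std::string& line) override {
        std::vector<std::string> words;
        std::string word;
        for (char c : line) {
            if (c == ' ') { if (!word.empty()) words.push_back(word); word.clear(); }
            else word += c;
        }
        if (!word.empty()) words.push_back(word);
        return words;
    }
};

// Example machine: counts words seen so far (simple internal state).
struct Counter : Machine<std::string, std::size_t> {
    std::size_t seen = 0;
    std::vector<std::size_t> step(const std::string&) override {
        return { ++seen };
    }
};

// Composition glue: feed every output of `a` into `b`. Neither machine
// knows the other exists, and the glue adds no behaviour of its own.
template <typename In, typename Mid, typename Out>
struct Pipe : Machine<In, Out> {
    Machine<In, Mid>& a;
    Machine<Mid, Out>& b;
    Pipe(Machine<In, Mid>& a_, Machine<Mid, Out>& b_) : a(a_), b(b_) {}
    std::vector<Out> step(const In& input) override {
        std::vector<Out> outputs;
        for (const Mid& m : a.step(input))
            for (const Out& o : b.step(m))
                outputs.push_back(o);
        return outputs;
    }
};

int main() {
    Splitter split;
    Counter count;
    Pipe<std::string, std::string, std::size_t> wc(split, count);
    for (std::size_t n : wc.step("state machines compose cleanly"))
        std::cout << n << "\n";   // prints 1 2 3 4
    return 0;
}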

In my personal experience I associate the loss of this good style with the time period when OO started to become popular and when one team after another copy-cat re-implemented essentially spaghetti-code "GUI" frameworks and apps.

Which brings us to the modern web browser....

thread won't die

Even when people point out that Swift has no clothes (or at least skimpy stuff), I figure it won't be enough to make people yearn for a REAL language.

So Swift will do more harm than good because: (1) Apple's Obj-C environment was so bad that anything looks better; (2) the fanboys will assume it is god's gift; (3) it's love it or leave your job, since in 5 years everything done on iOS will be Swift by default; (4) so lots of people's brains will be poisoned into thinking this is what Good Languages look like.