how many lines of code can civilization support?

Is there any programming language theory that considers absolute upper bounds on how many lines of code (or bytes or whatever) a civilization can actively maintain (as a function of population size)?

Are such upper bounds low enough that the limit should inform programming language design?

Conway's Law

There is Conway's Law, which implies that the structure of code will follow the structure of human communication.

I would imagine that the limits for human-maintained code are aligned with the limits of human communication. These limits seem very high, especially to the extent that we express content through code (e.g. spreadsheets, POV-Ray scenes, video games, CGI in movies, live or active documents, Wolfram tweet-a-program).

I think most of our programming languages and programming models don't scale effectively to anywhere near civilization's limits. The barriers to writing, modifying, discovering, and reusing code are much too high. I can't even live code with pen and paper, yet, at least not without a lot of painful and fragile setup.

Of related concern is the size of the 'stack' of dependencies and relationships that a programmer might need to grok, the distance between behavior and bedrock. If our implicit requirements for "support" include robustness or security then we've arguably surpassed our limits here. But lightweight VMs and unikernels might help shrink our stacks. I've developed a purely functional bytecode and (recently) purely functional abstract virtual machines as a more robust and comfortable bedrock.

re Conway's Law

Thanks, by the way. I had never heard about that and I think it is interesting and relevant to the question I asked.

when will the singularity arrive?

Code written by hand or code written by machines? We already know there are problems that we cannot solve directly with code (speech recognition, self-driving cars) but can solve via machine learning.

The real question is how long until we can just create something that learns it all. It is only a matter of decades, probably. Then code will be obsolete, and our interface with the computer will be complete.

Does it matter?

Does that matter much? If the "singularity" ever arrives, "civilization" will probably be over soon after, so the question becomes irrelevant.

There are intermediate

There are intermediate points, hopefully, where our speech recognizers, dialogue systems, and self-driving cars actually work for us for a while before they take over.

But to answer the original question more explicitly: we've already reached a point where we know some code is impossible to write by hand, even though the same behavior can be obtained by training on data sets. You can't program a self-driving car in Haskell.

The concept of "singularity"

The concept of "singularity" is overrated; the point, as I understand it, is that you can't project existing trends beyond a certain point because they can't continue without being replaced by different trends, and the way that happens can't be anticipated. Like the case I recall from an undergrad history class, of a period (what was it? a century or two?) during the European Dark Ages when the number of water mills in Europe was increasing exponentially; evidently that trend didn't continue indefinitely, since the entire volume of Europe hasn't been filled with water mills, but that doesn't mean everything about Europe changed after that. There's an important difference between "unanticipable" and "totally different".

See also xkcd 3576.

I wouldn't trust a self-driving car working from training on data sets via any technology I've ever heard of. I certainly wouldn't trust any training-set-based technology for pretty much any of the things that Haskell is used for. Nor SQL. Nor JavaScript. Because determinism, while not right for everything, is extraordinarily valuable for a very wide range of purposes. I also see no reason to suppose the practical limitations of Haskell are necessarily inherent to all possible text-based PLs.

Just saying. Forecasting the future of technology is perilous.

mixed up comics

You linked to SMBC 3576

Fixed

Linked right, said wrong. Thanks.

So you are basically saying

So you are basically saying you won't trust a self driving car? Because we aren't getting them with programming. The logicians lost the AI battle a long time ago, to the point that they've all since escaped to PL.

You are probably already using and relying on technology that is machine learned, that we could never have gotten with "determinism." The reason the machine learning field has been so successful lately, while the PL field has seriously stagnated, is that the former is willing to let go and take risks, while the latter is limited too much by tradition and outdated thinking.

So you are basically saying

So you are basically saying you won't trust a self driving car?

I carefully didn't say that. I'd actually addressed this recently on my blog; wasn't going to link to it, but, here.

Blogspot is blocked in the

Blogspot is blocked in the PRC; I'll take a look at work tomorrow.

Logicians may not have lost.

I think it's too early to call that one. I think logicians appeared to lose due to inadequate computer power. It's amazing what a few million times improvement in CPU speed and memory capacity can do.

The human brain appears to have a uniform learning algorithm, and this may be expressible in a form of logic.

The "few million times

The "few million times improvement in CPU speed and memory capacity" has benefits the ML people much more, to the point that we are seeing an acceleration in that work. The logicians trying to hand code world knowledge, in contrast, are getting much less done. Also see Rodney Brooks' Elephants don't play Chess:

http://people.csail.mit.edu/brooks/papers/elephants.pdf

Different but the same

Logic systems can learn; for example, Prolog programs can dynamically add clauses, which represents 'learning'. There is not as much difference between the ML approach (which is basically matrix maths) and logic. You can extract Bayesian decision trees from neural networks, and these decision trees can be encoded in logic.
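
As a toy illustration of that last point (a minimal sketch only; the dataset and model choices are arbitrary, and it uses an ordinary rather than Bayesian decision tree), you can "distill" a trained network into readable rules by fitting a tree to the network's predictions:

  # Sketch: distill a trained neural network into a decision tree by fitting
  # the tree to the network's own predictions. Illustrative only; dataset,
  # model sizes, and hyperparameters are arbitrary choices.
  from sklearn.datasets import load_iris
  from sklearn.neural_network import MLPClassifier
  from sklearn.tree import DecisionTreeClassifier, export_text

  X, y = load_iris(return_X_y=True)

  # 1. Train an opaque model.
  net = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
  net.fit(X, y)

  # 2. Fit a tree to the network's outputs rather than the original labels.
  tree = DecisionTreeClassifier(max_depth=3, random_state=0)
  tree.fit(X, net.predict(X))

  # 3. The extracted tree is a set of readable if/then rules, i.e. something
  #    that could be re-encoded as logic clauses.
  print(export_text(tree, feature_names=load_iris().feature_names))
  print("fidelity to the net:", tree.score(X, net.predict(X)))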

You can extract logic from

You can extract logic from learned neural networks, but it is so twisted and unmodular that you know a human could never have written it themselves. And that is quite exciting: what is it about our modularity constructs that makes them so unnatural?

Brooks sort of gets at it with his reflex-based subsumption architecture for programming robots. But if you look at that, it is very emergent: rather than trying to coordinate all behavior directly, it is better to coordinate behaviors indirectly via prioritized reflexes.

Inconsistency Robust Direct Logic

It was only Classical Logic that was inadequate, because a single inconsistency blows up any classical theory.

However, Inconsistency Robust Direct Logic has overcome this limitation.

See Inconsistency Robust Direct Logic.

Self driving car

I am frankly in shock that, given the state of general AI and computer vision (computers are incapable of understanding the world, let alone understanding what they're looking at), Google and other companies are going ahead with self-driving cars. Yes, we finally have enough computing power to build a crude 3D model from what a camera or other sensors see, but there's a huge gap between having a model and knowing what's in the model and whether something in the model implies something unusual or dangerous is going on.

I am also shocked at the reaction that self driving cars get from this generation, even in places like slashdot where at least some people might be aware of the limitations of current technology.

I was the only one there saying that this is a bad idea at the moment.

We actually had people, LOTS OF THEM, arguing that machines are better drivers than human beings using the usual argument about how machines are generally better at mechanical tasks than humans are, EVEN THOUGH NO MACHINES EXIST YET THAT ARE EVEN PASSABLE DRIVERS IN GENERAL CIRCUMSTANCES.

People ASSUME the superiority of machines, even when no such machine is yet possible. They seem unaware that "not possible yet" is an allowable state.

The other reaction is as if the self driving car is the next model of Apple product. They want it now.

Safety and superiority are just assumed. Therefore it's super cool and they want it.

The current generation of young adults just assumes that software is super competent and better than humans at all possible tasks.

Safety and superiority is

Safety and superiority is eventually assured. Where we are today is just a point before tomorrow. The technology is accelerating to such a point that it is inevitable. You just have to extrapolate; it's not like we'll run out of improvements over what we have today and Google will stop working on it.

Contrast that with PL: no one can imagine that PLs 10 years from now will be much better than today's. Even with what Bret Victor showed us, people just assume there will be a bit more functional programming and that is it. Depressing.

humbug

Safety and superiority is eventually assured. Where we are today is just a point before tomorrow. The technology is accelerating to such a point that it is inevitable.

That is handwaving. A mess of unsupported assertions.

You just have to extrapolate

No, you need to make an actual case ... not just say "by extrapolation".

it's not like we'll run out of improvements over what we have today and Google will stop working on it.

Why do you believe that Google's self-driving car project would primarily be driven by a plan to make self-driving cars?

If a big firm like that is engaged in an obviously bullsh*t line of research like that, the right question might be "what are they *really* up to?"

Your faith in Google somehow exceeds normal experience

"If a big firm like that is engaged in an obviouly bullsh*t line of research like that..."

Why in the world would you assume that they don't have someone in charge who is as stupid as they're acting?

re "someone in charge"

"Why in the world would you assume that they don't have someone in charge who ..."

Institutional decisions are not made by one person, typically.

Experience shows that authority, status

and peer pressure ensure that large institutions are not always more responsible than individuals.

Google is aiming to be the

Google is aiming to be the GE of the next few decades: they find technology that they believe is ready to go with just a bit more development and go for it. There is no conspiracy, just a good eye for what is ready to be baked and a willingness to make it happen.

GE made

Fukushima.

"Just a bit more developement?!!!!!"

But that's my point. Self driving cars are not safe in any slightly unusual circumstance until machines have knowledge and perception as good as humans.

So the 80/20 rule means that the last 20% of competence is impossible until we have full machine consciousness and total competence.

So that's not 80% of the work, that's 99.999999% of the effort.

To claim otherwise is unbelievably irresponsible. Murderously irresponsible. What happens when a road cracks open in an earthquake? What happens when there are emus and people on the road around a turn? Does the machine know that hitting an emu is better than hitting a person, and better than going off the road, etc.?

Machine perception is

Machine perception is getting pretty good, and will eventually surpass what is possible in humans. Machines already do much better than us in several dimensions, and the last pieces are far below full-AI levels.

It is only "when", not "if", and "when" isn't that far away.

If you don't agree with this, I'm not sure what to say. If your argument is a technological one, what is the fundamental limitation of software that will prevent this from happening? We've already got rid of the programming aspect, so what else?

I'm saying that by emphasizing abilities that don't

yet exist, and won't in the near future, people like you are ensuring that companies will be able to get self-driving vehicles out on the road decades before they're really capable of handling unusual circumstances.

Would it kill you to admit that they don't exist yet?

re "fundamental limitation"

"what is the fundamental limitation of software that will prevent this from happening? "

You haven't even expressed any clear and convincing definition for the "this" you are talking about.

Here, consider this for one thing: Google is not proposing to build "self driving" cars at all. Not in the slightest. They are proposing to build 100% remote controlled vehicles that can carry out a certain repertoire of maneuvers autonomously.

Wherever this proposed mode of transportation becomes dominant there is, concomitantly, an enormous concentration of power to control the transportation system.

This is not really AI at all, except in the most superficial sense. Rather, this is a system with a small group of humans at the center micromanaging the transportation from computer screens, and a fleet of vehicles that are more or less exclusively controlled by that small group of humans.

Again, these are just fairly marginal, unsurprising, and quite limited improvements to the underlying technology.

There is no extrapolation of the autonomous features of this line of work to a generalized, safe, socially robust "car". For example:

How will a self-driving car react to traffic directions given by humans around a construction site? Around a chaotic disaster? A few years back large parts of the Berkeley and Oakland hills burned in a fire and cars were the essential, lifesaving evacuation tool. How will self-driving cars cope with rapidly changing road passability? To the abandonment of ordinary lane discipline since everyone is fleeing downhill? To the car-to-car and car-to-pedestrian real-time negotiations of which cars will pick up which needful walkers?

Obviously this car system from Google is not meant to be a real "self driving car"; it is meant to be a new, centrally controlled, trackless mode of transportation.

Here you are talking about extrapolating the future of computation or languages or whatever and you already start off by not noting what the technology under consideration actually is.

This is not really AI at

This is not really AI at all, except in the most superficial sense. Rather, this is a system with a small group of humans at the center micromanaging the transportation from computer screens, and a fleet of vehicles that are more or less exclusively controlled by that small group of humans.

No. Where did you even get that idea from? There is no control room like there is for military drones. There might be some connection to Google data servers, but definitely not "remote controlled."

Here you are talking about extrapolating the future of computation or languages or whatever and you already start off by not noting what the technology under consideration actually is.

I thought it was obvious, but obviously I was wrong. Self driving cars are really just what they say they are, directly spawning from the DARPA self-driving car challenge. There are no remote controls, no central control rooms, just cars that drive. I don't understand why that is so hard to accept as being a thing?

He was fooled into assuming that Google was doing something

useful, not something that's as irresponsible as I've been pointing out.

remote control

The proposal is for cars to rely on mapping, traffic info and more from central control. Not to mention software updates, performance feedback, and so on.

By the time the "smart city" people get through with this the cars will be tapped into municipal and law enforcement systems, too. Overrides for public safety, don't you know.

"Self driving cars are really just what they say they are, directly spawning from the DARPA self-driving car challenge"

In other words: cars with a very narrow range of autonomous maneuvers, meant to be typically deployed under centralized control.

"I don't understand why that is so hard to accept as being a thing? "

You think the Pentagon has been funding research in cars they don't control?

I think you are caught up on the drone analogy that bogusly implies a constant near real-time control of individual vehicles. That is not the only kind of control centralization that matters.

Centralized but autonomous?

Centralized but autonomous? Make up your mind! Just because the car relies on data doesn't mean it's under central control, and the bogeyman of central control by legal entities is a political issue rather than a technological one.

The DARPA challenge explicitly disallowed any interaction with a central system whatsoever during the tests.

"centralized but autonomous"

Centralized but autonomous?

That is not what I said so you should re-read. Your comment is non-responsive.

You know, I always wondered

You know, I always wondered what the PL community thought its "self-driving car" goal would be, the one to drive us forward. Today I realized many of us don't even believe such things can or should exist, that grand challenges just aren't the PL community's thing. That is why our papers are so boring and uninspiring.

In five years, remember what you told me about self driving cars, and see where the world is then. Otherwise, there is no point in continuing this topic further.

:/ programming languages have nothing to do

with releasing autonomous high-speed robots onto a road system that was designed for human drivers. I certainly don't see why you think the Programming Language community should be excited by that - the design of these things fails to use programming anyway. What Programming Language can make sense of opaque neural network training? In a sense it's the opposite of a programming language.

As a programming language problem it just points out that you sacrifice safety at a software level too.

Traditionally, systems where people's lives depend on them are based on languages designed for error avoidance like Ada, but the only methods that are succeeding at anything like AI involve things like neural networks and cannot be proven reliable or safe; the components are trained, not programmed.

So once again, in order to reach the goal we end up with something that isn't engineered; it's discovered, and one just sort of hopes it works and continues working.

Traditionally, systems where

Traditionally, systems where people's lives depend on them are based on languages designed for error avoidance like Ada

I don't think this is remotely true. It was maybe true for a short period of time. NASA writes their software in C. Avionics are often written in C/C++. Most modern software is "assured" by testing, not verification or whatever you mean by "engineering".

This is exactly how drive automation is being done.

Uhm so are we going to see accidents

because memory fragmentation made it run out of memory, or because garbage collection took too long?

Does anyone want to have an auto-car written in C? Or Objective-C?

Seems to me actual driving is

so far beyond the current reach of traditional programming or machine learning that it's not even safe to assume either technique is more suitable than the other. I suspect the technology that could reach it is something we haven't got yet. These technologies appear to me to be, essentially, the purely symbolic approach and the purely nonsymbolic approach; and as I've suggested elsewhere, I don't think sapience is purely one or the other.

Avionics

Avionics are often written in C/C++.

Airbus uses an industrial DSL (SCADE) for most of their life-critical code, AFAIK.

And Ada

Thanks for putting back a bit of actual technical discussion in this topic.

According to the AdaCore customer list, Ada also maintains a presence in critical systems. (They seem to also work on integration with Simulink/Stateflow, which would suggest that those same customers also use such DSLs, although maybe only in the design phase).

If a big firm like that is

If a big firm like that is engaged in an obviously bullsh*t line of research like that, the right question might be "what are they *really* up to?"

Drivers can't pay attention to ads; people doing work or entertaining themselves in a self-driving car certainly could. Not to mention having better ad targeting once you add time, location, and intended destination to Google's already impressive database of personal information, e.g. you'll be passing McDonald's around noon on your way to your destination, would you like to stop there for lunch?

I'm not sure these goals are so "evil" as to offset virtually eliminating tens of thousands of deaths and millions of injuries caused by human drivers every year. Face it, you can try to make human licensing as strict as you want, but licensed humans will still never achieve the near-zero failure rate machine drivers have demonstrated.

Nobody denies that machines have failures, or that they aren't yet sophisticated enough to optimally handle all possible conditions. But humans are so tragically bad by comparison that even if machines could only handle driving for 80% of the population, 80% of the time, you'd still save literally thousands of lives and prevent millions of injuries every year.

re "if a big firm"

Let me try to express the critique a little bit differently. It is not about safety per se or reacting to unusual conditions per se (accident scenes, construction, mudslides, hill fires...).

Those examples help to show that what Google proposes is not automated driving at all. Rather, it is reconfiguration of the transportation system.

Where heretofore the fleet of cars is not especially centrally controlled, Google (et al) propose to radically change this (incidentally creating a new locus of power over the whole of society).

Where heretofore the car network was managed for use by improvisational drivers deeply integrated into the social meaning of the areas they drive, Google (et al) propose to make the social and the improvised features *excluded* from the car network.

Now yes, incidentally, I do think that Google's proposal here is idiotic on its face but that is not my main point to you.

My main point here is that people are casually talking about AI somehow "solving" driving and even doing it much better than people -- but the real, material fact of the matter is that Google proposes to eliminate driving and to build a significantly different kind of transportation system.

An AI chess playing program does not "solve" chess by changing the rules of the game. What Google is doing is not an AI "solution" to driving -- it is solving a different problem and encouraging people to not recognize that the goalposts moved.

Google's popular face on this driving stuff is simply a lie. Here we have a forum of academics, professionals, and so on, and for some reason several of them are highly defensive of a position that denies the lie is taking place.

Programming is social. It is more or less a branch of civil engineering.

It does not seem so far fetched to me that within my lifetime the transportation system in some compact regions -- perhaps Manhattan or London as examples -- will come to be dominated by these robotic vehicles.

To make this work resiliently society will have to conjure up ample computing and network security to prevent the bricking of the entire fleet. Society will have to protect road conditions from human users to some standard needed to preserve the good safety and efficiency of the network.

The total social costs needed to make this trackless mode of personal robotic "cars" work are ultimately extremely high relative to direct labor costs of writing the code and collecting training sets.

I think it is analogous with all critical infrastructure code. The social costs "per line" or whatever are enormous. Society must produce a certain large amount of commodities just to keep the physical environment working in the ways that software requires.

Isn't there, for analytic purposes, a kind of notably low ratio between the total size of critical infrastructure code -- and the size of the population?

The engineering question I'm pondering is if engineers should regard society as a whole as having a slightly cramped budget for code.

We can't have a society in which, say, 10 billion lines of code are critical infrastructure. If society had only 10,000 lines of critical infrastructure code we could certainly afford to take good care of it.

Where in between is the line, taking a holistic view of the political economy, and is it small enough that programming language design should take it into account?
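
To make the question concrete, here is a toy back-of-the-envelope calculation; every parameter in it is an invented assumption, chosen only to show the shape of the estimate, not to assert real values:

  # Toy back-of-the-envelope: how much critical-infrastructure code can a
  # society "take good care of"? All parameters below are invented.
  population = 300_000_000          # people in the society
  maintainer_fraction = 0.001       # fraction able to do high-assurance maintenance
  hours_per_year = 1_500            # working hours per maintainer per year
  hours_per_line = 1.0              # careful review/verification effort per line per year

  maintainers = population * maintainer_fraction
  supportable_lines = maintainers * hours_per_year / hours_per_line
  print(f"{supportable_lines:,.0f} lines of critical code supportable")
  # With these made-up numbers: 450,000,000 lines -- far below 10 billion,
  # far above 10,000. The interesting question is where the real parameters sit.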

Yeah, it's not exactly teaching machines to drive

it's changing the definition of driving (at least in some places) so that what machines CAN do instead of driving will be accepted.

What I see in these arguments is that people want autonomous vehicles SO BADLY that they are willing to let every other definition slip in order to make that possible.

I don't know if I am arguing with people who have long commutes or people whose careers depend on autonomous vehicles or people who want to be able to get their kid shipped around town in their car without leaving their bed... but for some reason these people are extremely motivated.

Since that seems (in my small sample of online nerds) to be a common reaction, it's clear that we're about to have self driving cars almost immediately, unless the technology is actually unusably horrible.

Because if people want this enough, they will change society, they'll change the definition of safety, they'll change anything to get it.

And that's not inherently wrong. But it is wrong to pretend that change is no change.

That's a pretty limited set of choices

I don't know if I am arguing with people who have long commutes or people whose careers depend on autonomous vehicles or people who want to be able to get their kid shipped around town in their car without leaving their bed... but for some reason these people are extremely motivated.

There are also people whose primary interest in autonomous vehicles is to make travel safer -- people who've lost a child, friend or relative to a drunk driver, for example.

Where heretofore the fleet

Where heretofore the fleet of cars is not especially centrally controlled, Google (et al) propose to radically change this (incidentally creating a new locus of power over the whole of society).

I don't understand how you think Google's cars operate. The cars include LIDAR and various other local area sensors, and GPS for navigation. What exactly do you think is "centralized" in Google's approach?

To the larger point, please describe what you consider to be an AI that "solves driving", if you think Google's approach does not qualify.

i can see through rain

I grok both sides here.

Re: centralized vs. autonomous...

I believe everybody knows that there is autonomy involved in the second-to-second driving. The 'centralization' is that the machine gets large quantities of data, including updates/reconfigurations to the actual autonomous software, from some 'central' service. One can paint it as "just data" or "just updates", but (I should hope that) anybody who has any security experience, or anybody who has seen the film adaptation of Minority Report, should be able to see what the anti-centralizers are talking about: The autonomy of these devices is different than that of a human who might have a GPS but doesn't have anybody altering their wetware directly on the fly.

re: have you ever seen the rain?

Raould, you read well. Additionally, something very interesting in yesterday's SF Chronicle:

Someone ran an experiment having four of these cars follow each other through a city. Each car's sensors picked up the car in front and the one behind, and so forth... but they were autonomous relative to one another.

The progress of the trailing car was so slow, so terrible, that the article remarks a human driver would likely have shut off the system and driven by hand.

Humans in the same situation perform better because they are naturally networked "socially" to drivers further ahead and behind. They can see and parse and give a human intention to not just the cars surrounding them, but to a larger chunk of the traffic scene.

To achieve similar performance, the article notes, the robot "cars" will have to "talk with one another".

To work well, that means the robots must be the dominant mode. The mode must operate networked like that. Establishing the mode means excluding the human social operation of the roadways.

When people start to toss off hypothetical casualty reductions for robot cars in cities, they should not count only the accident rate among cars. They should also try to guesstimate how much the security state apparatus must expand to keep the roads safe from having too many human drivers (or other human... uh... "intervention" in the flow of robots). The death counts should consider that security state apparatus as a source of death, even if the death takes a form other than collisions.

To achieve similar

To achieve similar performance, the article notes, the robot "cars" will have to "talk with one another".

To which article are you referring? Because nothing in the article I linked mentions the performance of the set of autonomous cars you describe.

As for achievable performance, engineered control systems react much more quickly than any human could possibly react. I don't believe for a second that an engineered system can't in principle achieve what humans do. I can believe that current or previous incarnations of such a system might overcompensate on speed controls for safety reasons during testing on live roads, but this isn't indicative of what a finished product will do, nor even what the current product could do.

I also don't see why higher speeds necessitate "each car talking to one another," thus yielding the security nightmare you foresee. Car-to-car communication would only strictly be needed for optimizing non-local traffic flows, e.g. cars up ahead communicating a sudden obstruction to cars behind. You could certainly be perfectly happy with a suboptimal system without communication, though.

re "To which article are you referring? "

Front page, San Francisco Chronicle, March 22, 2015.

". I don't believe for a second that an engineered system can't in principle achieve what humans do."

Of course, nobody has said otherwise. I don't see any serious reason why you would believe someone has said otherwise. Ahem.

Of course, nobody has said

Of course, nobody has said otherwise. I don't see any serious reason why you would believe someone has said otherwise.

I meant that an engineered system could do it without the networking you fear, because humans can do it without such networking.

can vs. does

I meant that an engineered system could do it without the networking you fear, because humans can do it without such networking.

The question before engineers is not what technology is in principle capable of, but what actually happens in society.

Will business decision makers aim to increase AI vision recognition and integration into human society to create truly robotic drivers? Or will they muscle their way through legislatures and other institutions to do the cheaper and more profitable route expropriating the roadways from the public?

Which do you think is closer to the future of this technology in its targeted regions of deployment? And why?

just because their motives may include evil

doesn't mean the cars won't be safer.

Or will they muscle their

Or will they muscle their way through legislatures and other institutions to do the cheaper and more profitable route expropriating the roadways from the public

I don't know what they'd do, but I'm not convinced that way's more profitable. Why do you think so?

Edit: for instance, your approach would require them to lobby in every jurisdiction in which they'd want to deploy, thus multiplying those costs, whereas the other option involves a one time upfront cost that solves the problem for every locale.

re "more profitable"

Or will they muscle their way through legislatures and other institutions to do the cheaper and more profitable route expropriating the roadways from the public

I don't know what they'd do, but I'm not convinced that way's more profitable. Why do you think so?

It is impossible for an AI to understand the social meaning of a dynamic visual scene, yet such understanding is often required for the act of driving.

The only way to spare a robotic car from the requirement to understand human society as a whole is to exclude social signalling from having any significant role in traffic -- to replace a social terrain of driving with a simplified, more machine-friendly terrain.

It is impossible for an AI

It is impossible for an AI to understand the social meaning of a dynamic visual scene, yet such understanding is often required for the act of driving.

Is social understanding truly required? The rules of the road are explicitly codified. Your speed and positioning would seem to be a function of the speed and position of co-moving objects around you bounded by the top speed given by the rules. The rules explicitly tell you what to do given obstructions in your path. I don't see where cues enter into this equation.

Can you name a specific example of a social cue that's required for driving?

There was already an example

There was already an example given of choosing to careen into a mob of emus instead of a crowd of school children. Did you need another example?

Yes, where is the proof

Yes, where is the proof that's a viable scenario given the rules of the road are properly followed? Otherwise that scenario is just a false dilemma.

re proof

given the rules of the road are properly followed?

Why should this be a "given" seeing as how they usually aren't in real life?

Because an autonomous car

Because an autonomous car will follow the rules, whatever they are. It's a given.

And the rules, presumably, are defined such that adherence entails safety. Ensuring the rules entail safety isn't adapting the law to suit autonomous vehicles as you've implied. Safety was always the intent.

The speed limit is a function of stopping distance, which is a function of speed and the dynamic friction coefficient of the surface on which you're driving (for friction-limited braking, mass cancels out). The required reaction time to stop, where a previously invisible object could suddenly dart into the car's path, is a function of the width of the road minus the width of the car. Thus, speed is a function of stopping distance and minimum reaction time.

So emus or children is a false choice, because even absent safe road rules, you could measure and derive all of the above variables to produce a safe speed at any given time such that emus or children never even arises [1]. None of this requires "social cues".

[1] Measuring the dynamic coefficient of friction is the most difficult -- at worst, you could train to recognize surface types, at best you can measure on the fly.
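
To make the shape of that calculation concrete, here is a minimal sketch; the friction, latency, and sight-distance numbers are made up for illustration and are not from any real system:

  import math

  G = 9.81  # gravitational acceleration, m/s^2

  def safe_speed(sight_distance_m, friction_coeff, reaction_time_s):
      # Largest speed v such that reaction distance plus braking distance fits
      # inside the available sight distance:
      #     v * t_r + v^2 / (2 * mu * g) <= d
      # i.e. the positive root of v^2/(2*mu*g) + v*t_r - d = 0.
      a = 1.0 / (2.0 * friction_coeff * G)
      b = reaction_time_s
      c = -sight_distance_m
      return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

  # Example: 40 m of clear sight line, dry asphalt (mu ~ 0.7), 0.2 s latency
  v = safe_speed(40.0, 0.7, 0.2)
  print(f"safe speed ~ {v:.1f} m/s ({v * 3.6:.0f} km/h)")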

you've conceded (re rules of the road)

So we agree that Google et al. do not propose AI drivers but, instead, a massive shift in the mode of transportation that involves limiting public, human use of the road system.

I don't see the concession

I don't see the concession you're seeing, sorry. Perhaps you can point out specifically what line you think implies a concession.

go to India

Can you name a specific example of a social cue that's required for driving?

The interpretation of human gestures and glances in a complex context.

So what? So what? So what?

People aren't aware how stupid current software is, and how really irresponsible it would be to put current technology in control of a vehicle that has to navigate the whole world safely, carrying passengers and endangering everyone outside the vehicle.

Emphasizing that machines will eventually be capable of assuring safety at a dangerous task in an unrestricted environment is unbelievably irresponsible and immature.

Yes, eventually could be 200 years from now, but corporations are already building machines that will, for the foreseeable future, not make it to a human level of competence in unusual (but conceivable) circumstances, and the egos of everyone involved mean that they WILL find gullible politicians SOMEWHERE in the world to allow these machines, and they WILL be out on the road very soon, not 200 years from now.

When that happens, people like you... well I'll stop here. This isn't the forum for this discussion.

People aren't aware how

People aren't aware how smart current software is, and how irresponsible it is to put irrational humans in control of a vehicle that has to navigate the whole world safely, carrying passengers and endangering everyone outside the vehicle.

Emphasizing that humans will eventually be capable of assuring safety at a dangerous task in an unrestricted environment is unbelievably irresponsible and immature.

I've seen two people killed by cars in my lifetime; the "thump" will never leave your head.

I saw a toddler run over and killed when I was 5

I saw his mother scream and cry, and even I know that your argument is bullshit!

And that is what worries me. You didn't even think about my argument. Anyway, I won't discuss this with you further.

Your argument is completely

Your argument is completely bunk and applies to humans as well as machines.

I agree, there is no point in discussing this with you further either; I'm tired of arguing with Luddites.

No, it doesn't apply to humans

Humans aren't ideal but they're the best available.

We have licensing to try to weed out the incompetent ones.

In situations that require general knowledge and competence such as my emu example:
a) humans will figure out what's going on and what to do some of the time.
b) Forseeable generations of machines will not have the knowledge and perception, period.

And calling me a Luddite is not an argument, it's more hand-waving in lieu of an argument. It's also so untrue that I'm tempted to call it libel.

Safety and superiority is

Safety and superiority is eventually assured. ... You just have to extrapolate

I don't see evidence that technology necessarily continues to improve quality delivered to the consumer. Corporations are in the business of getting customers to accept less; that, and finding someone else to blame when things go wrong. That's not a conspiracy theory, it's just realistic. Corporations have a profit motive instead of human motivations — and, speaking of AIs and anti-social behavior, there's no obvious reason a corporation couldn't end up using an AI to make decisions, and the AI then has to be given the motivations of the corporation rather than the motivations of a human being; anything else would be irresponsible to the stock holders. Just as there's no necessity of a Star Trek-style Prime Directive when we discover alien life, there's also no necessity of Asimov-style Three Laws of Robotics when we create AI.

There are a lot of questions to ask about a self driving car.

Is it capable of noticing when the road looks unusual?

A road could have oil on it, it could have ice, it could be crumbling, it could be sinking...

Is it capable of telling a hard object from a soft one? That could be important in an unavoidable accident.

Does it do physics modeling? Can it project some common sense idea of what will happen in an accident? Can it judge the safety of going over one embankment vs. another?

Can it tell the value of targets in an accident? What is human? How many people? What is alive, and how important is it? Does it recognize a sign that means a truck is filled with gasoline? Or filled with something flammable? Or something hard? Or something soft?

What does it do when road conditions become unpredictable? Does it have to simply stop if there has been a road-slide or earthquake because it won't be able to visually determine the safety of changed roads?

To decide that intelligence isn't important is to unilaterally decide that the whole world has to put up with a lowered level of safety and lowered levels of certain kinds of competence.

It means that you've unilaterally decided that after an earthquake it's ok for all of the machine controlled vehicles on a road to stop dead and wait for rescue by a human driver.

placement?

Um. Did you mean this to be a reply to a different comment? (Since the particular comment of mine you replied to isn't about cars, and the things I have said elsewhere about cars are substantially compatible with yours.)

Is it capable of noticing

Is it capable of noticing when the road looks unusual?
A road could have oil on it, it could have ice, it could be crumbling, it could be sinking...
Is it capable of telling a hard object from a soft one? That could be important in an unavoidable accident. [...]

The relevant question isn't whether machines can handle these circumstances better than a good human driver; the question is what the machine's aggregate failure rate is relative to humans in these circumstances. Because we don't really have a gauge for what constitutes a "good human driver" relative to an "average human driver".

Humans are so bad on average, it's actually feasible to literally be orders of magnitude better via automation.

It means that you've unilaterally decided that after an earthquake it's ok for all of the machine controlled vehicles on a road to stop dead and wait for rescue by a human driver.

Certainly every kind of automation has some tradeoffs. You've been decrying the irresponsibility of accepting self-driving cars due to some long list of safety concerns. If the above were truly a requirement for absolute safety of self-driving cars (0 injuries or fatalities), and in typical circumstances they virtually eliminated traffic jams and extended productive and leisure time, do you really think that tradeoff wouldn't be worth it on the whole?

Prove it

"Humans are so bad on average, it's actually feasible to literally be orders of magnitude better via automation."

At driving? At driving? Really? I doubt that. Citation needed.

Government

Government stats:

  • Fatalities per 100 million miles = ~33,000 (~330 per 1 million miles)
  • Injuries per 100 million miles = ~300,000 (~3,000 per 1 million miles)

Note these are conservative lower bounds, which include just injuries and fatalities, not all collisions and events that go unreported. Driverless cars:

  • Fatalities per 1 million miles = 0
  • Injuries per 1 million miles = 0

We have a generous 3 orders of magnitude difference. So for you to be correct that the driverless car data set is so unrepresentative of real driving, despite being gathered in real traffic on general roads, its error bars must be more than 3 orders of magnitude wide. Is this your position?

Sophistry

It is clear that if there were a single fatality or even a newspaper worthy accident, then:
1) it could mean a lawsuit or criminal charges since the law has no place for a machine controlling a vehicle on public roads
2) endless bad publicity
3) a larger loss of public trust and goodwill for Google than any it has ever faced.
4) either of those would mean the end of the program, at a huge loss for the company and the careers of everyone involved

A fatality is a legal impossibility. A fatality is a public relations impossibility. A fatality is a career impossibility for every manager involved and possibly every engineer.

THEREFORE everyone who isn't a total idiot knows that every test of Google's autonomous car (or any other similar company) has as its primary requirement that there be exactly zero probability of fatalities or bad public accidents.

Therefore every intelligent person knows that the safety statistics for these tests are inherently meaningless.

There may be a case to be made for the safety of autonomous vehicles, but this sort of sophistry has no place in it.

It worries me that I'm seeing such dishonest arguments, of the sort that politicians and marketers use to bamboozle the ignorant and gullible.

I hope you don't work for the marketing department of Google, because if you do, you're missing the fact that you can't persuade an intelligent person with sophistry that insults his intelligence.

While I am open to meaningful arguments, this isn't one.

Hmm

A fatality is a legal impossibility. A fatality is a public relations impossibility. A fatality is a career impossibility for every manager involved and possibly every engineer.

How do you explain the adoption of air bags? They occasionally kill people who wouldn't have died, but are safer on the whole.

Argument by non sequitur!

Nothing about airbags has any bearing on the legal and public relations situation faced by Google testing autonomous vehicles.

More absolute sophistry.

Nonsense!

What irony that your last name is Scholar! Your mother was a hamster!

I expect intelligent argument, not trolling, on LtU

Sigh.

Parody

I was trying to hint that you might want to tone it down.

Ok, if I'm to take your analogy between

autonomous vehicles and airbags seriously, then you're arguing by assuming that autonomous vehicles are primarily a safety measure. I.e., you're assuming the opposite of the problem a priori, assuming your way to a rhetorical victory.

If I were to take you seriously, then we can ask why humans should be allowed to drive. Except that intelligence-insulting argument has already been made in this thread, buttressed by direct insults.

Uhm, so your answer to the fact that Google cannot possibly test the safety of their machines by allowing even the slightest risk of fatality, and to the implication that there cannot possibly be meaningful safety statistics, is to assume the opposite: that the statistics are perfectly meaningful, and therefore that these machines should be considered a safety feature.

Why is it odd to call you a sophist again?

How about air...planes then?

I understood you to be arguing that this new technology can't be adopted because it will cause fatalities. John has informed me that I've picked the wrong counterexample, so how about airplanes? Those kill lots of people from time to time, but are safer than the alternative, I hear.

Why is it odd to call you a sophist again?

It's just bad manners. The name calling doesn't bother me except that it tends to drive out civil discourse, which we should try to preserve in our little corner of the internet.

Cricetinae

What have you got against hamsters? :-P

(Airbags are not necessarily a good example to make your case, btw.)

I'll bite

Airbags are not necessarily a good example to make your case, btw.

Why not? Not enough fatalities caused by airbags?

Lies, damn lies, and statistics

I simply don't trust the claimed statistics on the supposed benefits of airbags. When gathering statistics, there are those who ask the wrong question because they're unscrupulous, and those who ask the wrong question because they don't realize how difficult it is not to ask the wrong question. Those two cases between them don't cover everything, but they do cover a lot of territory. I don't think the right questions were ever asked about airbags, which doesn't mean airbags are necessarily a net loss... but it doesn't make them a nice clearcut example, either.

Nothing about airbags has

Nothing about airbags has any bearing on the legal and public relations situation faced by Google testing autonomous vehicles.

Air bags are not a good example, but antilock brakes are. The stopping distance on some surfaces is longer, which means some accidents could have been avoided if antilock brakes were not standard.

Electronic systems don't have to be perfect, they just have to be better on average.

1) it could mean a lawsuit

1) it could mean a lawsuit or criminal charges since the law has no place for a machine controlling a vehicle on public roads

Except some states have already passed legislation permitting autonomous vehicles, so you're wrong. Driving is a privilege which can be earned by demonstrating ability to adhere to the rules of the road. This is a bar a machine can pass.

2) endless bad publicity
3) a larger loss of public trust and goodwill for Google than any it has ever faced.

These are the same point, and are pure conjecture. Car manufacturers that still exist have knowingly covered up manufacturing defects that have cost plenty of lives. They're still here, and they still turn a profit. None of these scandals have bankrupted the company or eroded public confidence in any disastrous way.

4) either of those would mean the end of the program, at a huge loss for the company and the careers of everyone involved

I see no reason to accept this conclusion based on any evidence or arguments you've presented so far. You simply have a conjecture that, if even a single fatality occurs, public confidence will make the cost of this whole enterprise not worth it in the end. Except you don't really know how much profit Google stands to gain from this endeavour, and you don't know the liabilities involved, so you cannot possibly estimate the result either way.

The only meaningful comparison would be the collision statistics on the exact same roads the automated cars have driven. Guess what I can guarantee you: that collision rate for human drivers is not zero. You can claim sophistry all you like, the numbers don't lie.

Therefore every intelligent person knows that the safety statistics for these tests are inherently meaningless.

Right, so you basically just have a suspicion of nefarious deeds that you are somehow elevating to the level of evidence. Instead of being charitable and provisionally accepting that Google researchers actually want to produce a safe and useful device to increase mobility for people who aren't so mobile, like one of the very researchers under discussion who is legally blind and so can't drive, you think a conspiracy is a more likely explanation of the evidence.

And you are calling your opponents in this disagreement dishonest?

Quick sophistry count

I have work I have to get to. I'm torn, because the level of sophistry in your arguments is so high that I can't let them sit, but it's also so high that you don't seem worth conversing with.

"Right, so you basically just have a supicion of nefarious deeds that you are somehow elevating to the level of evidence."

For instance, this. What the hell is this?

I actually said that the liability and public relations aspects of introducing robots as drivers meant that Google is not at a stage where it can take the slightest risk of killing people on public roads with a technology that has never been proven in the slightest.

How anyone can turn such a statement of the obvious situation into "nefarious deeds" I can't imagine - perhaps a psychiatrist could.

Trying to invent a robot is nothing at all like being a car manufacturer, in that the public has already understood and agreed to the trade offs involved in manufacturing cars. No one has demonstrated the safety of robot drivers, so you don't get a pass for doing something harmless and useful if you make a robot driver and it happens to kill people. It wouldn't be a mere mistake; it would be an indictment of an unproven technology.

I actually said that the

I actually said that the liability and public relations aspects of introducing robots as drivers meant that Google is not at a stage where it can take the slightest risk of killing people on public roads with a technology that has never been proven in the slightest.

No, you actually said the above, and then inferred that therefore any evidence of drive automation must thus be meaningless, and anyone who says otherwise is then an idiot and/or a liar.

Only this conclusion doesn't follow, because it neglects the possibility that you could just be wrong about the possibility of low risk driving automation technology already existing in a testable form. And instead of considering the possibility that the evidence might indicate you are wrong, you created a narrative to explain away your cognitive dissonance, positing instead that the researchers are simply dishonest in how they've presented this data by claiming they've completed a million autonomous miles with no accidents. If this isn't "nefarious", I don't know what is.

Trying to invent a robot is nothing at all like being a car manufacturer, in that the public has already understood and agreed to the trade offs involved in manufacturing cars.

Car manufacturers continuously introduce new technology that people haven't understood or accepted. We have cars that park themselves, and we have cars that can drive themselves by staying within lanes and keeping a proper distance from other vehicles. And this is technology that's already been available for years. So I disagree: autonomous driving is exactly like being a car manufacturer, because this is simply a progression of the technologies car manufacturers have already deployed, to say nothing of what they currently have in development.

from the beast's belly

The San Francisco Chronicle's front page Sunday March 22 had an OK article that at least begins to draw down the car hype. It also describes Google's urban focus and points out that this is a shift in mode, not an AI driver. It points out evidence that cars must be real-time networked. (And, of course, they will be under central control through automated software updates, if nothing else.)

(Reports of the recent "cross country" trip also showed some new savvy, pointing out that it was only hours after the trip started that a human driver had to take over due to some ordinary construction work.)

The social significance of articles like these is that the major investor community is starting to learn to see past the hype and understand what is really being talked about.

Why can't many "typical programmers," like those on LtU, do the same?

I take it you mean this

I take it you mean this SF article? Hype is usually a little premature, but even that article doesn't really contradict anything that was said in this thread, so I'm not sure what hype you think is being preached here.

Re: evidence that cars must be real-time networked, they say that "full efficiency" is achievable via networking, which doesn't entail they must be real-time networked. Due to the dangers of the increased attack surface, we could just as accept sub-optimal-but-still-better-than-human efficiency.

Given these are still research projects, the control systems they use for managing speed and distance will be tweaked, probably enough to address the trailing car problem without requiring networking. I'd be very, very surprised if this couldn't be done. Networking is just the most obvious solution, if you don't consider the security implications.

Check back in 5 years, and we'll see what happened!

re: "Given these are still research projects, "

I think by the time you start legislating around them, you are looking at industrialism, not "research".

"Check back in 5 years, and we'll see what happened!"

What is with your passivity and lack of concern for larger social implications? Is this blog not concerned with engineering? Don't be a "tool", if you know that old slang.

I've yet to see an app

that can reliably and instantly decode street signs. I've yet to see a computer decompose a scene into "road and not road," let alone do this instantly. I've yet to see a computer watch a movie of cars and simulate what could happen next, either as good driving or accidents.

And yet we are, shockingly, reading endless comments that expect us to follow them in the weird assumption that it is proven that computers are better drivers than human beings.

This is such an inversion of reality that it IS, to use one mocking phrase, 'nefarious', but it isn't coming (yet?) from Google's marketing department; it's coming from random people in a comment section. It's as if we have a pro-machine-driver ideology already clouding men's minds before we even have machine drivers.

In an atmosphere like this where people are demanding an outcome, reality be damned, it's pretty scary that an industry is ready to grow into this crazed demand.

In short

there is absolutely no evidence that computers can, in real time, understand a driving scene and project an outcome, let alone plan.

Put up or shut up. Demonstrate that the basic capabilities needed to safely understand the road exist.

Just getting a car to follow a road based on some predetermined map and some distance algorithms hardly proves the basic ability to understand a driving situation and plan a sane response.

Chill, seriously.

Put up or shut up.

I'm not going to take a side in this argument, but I'd like to point out that you're repeatedly violating standards of dispassionate discourse and insulting and demeaning the people you are directly interacting with here. Calling your opponents' arguments "sophistry" is an ad hominem, assumes bad faith, and is not a counterargument.

What is with your passivity

What is with your passivity and lack of concern for larger social implications?

Don't mistake a recognition that a consensus won't be reached for passivity. Only time will prove who on this thread was more accurate.

no. and yes.

you are both correct, and both wrong, I dare say, about the similarity / difference of current car features vs. autonomous vehicles.

In other words: it is a subjective perception (marketing, ignorance) thing. The fact that Toyota hasn't been taken out in the back parking lot and put down like Old Yeller indicates how frankly sad the entire situation is.

I believe the key subjective lame ignorant issue in perception is between "do I have my hands on the wheel" and "I do not have my hands on the wheel", simple as that. The wheel could even technologically be like that of Maggie Simpson.

(By the way, regarding airplanes: While they are supposedly safer than some alternatives, I personally find the whole trend towards utterly automated hands-off everything to be utterly wrong, evil, disturbing, and laced with (hopefully unintended) consequences. That's one of the things I believe UX folks know from e.g. the SFO crash, if not others. If I were as rich as Branson my own airline company would fly only 747s from before the 400 model.)

A million autonomous miles _is_ meaningless even on the face of it

unless you also specify how many experiments there were where a human had to take the wheel to avert an accident, and how routes and times were selected.

No one believes that they did scientific sampling of routes and times in a way that was designed to simulate typical driving and typical accident rates, let alone that chooses the sort of routes most likely to cause an accident.

Are they really claiming that their machines are always superior to humans? You know, people's reactions assuming the superiority of computers remind me that the public has made that mistake since the invention of the computer, even when computers were made of relays and later vacuum tubes, with the help of IBM's marketing department.

Are they really claiming, 1950's pop nonsense style, that their computer drivers are infallible, maybe because they lack human weaknesses of annoyance, fatigue, emotion and use of alcohol?

Such claims should be made by a deep voiced announcer and show men in conservative business suits of the sort IBM's policies used to mandate.

Skepticism is always warranted

Skepticism is always warranted, particularly when the data set isn't publicly available, but outright dismissal is not. Neither of us is an expert in this specific field; we only argue from our own ignorance. Yet experts in this field project the availability of such vehicles in 5-10 years.

You dismiss this as absurd. By your own admission, liabilities resulting from any fatalities and injuries align their incentives with not exaggerating their own claims. So what possible reason could I have to believe your ignorance over mine, particularly since mine aligns with expert claims?

People ASSUME the

People ASSUME the superiority of machines, even when no such machine is yet possible.

Sorry, but this is just plain wrong. Self-driving cars have driven literally millions of miles with zero accidents and zero traffic violations due to the machine. The one accident that did occur was when the human took control at the end of a journey.

The human failure rate over this distance is laughable by comparison. There's no assumption here, there's actual data proving that machine drivers are better.

EVEN THOUGH NO MACHINES EXIST YET THAT ARE EVEN PASSABLE DRIVERS IN GENERAL CIRCUMSTANCES.

The only circumstances where people are still better is in cold weather, because ice, salt and snow buildup block the sensors. Then again, these circumstances haven't been as much of a focus. Like any problem, you start with the subset that you can address (relatively) easily, and build from there.

Solving the wrong problem

I'm happy to believe that automated cars will be less dangerous on average, under average conditions, compared to average humans. Yet, I still expect the whole development coming to a halt as soon as the first lethal accident is caused by a software bug in such a car, and a court rules that the manufacturer is at fault and has to pay 300 million in compensation.

Unless, of course, manufacturers are successful with lobbying lawmakers to shift the responsibility for the risk and its potential costs onto the general public -- a strategy that the nuclear industry was extremely successful with.

In any case, I think that self-driving cars actually contribute zero to solving the real problem with mobility. In fact, they will likely make it worse -- by prolonging the life cycle of an outdated technology (cars) that flies in the face of sustainability and responsible use of resources. Automation is just another techno-religious distraction in lieu of necessary cultural changes.

Certain specifics of liability are uncertain

Certain specifics of liability are uncertain, but I think the broad strokes are largely answered by precedent. If you fail to stop due to a manufacturer flaw in the brakes, they are responsible. If you fail to stop because you didn't bring your vehicle in for proper maintenance, you are responsible. If you modify your vehicle beyond manufacturer specifications and it does something unexpected, you are responsible.

I doubt very much that drive automation will stop wholesale on the first fatality any more than the use of software in mission-critical medical or avionics equipment stopped on the first fatality. I think the manufacturers understand the liabilities pretty well. They have decades of experience with such issues.

In any case, I think that self-driving cars actually contribute zero to solving the real problem with mobility. In fact, they will likely make it worse -- by prolonging the life cycle of an outdated technology (cars) that flies in the face of sustainability and responsible use of resources.

I personally think the answer is a convergence of information technologies that are just now coming to fruition: automated cars with Uber-like services with Zipcar-like shared car parks/pools interspersed throughout a region. This permits other efficient options than the high cost and high convenience of car ownership, the moderate cost but low convenience of public transit, or the high cost and moderate convenience of taxis.

Frankly, I suspect that redirecting the costs from public transit into subsidizing this sort of solution, when it's ready, would yield comparable costs to transit (at least here in Canada with our sprawl problems where transit is not so cheap, and very inconvenient).

The real market for self

The real market for self-driving cars is cities like Beijing that lack room for any more road infrastructure and have serious mobility problems. If you took humans out of the equation, you could increase capacity by 300-400%, and that isn't small. Not to mention parking infrastructure...imagine a fleet of self-driving taxis that optimize road bandwidth and require much less parking.

Surely subways help (we are building those), and bikes help (we have a LOT of those too, at densities much higher than Europe), but they aren't the only answer; this can really be transformative. If you live in a developed country with nice traffic and low pollution, ya, I can see how this doesn't seem necessary. And not all countries are litigious twits about it (especially China, where the accident rate is already very high and the government extremely autocratic).

But once one society gains a competitive advantage with new technology, the others are forced to adapt and use it to be more efficient even if they don't really want it. Look at Uber in Europe right now. The demand is high, the resistance is high, so it goes back and forth, but the end equilibrium is kind of obvious; it is too useful to ignore, taxi industry be damned.

Big city driving

... where you can't even get away with following the laws because that would involve unacceptably blocking totally congested traffic illegally weaving its way around illegally parked cars. Where there are pedestrians everywhere they shouldn't be... drunks and drug addicts and suicidal people wandering into traffic... I've even seen a mother push a small child out into traffic, then follow when the cars spared her little girl.

This is where there is no way that a machine could handle the environment without generalized intelligence.

Either that or everyone just has to get used to dodging moronic robot cars. Remember that if a hobo gets his blood on your robot, you can throw a stack of dollar bills at him as you drive away.

Uber

Uber finally got banned in Germany, and they have withdrawn from the market. I doubt they'll get a second chance.

It doesn't matter what the

It doesn't matter what the taxi lobby wants, if the people want ride sharing, they'll eventually get ride sharing. You can only protect a legacy industry for so long. Eventually, the dinosaurs really do die.

We've got a house cat

We've got a house cat living with us atm, who likes to sit on our kitchen windowsill and watch the dinosaurs eating sunflower seeds from our feeder.

no cat ever got fired

for buying IBM.

Who is the bad guy?

I'm not sure whether the "taxi lobby" is the bad guy here, or maybe rather an IT lobby that is trying to bully through an exploitative business model that is based on undermining all social and cultural accomplishments under the propagandistic label of "sharing economy".

no bad guys

There are no bad guys here. The taxi drivers and companies want to protect their interests, that is only natural. The consumers want a better experience, that is only natural.

Have you tried a ride sharing service before? I have, and they are much better than the taxis I could take: the cars are better, the drivers are more professional, and it's where I want it when I need it. This isn't Uber (which just started up in China), but rather didi zhuang che.

Now, the taxi drivers can adapt; e.g. through taxi hailing apps. I use that (didi kuai che, the same company doing the ride sharing service), and I benefit from that, but they wouldn't have been there without competition and pressure to innovate.

Dishonest business

I have used it, in the US, though only as part of a group, since I wouldn't do it myself. It was a mixed experience, neither better nor worse than taxis. But that is beside the point.

When you say "ride sharing" and "professional drivers", that already demonstrates how fishy and dishonest the business model is. Because both cannot be true at the same time! Either it is "sharing", as they try to pretend, or it is brokering a "professional" service, like it turns out to be in reality -- except far below established standards regarding income and social security for the driver, and insurance and consumer protection for the client. Some drawbacks notwithstanding, these standards exist for a reason, and their introduction and enforcement generally is a cultural achievement. Allowing a race to the bottom is not in society's best long-term interest. (At least where I live, I cannot judge China.)

Ride sharing is obviously

Ride sharing is obviously just a hired car, no one has any pretense otherwise.

The standards exist mainly to preserve monopolies and keep new entrants out; there is nothing in them about driver well-being (drivers are often immigrants at the bottom of the food chain, without any health insurance before the ACA, at least in the States). But again, the point is moot: taxis are a bad deal and are often unfriendly, the better alternatives will eventually win by simple economics.

the better alternatives

the better alternatives will eventually win by simple economics.

[citation needed]

History of the industrial

History of the industrial revolution.

And of course, the structure of scientific revolutions.

The zeitgeist falls eventually.

"history of the industrial revolution"

You should read Capital.

Bottled water, Betamax, $4

Bottled water, Betamax, $4 coffee, designer clothes, Nike shoes, jewelry, diamonds, and a raft of faddish consumer items would all tend to disagree with you here.

This comment can almost be

This comment can almost be read to the tune of "we didn't start the fire".

Very misleading

"Sorry, but this is just plain wrong. Self-driving cars have driven literally millions of miles with zero accidents and zero traffic violations due to the machine"

That's only because they know that their systems have extremely limited capabilities, so they make sure that they're only driven in controlled circumstances.

It's a bit like saying that a baby in a walker proves that the baby could ski all day without accidents.

That's only because they

That's only because they know that their systems have extremely limited capabilities, so they make sure that they're only driven in controlled circumstances.

Yes, extremely limited circumstances like the hair pin turns on Lombard street, or general traffic conditions on the Golden Gate bridge. It doesn't seem like you're familiar with the testing that's been done.

Humans aren't safe drivers either

The total deaths from driving accidents amount to 1.24 million per year (WHO 2013). That's about the same as the number of deaths from the 2003-2011 war in Iraq.

No Goals

I disagree with this view. AI will never have goals of its own, only the goals we set for it. If we don't set goals, it will just do nothing.

Evidence?

And what do you base that assumption on? Natural intelligence has no real "goal" either, and still has a pretty impressive history of making earth a shitty place.

Natural Intelligence

NI is a product of evolution, which is a constraint satisfaction problem set by the fundamental rules of physics. The goals of NI are implicit because it evolved, hence its goals are the natural emergent properties of the laws of physics. Anything we create (AI) by definition only has the goals we set; it is, after all, Artificial Intelligence. To have implicit goals AI would need to evolve naturally from the laws of physics, which would make it NI. Therefore AI can only have explicitly set goals. Even if we create a virtual environment in which to artificially evolve AI, the initial conditions and optimisation used in the artificial evolution amount to a human-set goal.

Yes

I forgot. Mankind always had complete control over the technologies it created. Especially the complex ones. It never introduced unintended side effects. There never were any accidents. And after the mystical singularity, which is loss of control by definition, of course anything that evolves from there would still remain strictly bound by the initial "goal" we set, transitively, through all generations, forever. Because it is bound by our logic and programming, we are the perfect engineers, and our logic and programming infallible. I mean, doesn't quite work for device drivers, but surely will work for AI.

I'm so glad that our field is free of hubris.

Bugs and Straw Men

Well, a bug in fulfilling a human-specified goal could have unintended consequences, but it is unlikely to be 'destroy all humans'. It is a straw man argument to claim that because it is likely to have bugs trying to fulfil the specified goals, some disaster scenario we imagine must happen.

If we imagine a strong AI that exists to play chess, and its goal is to win, bugs may make it lose games it should have won, but they will never make it connect to the internet or do anything else apart from play chess.

Natural artificiality

I think your argument hinges on assuming that the term "artificial" refers to some distinction in the real world. If humans are a product of nature and AI is a product of humans, then AI is a product of nature. Nature can build AI by building humans.

As a general reply to others' points about the Singularity: Human society is already a distributed, unfathomably intelligent global system that partly cultivates and partly exploits its available resources (including humans). There's no reason we would give up thinking about hard issues just because we invented some technology to help us deal with them. On the contrary, better solutions than ever would be within our grasp, and the race would only become more intense. So I think "the Singularity" will be a term like "modernism" that sounds fresh, new, and invincible until it hits a post-.

misunderstanding

No, that is not the argument. The argument is only self-evolved things can have implicit goals.

If a bunch of crystalline silicon, influenced by nothing but the laws of physics in the universe, spontaneously organised itself into self-replicating circuits, then they might have their own implicit goals.

However anything we build as humans we choose the goals, whether explicitly, or as the emergent properties of rules which we choose.

We always choose the rules, so we should be careful about the rules we choose, but it will always be under our control. Now if we choose to create AI with destructive goals, it will be us to blame.

It will be us to blame.

I assume everybody agrees with that statement, anyway.

I think people are probably somewhat in agreement, and are really getting tripped up by words (and maybe by the tone of other threads on this page).

We know we'll set the goals, but we also know that we are not able to know that we've set goals completely safely, otherwise we'd not have the Ariane or any # of other eff-ups.

Furthermore, if you are making the AI for the military, the goals you make will be about killing humans, one way or another. So that's obviously not reassuring at all.

Goals and blame

"It will be us to blame" is simplistic. We do have responsibility as long as we're directly building and interacting with a particular system, but that's like a parent's responsibility for a child's actions.

Not everything we build is intended to have specific goals. Some especially successful things are rather general-purpose, such as money. Their success is thanks to natural selection: They're more adaptive systems, adapting to various situations which happen to have humans in them.

It doesn't matter which humans invented money or what their goals were. Not anymore.

Only Specific Goals

My argument is you cannot build an AI without giving it specific goals.

You should really talk to

You should really talk to some ML researchers. I used to have lunch with one every day (DNN training on GPUs, a speech recognition guy). They never gave their models specific goals...actually, it turns out you can't do much with specific goals. They are worthless.

"a speech recognition guy"

How did he know he was building speech recognition? Maybe his DNN was into checkers.

Training sets and parameter

Training sets and parameter twerking. It is amazing how much of it is just tweak and pray.

That is not AI

That's just an associative memory. Even deep-learning is just a multi-level associative memory. Categorisation is an emergent property of having multi-level associative memories. The big breakthrough for deep-learning was training one level at a time, other than that there is nothing new since Kohonen, apart from the dramatic increase in computing power.

We are basically building

We are basically building (or trying to build) mini brains. The logicians lost the AI mantle long ago, ML is doing much better than hand coded rule based systems in realizing "intelligence."

But if you want to call the stuff that doesn't work "AI" and the stuff that does work something else, then I agree with you based on your arbitrary selection of terminology.

Fantasy

Recognisers (associative memories), even deep-learning ones, do not think, and do not have goals; they simply probabilistically recall data associated with an input vector. No matter how sophisticated we make them they will never have goals of their own. The idea that they do seems to come from a lack of understanding of what they actually are, and what they can do. People working in the field like to paint pretty word pictures (probably to get grant and investment money), but look at the mathematics, and the code, and it's a different story.

Our goals do not originate in the 'thinking' part of the brain, but are pre-encoded in the lower brain functions and endocrine system. We want shelter because we are cold. We sense cold because the beings that did not sense cold died out in evolutionary selection.

If you build an AI and put it in a cold room, it will freeze, because it has no evolutionary built-in goal to keep warm.

If you use genetic programming to simulate evolution, it's just a simulation where we set the selection function. We are in effect just asking for survival of the AIs that are optimal for our chosen selection function. The AI's goal is the selection function.

As for logic, it took me a long time and a lot of education to understand maths and logic. It seems logic is a higher form of intellect than choosing between a chocolate or blueberry muffin. As such computers doing logic is great. Why build a computer with a rat brain and spend a huge amount of effort teaching it maths and logic just to get something that is not as good as logic systems we already have. Proof-assistants can already help people find better proofs, and find proofs no human has yet found. If this isn't intelligent, then we obviously have different definitions of intelligent.

Amusingly, people seem to want artificial stupidity (capable of making the same mistakes humans do) more than they want artificial intelligence.

What associative memories are very good at is pattern recognition and classification. I see them in very much the same light as Kohonen: they are just another algorithm to use in a programmer's toolbox. There is nothing magical or mystical about them. People take them and solve problems previously thought hard, and then claim some kind of magic, but it's not. The algorithms make sense if you understand them, just like quicksort does (they find the local minimum of a difference function).

The DeepMind video game demos, where an 'AI' learns to play classic video games, are based on a human-defined goal of entropy minimisation. If its input is the screen pixels, and the output is the game controller, it's not going to connect to the internet and try to take over the world. It's not really any more sophisticated than high-frequency trading 'AIs' trying to maximise profit. If we let such things crash the stock market it's our own fault.

I guess if you gave a high-frequency trading AI the ability to send emails, it might start creating all sorts of interesting and convincing spam, and could potentially take over the world if it convinced the right people to follow its advice. But in all that it would still be bound by its goal to maximise profit, taking over the world would be a consequence (although possibly an unseen one) of its goals and the IO mechanisms given to it.

If we put a sufficiently strong AI in a humanoid body, then it could use all the same tools a person could in pursuit of its goal to maximise profit. That could get interesting, however we set the goal (maximise profit) and we could easily change it.

But perhaps if we could copy a brain, down to the chemical reaction level, and simulate it on a computer, we would get a copy of the original organisms goals, which although not 'evolved' goals would at least be unknown to the programmer. Maybe that's good enough?

Your argument is straight

Your argument is straight out of the 70s! I mean, when the logic vs. learning debate was settled.

Why build a computer with a rat brain and spend a huge amount of effort teaching it maths and logic just to get something that is not as good as logic systems we already have.

Because elephants (or rats) don't play chess. There are limits to logic, HARD LIMITS, which is why we will eventually move on to ML. Code will be obsolete someday.

Proof-assistants can already help people find better proofs, and find proofs no human has yet found. If this isn't intelligent, then we obviously have different definitions of intelligent.

Yep, we definitely do! Math isn't intelligence, it's math. Real intelligence is being able to make sense of the world. A proof is just a proof and nothing more. Again, go re-read Rodney Brooks' "Elephants Don't Play Chess" paper.

Our goals do not originate in the 'thinking' part of the brain, but are pre-encoded in the lower brain functions and endocrine system. We want shelter because we are cold. We sense cold because the beings that did not sense cold died out in evolutionary selection.

Ya, that's in Rodney Brooks' paper also. Actually, it's a core part of it (sense, reflex, actuate).

But perhaps if we could copy a brain, down to the chemical reaction level, and simulate it on a computer, we would get a copy of the original organisms goals, which although not 'evolved' goals would at least be unknown to the programmer. Maybe that's good enough?

We don't even need to go there. We are just overglorified pattern matchers; any pattern matcher we build that reaches or exceeds our complexity will be unambiguously considered "intelligent." Or I guess it depends if you believe in a God that made us special. If we can build ant brains today, rat brains next week, dog brains next month, why would we never achieve human brains?

We are just overglorified

We are just overglorified pattern matchers

If there's one theme that pervades the entire history of AI, it's consistent underestimation of the difficulty of the problem.

Or I guess it depends if you believe in a God that made us special.

No, it definitely doesn't depend on that.

If we can build ant brains today, rat brains next week, dog brains next month, why would we never achieve human brains?

A more interesting question is why we would do so. I don't mean to suggest there mightn't be good reasons to do so, but history suggests to me that when we do, those good reasons mostly won't be why.

Copying a brain is not artificial intelligence

We are certainly not just overglorified pattern matchers (in response to the grandparent post). This is the first part of my argument - a pattern matcher, no matter how glorified, will never develop goals. It will always just be a pattern matcher of increasing complexity. A pattern matcher with temporal feedback has interesting behaviour (sequence memory and unfolding), and we definitely do that. For example, changing gear in a car is a complex kinesthetic memory, and recalling it (via association with the desired outcome) clearly unfolds the learned muscle patterns to carry out the activity. But no amount of associative memory imparts 'will'. Nothing creates the desire to drive to the shop to get a doughnut because it is hungry.

What makes us special is that our goals are set by the laws of physics and the fundamental nature of the universe. Amino acids and self-replicating molecules are the primary form of life, and we are just sophisticated machines for their propagation. The chemistry of self-replication, and the competition for resources between different self-replicating molecules, result in the continuous improvement of evolution, which leads to us, and our evolved goals. (However unlike some people I don't see this has any impact on belief in God one way or the other. Unless you consider this to be like a 'Babel Fish': http://www.whysanity.net/monos/hikers.html ). Any attempt to replicate this will not be using the real nature of the universe for selection (which is unknowable, we can only approximate), and will be using goals and selection functions that we have chosen.

As for building ant brains, yes, we can build a simulation of an ant's brain; it's useful for understanding ants but it's not really anything new. It's neither more intelligent nor better than an ant; in fact it's a lot less mobile and has greater difficulty interacting with the real world.

Technically, copying a human brain and its current state into a machine makes it a cyborg, not an artificial intelligence. It's not really AI at all, more like giving a person an artificial heart. You don't call a person with an organ replaced by a synthetic equivalent an artificial person, so why consider the brain any more special (assuming you can copy its current state at all)? I might be tempted to consider a simulated person brought up from birth as a computer simulation an artificial person, but I think simulated person is probably a more obvious term.

How to generate goals from pattern matching

Whatever next? Predictive brains, situated agents, and the future of cognitive science
If I understand this paper correctly, it says that perception, cognition, and action can all be understood as predicting something and then adjusting based on the errors of that prediction.
If you train a predictive brain on some input it will attempt to maintain normality by seeking situations where the input looks like the examples it was trained on.

Myriads of sub goals are all generated from a single goal: minimizing the surprisal when matching the inputs to stereotypes.

(we probably agree with each other, I think this counts as a selection function)
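
A minimal sketch of that single-goal picture, assuming a toy agent that has learned a distribution over observations and simply prefers the action whose predicted observation is least surprising. Everything here (the training counts, the actions, the prediction table) is hypothetical and only illustrates how a sub-goal can fall out of the one surprisal-minimisation goal:

    import math

    # Toy "minimize surprisal" agent. All numbers are made up for illustration.
    trained_counts = {"warm room": 90, "cold room": 10}   # what training looked like
    total = sum(trained_counts.values())
    p = {obs: c / total for obs, c in trained_counts.items()}

    def surprisal(obs: str) -> float:
        return -math.log(p[obs])           # rarer observations are more surprising

    # The agent's (hypothetical) predictions of what each action leads to.
    predicted_obs = {"stay inside": "warm room", "go outside": "cold room"}

    best = min(predicted_obs, key=lambda a: surprisal(predicted_obs[a]))
    print(best)   # "stay inside": a keep-warm sub-goal emerges from the single goal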

Learning and Consciousness

Minimising surprise has been around a long time (Numenta stuff). Like minimising entropy (more easily understood as maximising future options), this is a human-selected goal. The unfolding of sub-goals from these kinds of high-level goals clearly works using hierarchical sequence associative memory.

With many possible options choosing this high level goal rests with us.

Humans have many other goals driven by the endocrine system and lower brain: tiredness, hunger, avoiding pain, avoiding cold, etc.

These other goals need to be chosen by us. We might choose "minimise harm to humans" (not a good goal, as "harm" and "humans" are not simple concepts). This is the problem with Asimov's laws: they require implicit knowledge of several high-level concepts which require significant training for an AI to recognise.

In order to provide this training I think we need to understand transcendental elements like joy and suffering. Humans and animals are educated by rewarding good behaviour, and they learn to avoid pain. How do we experience such things? How can an associative memory experience joy or pain? Until we can answer this, it won't really be AI. We need to understand consciousness, empathy, joy and suffering.

We can already create

We can already create programs we don't understand. Heck, just write the game of life and play with that! How often do you write code and something unexpected emerges? All the time? You might input a few simple goals into your AI and some unforeseen combination of them causes it to decide to wipe all humans out; it's so foreseeable that we've had movies about it forever.

it's so foreseeable that

it's so foreseeable that we've had movies about it forever.

We have even older books about it. Consider Mary Shelley's Frankenstein. Creators of life and intelligence, beware!

That book inspired me to get

That book inspired me to get into computer science when I was in college.

Vampires

We also have books about vampires.

...and

Also The Sorcerer's Apprentice comes to mind.

Straw Man

See above: program to do A plus bugs does not equal program to do B. There is a finite possibility that all the air molecules in the room will move into the same corner at the same time and you will suffocate. In the same way there may be a finite probability that bugs in your AI could lead to anti-social behaviour, but it seems about as likely to happen.

Fictional depictions tend to

Fictional depictions tend to the specific and simple; reality has a vast breadth of options. We are, through our technology, getting more and more powerful; just watch the Secret Service trying to catch up with what individuals can do with "drones". People make mistakes, people make mistakes when telling technology what to do, and the people making high-level decisions are not necessarily the most virtuous and clever amongst us. Actually, I've been thinking lately that government best serves its primary social function by being really inefficient, because its primary social function is to keep people like this out of the way. As we continue to get more powerful, and we don't get wiser (and we're in no danger of creating technology any time soon that's even as wise as us, let alone wiser), the probability of making a mistake that wipes us out just keeps going up. I don't claim it's an absolute certainty — the forecasting problem, again — but I don't buy that it's unlikely.

Exceeding Bounds

A bug in a chess AI is likely to make it play bad chess, but it is not going to make it take over the world (after all the world as far as the chess AI is concerned is the board).

A mutation in a biological

A mutation in a biological lifeform is likely to make it die early before reproducing, but it is not going to make it take over the world.

Well, we shouldn't be here, I guess. Human beings are just the result of a long string of lucky mutations that couldn't possibly happen. The creationists are right ;)

Self Replication

Humans exist as the result of an optimisation of the fundamental laws of physics. It is an optimisation, not random (the mutations are random, but the selection function is not).

This has nothing to do with the AI question, as there exists no mutation (apart from that which we chose to put there) and no selection function (apart from that which we chose to put there).

In the context of the chess program, it does not mutate, reproduce, evolve, etc. We write a chess program, it stays a chess program; even if it uses the same algorithm for machine learning as the human brain, it is never going to be more than a chess program.

Bugs are quite similar to

Bugs are quite similar to "mutations", but the point is, emergent behavior is real and here today already. We are already beyond the point of micro-managing systems, they are too complex for that. And once you have given up full control, effective behaviors that you didn't anticipate are completely possible.

Self replication and evolution missing.

I think random mutation without self-replication and evolution would still be producing variations of long-chain hydrocarbons, and nothing resembling life or intelligence.

Edit: further, bugs do not accumulate over time in existing code. Any form of cumulative mutation would be something we have chosen to put there.

Debugging and bug fixing is

Debugging and bug fixing is a form of natural selection (the bad bugs get fixed, but what about the good ones?). The point is moot however as we have already thrown replication and evolution into the mix in ML.

not quite

In ML we set the selection function for genetic techniques; it's not at all the same thing. What is the selection function for nature?

Edit: in case this was too terse, if we set the selection function we define the goals, which goes back to the original point: artificial intelligence has no goals except those that we set.

dood

The chess example isn't really relevant. The examples need to be:

* AI written by a crazy person who wants to take over / destroy all humans.

* AI written by governmental person who wants to be able to most effectively suppress / kill other humans.

now add bugs into *those*. Heck, if we're lucky the bugs will *save* us! hardy har har.

not quite

Again those AI have specific human set goals. If you create a robot with a weapon and facial recognition, bugs will mean it will shoot the wrong person. It will not however suddenly decide to take over the world, or change its goals. You might get some strange behaviour with a goal like minimise entropy, or minimise harm to the greatest number of people, but we would still set those goals.

So you will only get a computer trying to take over the world if you give it that goal, and "take over the world" is a difficult thing to define.

My argument isn't that you can design a dangerous AI application, but that your AI car will not turn into a megalomaniac.

Three Laws of Robotics

How AI might decide to take over the world starting from reasonable goals set for it by humans was one of Isaac Asimov's plots. I find it pretty far-fetched and tend to side with you more in this argument, though.

ok

i can dig it.

Doesn't mean we aren't doomed, tho. The goal of "taking over the world" isn't the only one that could make my own existence be hell on earth. There's probably a zillion narrow goals that have shown up in SF writing, or could be invented by writers of SF.

just a tool

There are plenty of other ways bad people can make people's lives miserable. Most SF misunderstands AI; I just don't think my toaster is going to want to take over the world. Perhaps the behaviour of HAL in 2001 is the best representation in SF of AI goals gone bad (in HAL's case, conflicting goals, and goals kept secret from the crew). Still, it did not want to take over the world.

Alternatively,

maybe you just need to read different SF authors.

I'm open to suggestions

Do you have any recommendations? I could critique their interpretation of AI, based on what I think is realistic.

we're already there, if we're critical and honest

Simply enforcing the rules that are already on the books (in any even remotely "civilized" nation-state) to the degree possible with technology is enough for me to see it as hell on earth. Because even if it isn't used on everybody all the time, it is huge leverage for those who can use it against, say, whoever is running against them in an election.

This isn't AI

You are talking about human nature, not AI.

Goals

I thought he was talking about what we're likely to tell AIs to do. Perhaps I misunderstood.

Other People

People can equally well tell other people to do these things, not just AIs, so I am not sure what it really has to do with AIs.

One thought

is that people may have consciences and some degree of common sense, limiting their blind obedience to stupid rules. It's not self-evident that either of these would be true of an AI; an AI might actually do what it's told to, when given the same directives. No idea whether that's what raould had in mind.

An estimate of the number of lines of code humans can produce

It is unclear what you mean by "code human civilization can maintain".

What is easier to approximate is the number of lines of code human beings can generate.

Estimates for the GDP of the United States sit at $16.8 trillion.
Estimates for the average cost of a line of code in the US sit at about $40.

So if the entire GDP of the United States was put into generating code the US could generate in the ballpark of 420 billion lines of code a year.

At perhaps 40 characters per line of code, that comes to a potential to generate 16.8 terabytes of source code a year.

Currently we do not have 100% of people employed making software, but about 3% of the population of the US making software. I suspect somewhere between 3% and 9% is the proportion of the population that actually has the skills to deal with software.

That said, the US population is about 4% of the world population, so the above numbers for 100% of the US making software are arguably in the ballpark for what we would get if all of the people in the world with the skill to make software were generating software right now.
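
A minimal back-of-envelope sketch of the arithmetic above. All inputs are the rough figures quoted in this comment (GDP, cost per line, characters per line, 3% workforce), not measured values:

    # Back-of-envelope: how much code could be generated per year.
    us_gdp = 16.8e12            # US GDP in dollars (rough estimate)
    cost_per_line = 40.0        # average cost of a line of code in dollars (rough estimate)
    chars_per_line = 40         # assumed average characters per line

    lines_per_year = us_gdp / cost_per_line            # ~4.2e11 lines/year
    bytes_per_year = lines_per_year * chars_per_line   # ~1.68e13 bytes, i.e. ~16.8 TB

    # Only a few percent of the population actually writes software; the comment
    # argues the US-at-100% figure roughly stands in for the world's programmers.
    fraction_programmers = 0.03

    print(f"lines/year at 100% of US GDP: {lines_per_year:.2e}")
    print(f"source bytes/year:            {bytes_per_year:.2e}")
    print(f"lines/year at 3% of US GDP:   {lines_per_year * fraction_programmers:.2e}")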

Unless there are other limits this suggests several things to me.

- Programming languages need strong support for separating concerns. (Otherwise, in the large, people will be tripping over each other by accident.)

- That a large number of bugs will be about human miscommunication at the interfaces between components. So programming languages should be designed to help people communicate better at the interfaces between software components. AKA there is great value in software components that cannot be used wrongly.

- That large number suggests that classes of mistakes and problems that are rare and uninteresting in small systems become classes of problems that we need to grapple with for larger systems.

re: how much "code human civilization can maintain."

I think it is not right to frame the metric in terms of code "generated" from available labor.

It seems to me there is some amount of software infrastructure that is "critical" in the sense that if enough of it stops working in an unplanned, uncontrolled way -- an irreversible social catastrophe has happened.

Critical infrastructure software has to be kept working. It has to be "reproduced" constantly, and constantly adapted to changing circumstances.

Critical infrastructure software exists in a hostile environment of changing hardware, changing requirements, and malicious attack.

A resilient political economy has to reconcile the risks of software infrastructure failure with the holistic labor costs of making that unlikely.

I think you are making an

I think you are making an unwarranted assumption in asserting that critical infrastructure software has to be "reproduced" constantly, and constantly adapted to changing circumstances.

If the software is critical enough, people will base their requirements on what the software needs to function reliably. The most common of those requirements is that the software not be connected to the network. By not connecting my anti-lock brakes to the internet, that software prevents itself from being an existential threat to society.

In a similar vein there is a lot of backwards compatibility in hardware making it trivial in many cases to run critical software on the next generation of hardware.

That said, I think it is an interesting question where we are likely to see failures that are at least as severe as a hurricane hitting a major city. I have a hard time imagining a failure that cannot be fixed by reinstalling and restarting the software in question.

That said there are two scenarios that I have seen talked about that may make an interesting part of this conversation.

- Software on a public network that protects a secret. A secret that, if it got out, would result in something horrible like World War 3 starting.

Aka an attack-and-defense scenario. In that case the budget of the defenders needs to be higher than the budget of the attackers, because attack and defense are essentially the same process (up to the point where all of the exploitable bugs are fixed). But defense has the added job of not just finding the bugs but of fixing the bugs as well. Further, defense has the challenge that the attackers are likely to find the bugs in a different order than defense will.

So short of the defenders reaching a formal proof that software is not exploitable defense has to work much harder than the attackers. Castles have a similar problem in that it is much cheaper to fire a cannon ball that will pass right through a castle than it is to build a castle wall that will keep out cannon-balls.

Unlike conventional warfare where I can give you a proof from the rules of physics that there will always be something that can penetrate your castle wall, I do not yet know of anything that says perfect software is impossible. Perfect software is simply very improbable at the moment.

Of course the best way to deal with an attack/defense situation is to figure out how to make friends. So you simply don't have to worry about attackers.

- Systems that fail in the worst possible way because they are so complex we simply can not anticipate and defend against all of the failure modes.

This is a scenario that has been studied since the Three Mile Island accident. A lot of insight has been obtained in those studies, but I don't think any clear conclusions have been discovered for dealing with those kinds of situations. There was a fascinating book written from one perspective, titled Normal Accidents.

There are only two ways I can imagine to defend against this type of situation in software.
1) Make very reliable software that does not have weird failure modes.
2) Refactor the problem so that even if huge swaths of your software fail something simple takes over.

To achieve software without weird failure modes takes a huge amount of testing and reliability work, or it requires software proofs.

Refactoring software so that huge swaths of it may fail looks like sandboxing or encryption.

Encryption of data in transit allows traffic across an unsecured medium and the traffic sent winds up being the traffic received. Completely removing the problem of securing the network in-between.

Sandboxing of software ensures that software that runs in the sandbox (assuming the sandbox is working) cannot violate the system, no matter what it does.

The great enemy of many of these approaches is the attitude that I need to get something done now, and all of this defense against the worst case scenario gets in the way of getting something done with software right now. That is they all cost too much.

So to get a solution deployed the cost needs to come down, or you need to use different software for critical things than you do for non-critical things.

My personal observation is that the cost of complex software is becoming such that hardening existing software is more realistic than making special-case software for critical uses.

Might be a good place to

This might be a good place to start to get a sort of rough estimate. Now consider:

  • Something on the order of 50-70% of projects fail, so at least half of that code isn't actually in use.
  • Does that $40 cost per line of code include the entire lifetime of that line?
  • 17% of software project failures are so devastating that they threaten the entire company; if the company fails, it also eliminates the use of nearly all of their other code.
  • Open source code may survive the death of a company, so the previous isn't a given.

Good questions. I don't

Good questions.

I don't think any of those amounts to a factor of 10 reduction in the volume of software that can be created.

- Software maintenance is sometimes estimated at about 1/2 the cost of the software.

So failure plus maintenance costs at worst reduce my estimate by a factor of 4 or 5.

More telling is the simple fact that there are a lot of dependencies in the creation of software, which wind up with problems like: you can create a baby with 1 mother in 9 months, but you can't create a baby in 1 month by using 9 mothers.

For purposes of estimation in the large we can likely assume pipelining.
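
To make that adjustment concrete, here is a hedged continuation of the back-of-envelope sketch above; the factor-of-4-to-5 reduction is the rough guess stated in this comment, not a measured figure:

    # Apply the rough "failure plus maintenance" reduction to the earlier estimate.
    raw_lines_per_year = 420e9        # the 100%-of-GDP figure from the parent comment
    failure_and_maintenance = 4.5     # "at worst ... a factor of 4 or 5" (rough guess)

    sustained = raw_lines_per_year / failure_and_maintenance
    print(f"~{sustained:.1e} lines/year surviving failure and maintenance costs")
    # ~9e10: still on the order of a hundred billion lines per year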

re good questions

"Software maintenance is sometime estimated at about 1/2 the cost of the software."

The "Target" brand of chain stores recently had a major breach in its part of the payments system.

What were its costs associated with maintaining the corresponding software? What additional costs did they externalize?

When civilization collapses from some software disaster, nobody will care much if you explain that it's because someone configured this or that incorrectly, failed to install the latest upgrade, or whatever. Collapse is collapse.

Poor Thinking.

If you do not think clearly and precisely about reality, it isn't worth thinking about.

I offered a swag, and if you are going to pick it apart on technicalities and raise a straw man I am sorry for you because I can't see how you will ever find the truth.

As for the failure of Target and its threat to western civilization: for most of us the event is a non-event. The people who deployed the means of moving money around are responsible for the failure, and for eating the costs. The impact on the rest of us was minimal.

So far the financial system in the United States seems to be designed properly in the large with the institutions who goof up being responsible for their own failures.

We will never have a mathematically perfect society. Crap will happen. As long as the people who take the risk also take the failure that seems reasonable.

The worst case I see in a situation like that is that we get into a post world war 2 style rebuilding of civilization, where we have people who know how to make things and get things done having to start over again.

But the reasons why people choose to deploy poor software and the fact that people do in fact deploy that software is a far cry from the interesting question of how can we build programming languages to reduce the risk of problems.

Horses

In the late 1800s, we were suffering a major problem: our streets were filling up with horse poop and it didn't seem like our cities could grow any larger. Horses were just quite limited, and that limited us. People were asking questions: "how many horses can our cities support?" and the answers were not encouraging.

But then the problem went away. We were asking the wrong questions; it wasn't about "better horses", we just stopped using horses altogether.

I feel like this question is basically that, let me rephrase it:

Is there any horse theory that considers absolute upper bounds on how many horses a civilization can actively maintain (as a function of population size)?

Are such upper bounds low enough that the limit should inform horse design?

The original question becomes moot if we move away from PL and code. Whatever the upper bound is, we will never hit it, like how "peak oil" never pans out, there will just be something else to move onto way before we hit that end.

re: peak oil

until, well, the day of the peak.

We peaked in 1970 something.

We peaked in 1970 something. From wiki:

Hubbert's original prediction that US Peak Oil would be in about 1970 was accurate, as US average annual production peaked in 1970 at 9.6 million barrels per day;[3] however, after a decades-long decline, US production rebounded, hitting 9.2 million barrels per day in early 2015.

The only problem with oil is that it is still cheap. But anyways, we will never run out of it since we have so many other choices available (we will likely find a better energy resource before oil is uneconomical). But my point is that given that X has limitations does not at all mean we are limited by X.

with the possible exceptions of

h2o

and

o2

well, and c and n and h.

Even the speed of light is

Even the speed of light is iffy, we just need a wormhole to shortcut through space.

Now that's an interesting

Now that's an interesting point! Still, it seems hard to imagine what could replace a PL of some sort, since the means of manipulating information almost seems irreducible, whereas horses for transportation aren't.

Refinements of PL, like a move to more declarative or visual mechanisms for manipulating information could help, but what radical ideas exist to replace it wholesale?

Code as Material Metaphor

The focus on reading code, editing code are similar to our old approaches to computing hardware, where humans would directly observe and manipulate and repair switches. At some point, however, hardware got too small for this sort of manipulation and maintenance, and we switched to a throw-away culture: when a chip is broken, you just throw it away and replace it, and properties of hardware are generally observed indirectly. Without embracing this new relationship between hardware and humanity, many neat things like smart phones and hololens would never have been possible.

Code, of course, doesn't wear down or break in the same ways, and is easily manipulated in an expanded form. So there hasn't been as much pressure to escape the 'reading, editing' modes of interaction. But this doesn't mean we aren't limited by them. If we must read software to understand it, this inherently limits the scale of software we can comprehend. If we must edit software directly to repair it, this limits the scale of maintenance we can perform.

I have been pursuing a 'code as material' metaphor for a while now, where code is a substance that can flow, be composed, be cut, be manipulated and transported like other physical materials. I believe this offers a more scalable basis for interaction between humans and software, and will ultimately allow a lot more fine-grained software artifacts. The trick, of course, is to find materials with nice properties.

It is obviously ML, and I

It is obviously ML, and I don't mean the language (really there isn't even much debate about it), but that could be 20+ years out (hopefully, I like my job).

There should be a gap where ML and PL coexist somehow; i.e. trained models interfacing with hand written code to gain control as needed. But no one knows how to do that elegantly yet.

Luddism.

Luddism.

correction

i, for one, prefer to be termed a neo-luddite, not just a luddite. :-)

Post-nerdy neo-luddite. My

Post-nerdy neo-luddite.

My relationship to super-intelligent robots is basically the same as the one I have to interstellar travel: I found them more exciting when I was a kid. Not sure why this excitement declined, but I guess it is a lack of tangible details. In the case of the robots this has led to discourses driven by proclamations, which cause vehement emotions that, for the same reasons, I lack about this stuff.

BTW autonomous cars are often associated with horror in the popular imagination. Think about "Christine" from S. King. In "The Cars That Ate Paris" from P. Weir, the cars got furious when the mayor burnt one. They still had human drivers, but those served the cars' passions. If there is a real chance of an evil spirit in the autonomous machines I might get interested again and help to unleash it.

Human Complexity of Code

We should take two aggregates into consideration.

  • M - executable machine code
  • C - human written code

Amount(M)/Amount(C) is growing with the increasing level of the language. Likely, you are asking about C. If we ignore purely economic factors, the human complexity of the code depends on the technology, and it puts severe limitations on what a human could make informed decisions about.

For a FORTRAN 77 program, the cost of understanding the code is O(A(C) * A(C)), because you do not know the context of a line of code. But for a C or Pascal program (that follows structured programming), the cost is lower, O(A(C) * ln(A(C))), because the code and data are structured in hierarchical constructs. It could be argued that the human cost of understanding good OOP and FP (with use of DSLs) code is O(A(C) * ln(ln(A(C)))) due to generic functionality. Hopefully the next generation of languages will reduce it further to O(A(C) * ln(ln(ln(A(C))))), but there are obviously diminishing returns, as it would not increase the size of programs significantly, but it would significantly increase the upfront education cost of learning new information organization principles.
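
Purely as an illustration of the claimed scalings (the formulas are this comment's conjecture, and the one-million-line code size below is an arbitrary example, not data), here is how the conjectured understanding costs compare:

    import math

    # Conjectured comprehension cost for a code base of A(C) lines, per the formulas above.
    A = 1e6   # lines of human-written code (arbitrary example)

    costs = {
        "unstructured (FORTRAN 77 style)": A * A,
        "structured (C/Pascal style)":     A * math.log(A),
        "OOP/FP with DSLs":                A * math.log(math.log(A)),
        "hypothetical next generation":    A * math.log(math.log(math.log(A))),
    }

    for style, cost in costs.items():
        print(f"{style:35s} ~{cost:.3g}")
    # The gap from n^2 to n*ln(n) is enormous; each further ln shaves off much less,
    # which is the diminishing-returns point made above.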

Starting with OOP and FP, the biggest cognitive saving comes not from organizing information better, but from "abstracting from" it (or "reducing the amount of required knowledge to make an informed decision"). Hence microservices and other application decomposition principles. I guess we will soon see widespread generic applications that take other applications as arguments (the ESB approach comes to mind here as a step in this direction that has somehow been adopted in the field).

re: human complexity

I like the spirit but I don't quite buy claims that the source-to-instructions ratio is seriously growing, or that there is a real complexity distinction to be made between Algol-class languages and FORTRAN. The latter is an interesting claim but I don't see any argument for it.

Cost of understanding ~~ degree of independence of typechecking?

There's probably a sense in which it's the subset of code needed to typecheck a given line of code.

I like this idea. In the module system I'm working on, with independently typecheckable module-clusters but not (usually) independently typecheckable modules, something like this could be measured as the average module-cluster size per module.

Maybe in terms of a Scheme-like language, it could have something to do with dead code elimination.

Hmm, there's a missing term here: The complexity of the core implementation, on top of which the measured code is built. Every measurement would have to add this unknown constant.
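
A small, admittedly speculative sketch of how such a measure might be computed, assuming we approximate "the subset of code needed to typecheck a module" by the module itself plus its transitive dependencies. The module graph and sizes here are entirely hypothetical:

    # Hypothetical: estimate per-module "understanding cost" as the total size of
    # the code needed to typecheck it (the module plus its transitive dependencies).
    deps = {                 # module -> modules it depends on (made-up graph)
        "app":  ["ui", "core"],
        "ui":   ["core"],
        "core": [],
    }
    size = {"app": 800, "ui": 1200, "core": 3000}   # lines per module (made up)

    def closure(m, seen=None):
        seen = set() if seen is None else seen
        if m not in seen:
            seen.add(m)
            for d in deps[m]:
                closure(d, seen)
        return seen

    cost = {m: sum(size[d] for d in closure(m)) for m in deps}
    print(cost)                                   # per-module cost
    print(sum(cost.values()) / len(cost))         # average, roughly the measure suggested above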

It is nice to know that the

It is nice to know that the computer understands the problem. But I would like to understand it, too. — Eugene Wigner

I suggest that the purpose of typechecking is to enable an automated correctness proof, which is a different and eventually conflicting goal from making the program humanly understandable. So I'm dubious that the difficulty of type-checking reliably correlates with the difficulty of human understanding.

If the abstraction leaks then the difficulty of human understanding ought to depend on A(M) as well as A(C).

Typechecking and Comprehension

Among all methods for correctness proofs (abstract interpretation, exhaustive checking, etc.) typechecking has the important distinction of being compositional.

Compositional properties are useful for human comprehension: we understand a larger system by understanding smaller pieces; we know which changes can be made locally without impacting compositional properties of the larger system. Compositional properties correspond more or less to the locality and continuity features of physical systems, and are a good foundation for human intuition.
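
As a toy illustration of the point (mine, with made-up interfaces): a compositional check consults only the interfaces of the pieces a definition uses, never their implementations, so a change that preserves an interface cannot invalidate checks done elsewhere.

    # Toy sketch of compositional checking: 'run' is checked against the
    # declared interfaces of 'parse' and 'eval' alone, so swapping in a new
    # implementation of either cannot break this local check.
    interfaces = {                      # name -> (argument types, result type)
        "parse": (("str",), "Ast"),
        "eval":  (("Ast",), "int"),
    }

    uses = {                            # how each definition calls its imports
        "run": [("parse", ("str",), "Ast"), ("eval", ("Ast",), "int")],
    }

    def check(defn: str) -> bool:
        return all(interfaces[callee] == (args, result)
                   for callee, args, result in uses[defn])

    print(check("run"))                 # True, independent of implementations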

Not that we necessarily typecheck every compositional property. For example, capabilities can let us compositionally reason about connectivity (and hence security) even without types (as demonstrated in E and Newspeak). OTOH, most compositional properties could be modeled in a sufficiently expressive type system.

Of course, the five-to-seven rule for human memory also is relevant. If a function has a hundred distinct inputs, it is unlikely any human will ever grok it, regardless of whether it composes. We also need to include a factor for the size and complexity of type descriptors.

Type systems, afaik, impose

Type systems, afaik, impose reasoning based on context-free syntactic structure. Human languages are not usefully context-free and it seems reasonable to suppose that the structure of human languages is correlated with what humans find understandable. I suggest that once you get past very rudimentary stuff, human comprehension is hampered by purely context-free reasoning.

Types aren't context-free.

Types aren't context-free. We can name types, or infer types in a context. Inference corresponds to linguistic anaphora, and naming types corresponds to our larger vocabularies and ontologies of nouns or verbs. Of course, humans frequently limit the scope for anaphora, otherwise we start making too many mistakes (e.g. ambiguous antecedent), and the same is probably true for type inference.

Usefully, substructural types can provide temporal context (very useful for describing multi-step protocols). Types can also address spatial contexts, which may prove useful to the extent we develop programs for mobile agents or distributed orchestration.

Only a small fraction of types have the properties you're describing. In particular, purely structural types. And human reasoning does benefit from having these types at the bottom. They correspond nicely to physical systems or fungible market commodities - e.g. how can we use a 2x4 board? what kind of pipe do we need here? what is glue good for?

I observe that, even in languages that some call 'dynamically typed' and a skeptic might call 'unityped', programmers document and model domains in terms of sophisticated types. Even if a language doesn't typecheck all these things, I think the types programmers maintain in their heads and docs will more or less correspond to their comprehension of the system.
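
To illustrate the "temporal context" point above with a sketch (my own; the connect/authenticate names are invented, and distinct phase classes only approximate what substructural types give directly): a multi-step protocol can be encoded so that a checker rejects out-of-order steps.

    # Sketch: each protocol phase is a distinct class, so calling send()
    # before authenticate() is rejected by a checker such as mypy (the
    # Connected phase simply has no 'send').  All names are illustrative.
    class Authenticated:
        def send(self, msg: str) -> "Authenticated":
            print("sent:", msg)
            return self
        def close(self) -> None:
            print("closed")

    class Connected:
        def authenticate(self, token: str) -> Authenticated:
            print("authenticated")
            return Authenticated()

    def connect(host: str) -> Connected:
        print("connected to", host)
        return Connected()

    session = connect("example.org").authenticate("secret")
    session.send("hello")
    session.close()
    # connect("example.org").send("hello")   # out of order: rejected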

syntactic bottleneck

Clearly I didn't articulate my thought well enough. I did say types "impose reasoning based on context-free syntactic structure"; I did *not* say types are context-free (I'm not even sure what that would mean). Likely I'll find a different way to explain just as badly the second time around, but I'll try anyway. I submit that types force facts and inferences to decorate the syntax tree, creating a sort of reasoning bottleneck, and that this unnatural reasoning structure interferes with human understanding. Type inference is an interesting example; I'm inclined to view type inference as a liability, and it seems to me the syntax-shaped reasoning structure is part of the problem (not all, but part).
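
For concreteness, here is a minimal sketch (mine, not part of the original comment) of the picture being objected to: every fact the checker derives is attached to a node of the context-free syntax tree, and information flows only along tree edges.

    # Minimal sketch of type facts decorating a syntax tree: 'facts' maps
    # each node to the type inferred for it, and inference at a node uses
    # only the facts already attached to its children.
    from dataclasses import dataclass

    @dataclass
    class Lit:
        value: int

    @dataclass
    class Add:
        left: object
        right: object

    def infer(node, facts):
        if isinstance(node, Lit):
            facts[id(node)] = "int"
        elif isinstance(node, Add):
            infer(node.left, facts)
            infer(node.right, facts)
            assert facts[id(node.left)] == facts[id(node.right)] == "int"
            facts[id(node)] = "int"
        return facts

    tree = Add(Lit(1), Add(Lit(2), Lit(3)))
    print(len(infer(tree, {})))   # 5 nodes, 5 attached facts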

prescriptive vs descriptive types

You seem to be assuming that the types are prescriptive rather than descriptive - i.e. that the types are creating unnatural bottlenecks, rather than describing the natural ones.

In any case, 'reasoning bottlenecks' don't necessarily interfere with human understanding any more than would the absence thereof. Without delimiters for local reasoning, there is often too much to think about, which causes its own comprehension difficulties.

When working at scales of "how many lines of code can civilization support", we aren't necessarily focusing on types at granularity smaller than the module.

thought-provoking

You've got some most interesting points for consideration here.

You seem to be assuming that the types are prescriptive rather than descriptive - i.e. that the types are creating unnatural bottlenecks, rather than describing the natural ones.

Although it's possible I'm misunderstanding what you intend here, it sounds like something I'd agree with (other than, perhaps, a (minor?) quibble with the word "assume" — I'm saying I believe it to be so, which doesn't imply the incaution I perceive from the word "assume" (as in, "when you ASSUME, you make an ASS out of U and ME")).

I am, indeed, saying that the pattern of reasoning flow imposed by types is not uniformly the way people most naturally think.

In any case, 'reasoning bottlenecks' don't necessarily interfere with human understanding any more than would the absence thereof.

I'm inclined to disagree. Seems to me when programming in a system with "advanced" typing, the typing gets in the way; and from my own experience of just how difficult it is to write lucid code even without such interference, it's a serious long-term problem to have anything extraneous entangled with lucidity.

Without delimiters for local reasoning, there is often too much to think about, which causes its own comprehension difficulties.

I tentatively agree that there ought to be a way to aid the programmer in keeping track of things. To what extent that implies some form of local delimiting, and what form, I'd treat as open questions. It ought to work smoothly with lucidity, though, in a sense that it seems to me conventional type systems do not.

When working at scales of "how many lines of code can civilization support", we aren't necessarily focusing on types at granularity smaller than the module.

As I recall, we got into type systems here over the thought-provoking suggestion that human comprehension would correspond with type-checking code. That correspondence doesn't seem evident to me, though it's admittedly not quite the same as the relationship between typing and lucidity. And before that, we had the thought-provoking suggestion that human comprehension would correspond, broadly, with the amount of human-readable code according to some simple asymptotic relations. It seems to me that if type-checking adds a constant factor to each module, (1) that constant factor matters to the overall per-civilization question even though it's filtered out by the asymptotic classes, and (2) the difficulty of understanding the relationships between modules may increase with the difficulty of understanding each of them individually, so that that constant factor on each individual module could produce an asymptotic effect on the collection of modules. Of course, if you think type-checking improves the lucidity of each individual module, it works the other way as well.

programming without reasoning

the pattern of reasoning flow imposed by types is not uniformly the way people most naturally think

You say you object to types on the basis that they interfere with our 'natural' ad-hoc mix of drunken logic, heuristic lucidity, wishful thinking, and gut-check intuitions. On the basis of your arguments, you should be objecting to reason itself. Reasoning, at least of the non-fallacious sort, is not "the way people most naturally think".

Rather than deride this, let's take it seriously. Without reasoning, we can still use experiment and feedback and simulations and intuition and habit and best practices to guide software development. Programming without reasoning can be exploratory, iterative, effective, and efficient.

My estimates:

Compositional properties prove useful even if we aren't explicitly reasoning about them. They will sink into our intuitions - e.g. in a capability secure system, experienced programmers will 'just know' a lot about connectivity of the system, even if they don't know why they know. This might be limited to simple properties.

Types help insofar as they provide documentation and fail-fast feedback when we get something obviously wrong. However, if types become too subtle, such that we don't comprehend them concretely, they might interfere with our flow. A type-erasure property might be sufficient to guard against excessive insinuation of a type system - i.e. a guarantee that removing type annotations doesn't change observable behaviors for a correct program.
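
As a concrete, if trivial, sketch of that erasure property (my illustration, using Python annotations, which a checker such as mypy reads but the runtime ignores):

    # Sketch of type erasure: annotations inform a checker but are not
    # consulted at run time, so stripping them leaves the observable
    # behavior of a correct program unchanged.
    def area(width: float, height: float) -> float:
        return width * height

    def area_erased(width, height):       # same definition, annotations removed
        return width * height

    assert area(3.0, 4.0) == area_erased(3.0, 4.0) == 12.0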

I was interested in

I was interested in rational discussion. I took your ideas seriously. Since you evidently start by assuming I'm an idiot, rational discussion isn't an option.

I've not made such an

I've not made such an assumption.

I do believe that you have essentially been arguing against 'reason' (a mode of thinking and understanding), even if that's not how you want your position to be described. Types are useful for valid reasoning. Neither types nor valid reasoning are very natural to humans. The list of 'natural' human biases and fallacies is very long.

A position against reason doesn't make you an idiot. Really, there are plenty of smart people who would prefer to eschew reason in favor of more artisanal or conversational or 'agile' approaches. And there are some good arguments for it, such as: humans suck at reasoning more than a few shallow steps.

I've not made such an

I've not made such an assumption.

I'm sad. I wanted to believe you when you said this. I do hope you believed it when you wrote it. Oh well.

I do believe that you have essentially been arguing against 'reason' (a mode of thinking and understanding), even if that's not how you want your position to be described.

You're mistaken.

My best guess (such as it is) is that you're effectively equating reasoning with types. I don't buy into that equation.

One thing I find the Curry-Howard correspondence useful for is understanding the weaknesses of type theory. Curry-Howard analogizes a step of a proof, on the logic side of the correspondence, with a context-free syntax rule on the calculus side. I wondered for some time over the apparent discrepancy that computations halt while proofs don't; but computations don't correspond to anything equally interesting on the logic side. The evident fact that a proof can continue without bound corresponds to the also-evident fact that any λ-calculus term can be embedded in larger λ-calculus terms. Yet for a proof to be lucid we want to understand clearly how its pieces fit together, corresponding to the context-free syntactic structure of a program, while for a program to be lucid we want to understand clearly what it does, which lies in an entirely different direction from anything of interest in the proof. (I've heard the suggestion that computation should be compared to proof transformation, and while technically valid I don't find it at all compelling; proof transformation is comparatively esoteric, while computation is the whole point of a program.)

This is the origin (well, okay, part of the origin) of my suggestions that (1) types reason along the context-free syntactic structure of programs and (2) types ultimately interfere with lucidity of programs. My objection to types isn't that they reason, but that their reasoning, and their erstwhile lucidity, are both ultimately focused in a different direction than what a programmer should primarily care about. A PL designer wants their type system to be lucid, and they also want their programs to be lucid, and these two kinds of lucidity pull in different directions, so that trying to do both at once is likely to result in doing neither.

I'm fond of reasoning. I'm not fond of types.

I want programs to be lucid. I don't think you can do that without reasoning about them; indeed, it seems to me lucidity must ultimately be a form of reasoning. I don't know the form of reasoning needed, but if it's working at cross-purposes with program lucidity, it's clearly not the right form. And if it isn't working with program lucidity, it's going to end up at cross-purposes.

All things considered, I doubt we even mean quite the same thing when we say "reason".

I've heard the suggestion

I've heard the suggestion that computation should be compared to proof transformation, and while technically valid I don't find it at all compelling; proof transformation is comparatively esoterica, while computation is the whole point of a program.

For reference: Propositions = types, proofs = programs, proof simplification = program evaluation.
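
A one-line instance of the correspondence, as a sketch in Lean (assuming Lean 4 syntax): the proposition is read as a type, and the term below, an ordinary pair-swapping program, is its proof; evaluating such a term is proof simplification.

    -- Propositions as types: A ∧ B → B ∧ A is a type, and this program
    -- (swap the components of a pair) is a proof of it.
    example {A B : Prop} : A ∧ B → B ∧ A :=
      fun h => ⟨h.2, h.1⟩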

Verily and forsooth

Yup. For all you folks following along at home. :-D