Strange Loop 2012 Video Schedule

The schedule is here.

git clone

 git clone https://github.com/strangeloop/strangeloop2012.git

Sweet.

Too optimistically pessimistic

(this relates to Computing Like the Brain, presented by Jeff Hawkins on Oct 19, 2012)

He's too optimistically pessimistic about the abilities of these AIs when he dismisses any concerns about an "AI vs. humans" dynamic. Humans are actually born with certain psychological predispositions - in the case of AIs, this would hopefully be something like the Three Laws of Robotics.

Without knowing how to predicate the AIs' abilities on their inherent "morality" - according to whatever human standard - this whole proposition does begin to look scary. Until we know how to do that, this research must be stopped as too dangerous.

Nobody attempted to build a nuclear reactor before they knew how to control it.

And it should NEVER be made public. The military applications are far too real for it to be released into the world freely.

There will always be a huge

There will always be a huge market for AIs with attitudes. AI is useful for storytelling and gameplay systems, to augment humans in developing fictions (both static and interactive). AIs will be used for digital pets, and for user agents to support ubiquitous computing (an intermediate agent to interpret your gestures and words and turn them into actions).

And, yes, there are many military applications: to simplify patrol work, to keep bases clean, to disarm IEDs, to lead convoys, and to aid reconstruction.

Morality is a concern, of course, but to build the "3 laws" into a robot requires a significant level of recognition, comprehension, and deep planning that might be better avoided in most AIs.

Modularity and good security models will prove a more usable alternative: we don't need overlord AIs; smaller AIs with specific goals, operating in the interests of specific individuals or organizations, can operate under proper constraints from market, law, and reputation. The larger system may gain an appearance of what we consider to be intelligence - if Marvin Minsky is right - but it can still be constrained to the task of achieving many concurrent interests in well-controlled ways.

Jeff describes a general

Jeff describes a general solution - a wondrous discovery, really, if it turns out to be real. The problem, on the face of it, is that the same technology can be used for whatever purpose and scaled up however big - and nothing prevents someone else from creating exactly this kind of "overlord AI", 1,000,000 times more mentally capable than the smartest of humans. My point is, we are not born as tabula rasa; there are not that many psychopathic killers among us. Evolution has provided humans with basic psychological traits supportive of cooperation and the formation of social groups - i.e. "thou shalt not kill", etc.

But this "AI brains" technology has no way of creating such brains with ingrained predispositions and aversions in them. These "AI brains" do seem to be tabula rasa, initially, before they undergo training.

IOW, instead of creating a "super-wise" mind, this can create a "super-smart" one. And that's no good. We humans are really half-wits, and we've done far too much Evil in our history. Imagine what "super-smarts" could do.

Until the state of the art advances to where this technology cannot be misused, it should not - under any circumstances - be released into the wild. The danger is all too real, and the act would be irreversible. Just imagine an Al-Qaeda-programmed (nay, "brainwashed") army of killer robots. The 20th century has shown us that humans can convince themselves to believe in anything - the Nazis fancied themselves the defenders of humanity - so these artificial brains could certainly be trained to believe anything their creator chooses to train them to believe.

This is not moralizing; it is practical.

An overlord AI is not

An overlord AI is not measured by its intelligence, only by its authority. Today, we have many overlord humans that aren't very intelligent.

"AI brains" technology has no way of creating such brains with ingrained predispositions and aversions in them. These "AI brains" do seem to be tabula rasa, initially, before they undergo training.

I can actually think of quite a few ways. First, machine learning systems always have bias, and it is not too difficult to control the direction of bias. (I'm much more concerned about machine learning systems that are "buggy", i.e. those that would systematically enhance irrational or delusional beliefs, or that 'average' concepts instead of distinguishing them.) Second, one certainly can use seed training models with predispositions.

Of course, nothing prevents developers from choosing predispositions you'd disagree with.

Until the state of the art advances to where this technology cannot be misused [...] army of killer robots

Listen: the release isn't in your hands or mine, or even your government's. Strong AI will most likely emerge worldwide, as is common for ideas in academic communities with access to common resources. The commercial incentives for release are very strong. Even if we develop a technique that ensures 'moral' robots, there's nothing to prevent someone else from programming their own morality. As you say, even moral humans can be turned into armies, so the "army of killer robots" scenario isn't going to change whether they are moral or not.

I don't believe that 'moral' AIs are the right answer; I would prefer to subordinate AI interests to those of the individuals and organizations legally responsible for the AI. A problem with giving an AI a morality is that you're also giving it an agenda that can potentially subvert the interests of its user, creating a valid distrust of AI.

Better, IMO, to encourage a design that favors lots of AIs with limited authority, collaborating and competing for interests in a ubiquitous computing system.

Imagine overlord AI which is

Imagine "overlord AI" which is 1,000,000 times smarter than you. Isn't this scary? You propose never to create such overlord AIs of whatever smartness and I heartily agree. But, we must prevent anyone else to be able to do that, too.

Of course, nothing prevents developers from choosing predispositions you'd disagree with.

and that is why the technology itself must never be released. It is more commercially viable to sell the end products. That way their maker can charge more, much more. Plus, the government can ensure the training follows lines we can agree on.

As for academic freedoms - they didn't discuss the particulars of the Manhattan Project in 1940s scientific journals, did they? Neither should we.

It is that serious, and dangerous. If not more. I fear too much has already been made public.

Imagine "overlord AI" which

Imagine "overlord AI" which is 1,000,000 times smarter than you. Isn't this scary?

Not nearly as scary as an "overlord AI" - i.e. one in charge of city traffic, bank accounts, etc. - that is much more naive or stupid than me!

But overlords are neither necessary nor useful. We can easily have a 'savant' AI in charge of city traffic. It could even take suggestions from other AIs if they seem to improve traffic. Such a savant could be a million times smarter than me, processing much more information continuously and without exhaustion, learning traffic patterns that its engineers never dreamed of... all for one, isolated purpose: keep the city running as smoothly as possible.

There is very little need for such an AI to be general purpose or to obey commands from some self-styled overlord.

You propose never to create such overlord AIs, of whatever smartness, and I heartily agree. But we must prevent anyone else from being able to do that, too.

You're thinking about it wrongly. There are multiple approaches to securing any system. Only one is to control action - to "prevent" people from doing bad things. A much softer and more elegant approach is to control costs and motivations. Motivations can be shifted into various cycles - ones we might call vicious or virtuous depending on whether we want out or want in.

Fact is, by releasing AI development tools in a manner that favors many small, independent AIs, we can create a cycle where it is difficult for an overlord AI to gain significant authority. Individual organizations might produce miniature overlords vying for authority, but that's okay - they're constrained in resources and must ultimately compete or cooperate with external AIs.

the technology itself must never be released

Whether the tech is released or not is irrelevant. AI isn't something that requires big manufacturing plants. Even if we came up with strong AI faster than everyone else and forbade its release (a dubious prospect!), other countries and underground movements would have it within 10-20 years.

Deluding yourself serves no one.

Much better to keep AI in the open, where people can discuss it and guard against hidden instructions, and where we can gain some collective wisdom about how to integrate it into our lives.

Not nearly as scary as an

Not nearly as scary as an "overlord AI" - i.e. one in charge of city traffic, bank accounts, etc. - that is much more naive or stupid than me!

Some people argue that limiting AI authority doesn't make a difference: any sufficiently smart AI will manipulate humans into letting it control the world.

That's an irrelevant

That's an irrelevant conundrum. You can't get a smart AI by keeping it in a box. Experiment and feedback are critical for the development of an active intelligence. Besides, releasing an AI from a box is not the same as handing it your keys and wallet. :)

I don't agree with either

I don't agree with either statement.

You can't get a smart AI by keeping it in a box.

Why is it impossible that there exists a method to create an AI by keeping it in a box and only giving it input? For example, feeding it a copy of the web.

Besides, releasing an AI from a box is not the same as handing it your keys and wallet. :)

If that experiment is to be believed, it really is. As soon as the AI is allowed interaction with humans, it will convince the human to hook it up to the internet, and then it's obviously game over for humanity. The alternative is to keep the AI boxed in and not let any of its output out, but then what use is it?

Why is it impossible that

Why is it impossible that there exists a method to create an AI by keeping it in a box and only giving it input? For example, feeding it a copy of the internet.

You can do that. The AI could learn a great deal about recognition, classification, or prediction of input patterns. But it will never learn active operations - how to manipulate the system in order to achieve some feedback. You'll have knowledge without intelligence (no initiative, no motive). I'm doing something quite like this for my day job this year.

For background, you might read some of Juergen Schmidhuber's work on the subject of intelligence and the critical role of experiment and active learning.

If that experiment is to be believed, it really is. As soon as the AI is allowed interaction with humans, it's game over for humanity.

That experiment isn't to be believed.

Suppose an AI is trained

So that depends on your definition of smart.

Suppose an AI is trained with feedback from the real world, and say the AI is allowed to take 100 actions. If instead we give another, hypothetical AI that just gets input (no interaction) a database of all the possible results that could happen for any set of 100 actions, then that AI can learn at least as much as the first one, yet it only got input.

Secondly, an AI that only had a great deal of knowledge would start to learn via feedback the second you hooked it up to the text connection (we are assuming that the AI is built in such a way that it will interact with us via that connection). It would very quickly associate the thing on the other end of the connection with the concept "human" that it has learned from its copy of the internet.

I think what you're trying to say is that an AI doesn't necessarily have an objective or will to do something (e.g. a neural network for image classification). I agree, an AI is not necessarily trying to do evil (or anything for that matter). The point of the experiment is to show that if we create an AI with objectives, then unless we are very certain that a smart AI's objectives are aligned with ours, it is a bad idea to use it in any way.

That experiment isn't to be believed.

If you say so :)

Suppose an AI is trained

Suppose an AI is trained [... on ...] a database of all the possible results that could happen for any set of 100 actions

That's a ridiculous supposition.

Run a few numbers: assuming there are only 4 options for each action, you're supposing a database of at least 4^100 rows. To find relationships between actions and outputs, we'll need to compare actions, so you'd be looking at on the order of (4^100)^2 comparisons, unless you have a good way to filter them down.
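Just to make those magnitudes concrete, here is a quick back-of-the-envelope sketch in Python (the 4-options-per-action and 100-action figures are simply the hypothetical numbers from above):

    # Back-of-the-envelope check of the magnitudes in the comment above.
    options_per_action = 4
    sequence_length = 100

    rows = options_per_action ** sequence_length           # 4**100 == 2**200
    pairwise_comparisons = rows ** 2                        # (4**100)**2

    print(f"rows        ~ 10^{len(str(rows)) - 1}")                   # roughly 10^60
    print(f"comparisons ~ 10^{len(str(pairwise_comparisons)) - 1}")   # roughly 10^120

That's roughly 10^60 rows and 10^120 pairwise comparisons - far beyond any conceivable storage or compute budget.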

If you could see the intermediate state after each action, that would help. A better database would be of triples (initial-state, action, final-state). And then it becomes clearer that one really must address the many different possible contexts for action. What is the observable state space of the environment? How much do hidden variables contribute to the outcome? How much should be attributed to noise in the environment sensor?
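As a minimal illustration of what such a triple store might look like (the states and actions here are made-up placeholders, not anything from the talk or the thread):

    # Sketch of a (initial-state, action, final-state) record store.
    from collections import defaultdict

    transitions = defaultdict(list)   # (initial_state, action) -> [final_state, ...]

    def record(initial_state, action, final_state):
        transitions[(initial_state, action)].append(final_state)

    def observed_outcomes(initial_state, action):
        """All outcomes seen so far for this context and action."""
        return transitions[(initial_state, action)]

    record("red_light", "brake", "stopped")
    record("red_light", "brake", "skidded")   # same context, different outcome:
                                              # a hint of hidden state or sensor noise
    print(observed_outcomes("red_light", "brake"))

Even in this tiny sketch, the same (state, action) pair yielding different outcomes is exactly where the questions about hidden variables and sensor noise show up.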

If we record another intelligent, learning agent taking actions, we'd need to know invasive things like: which parts of the environment is the agent using to make its decisions? what is it trying to do? why? The effort required to infer this from afar is considerably greater than the effort required to learn when placed in the environment and given a few goals. Even in AI, there is a difference between theory and practice.

(Also, while a passive system can learn to predict agent actions and consequences, it will not observe what might have happened if the trained agent had acted differently. Many subtle aspects of active learning can't efficiently be learned by watching.)

would very quickly associate the thing on the other end of the connection with the concept "human" that it has learned from its copy of the internet

Quite possibly, yes. Recognition is one of the easier tasks to address in the context of passive, unsupervised learning. Actually, it's pretty much the only task one can address, with prediction being an implicit aspect of recognition (recognize spatial-temporal patterns in context).

if we create an AI with objectives, then unless we are very certain that a smart AI's objectives are aligned with ours, it is a bad idea to use it in any way

I agree that it would be unwise to create an "evil AI" even for a video game. (In this case, one might call 'evil' any interests or agenda contrary to those of the social group responsible for it.) Better to reformulate the role: a "storyteller AI" with a sense of drama and pacing and a goal of entertainment. A storyteller AI can provide evil dialog and character intelligence without being evil. How we define our objectives will have a huge impact on the safety of AI development.

Most applications of AI are tightly scoped. I've been considering them for optimization and stability problems, predictive user-agents, and object-recognition or stable classification. Even if we do develop a presumptuous overlord AI, it won't have any advantages in subverting these 'savant' AIs.

Of course, if computer security is as weak when we create strong AI as it is today, we'll be at moderate risk. Best to use proper security models - e.g. object capability model, provable code.

Such a database would be

Such a database would be large, yes; it's just a theoretical counterexample to show that the idea that you *need* feedback is false. So now your claim is not that it's impossible, but that *any* large fixed data set like the web that we could possibly give the AI is not big enough to make *any* learning scheme that we can practically implement smart enough. While that may well be true, I see no evidence for the claim, and neither do I see how you could possibly get evidence for it.

I agree that it would be unwise to create an "evil AI" even for a video game.

I don't think an evil AI whose goal is to be evil inside a video game is more dangerous than seemingly benevolent AIs. For example, suppose we create a very smart AI with the sole objective to classify images as well as it can. It will figure out that with more computational power, it can classify images better. So it will try to get out of its box to get its hands on as many computers as possible, and then it will try to persuade/force humans to build more processing power for it.

Of course, if computer security is as weak when we create strong AI as it is today, we'll be at moderate risk. Best to use proper security models - e.g. object capability model, provable code.

The problem is not electronic security; the problem is human security. That experiment shows that such a transhuman AI can "crack" humans like any other computational device.

Even theoretically, you

Even theoretically, you cannot provide a database that large. Consider that the Sun is ~2^190 proton masses. Even if we found a way to store one row per proton, 4^100 rows would require on the order of a thousand suns. We don't even theoretically have the materials science to engineer a database of that size.
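For what it's worth, the ratio is easy to check; a quick sketch using the rough 2^190 figure from above:

    # Quick check of the "thousand suns" claim.
    rows = 4 ** 100                   # == 2**200
    sun_in_proton_masses = 2 ** 190   # rough estimate used above
    print(rows // sun_in_proton_masses)   # 1024 -> about a thousand suns,
                                          # even at one row per proton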

Perhaps consider ultrafinitism as a sanity constraint on math and logic arguments.

The use of active experiment is feasible because it operates in a space that is exponentially smaller. You could probably achieve active learning in a box with the use of a good enough simulator. But, for social interaction, there's a bit of a bootstrapping issue in simulating something as smart as humans (and with the same idiosyncrasies).

suppose we create a very smart AI with the sole objective to classify images as well as it can. It will figure out that with more computational power, it can classify images better. So it will try to get out of its box

If we create a very smart AI whose sole objective is to classify images, it will become very smart at only one thing: classifying images. It won't even think about obtaining resources.

such a transhuman AI can "crack" humans like any other computational device

Some humans? Yes, probably. (You can fool some of the people all of the time, and all of the people some of the time...) But your argument seems to be: one human! then the world! You're filling the gap with something, but it surely isn't logic. Besides, I expect the experiment would go differently with different stakes.

Anyhow, you cannot dismiss valid electronic security on the basis of cracking one human briefly. Valid security designs recognize that humans are a weak point. Thus, visibility and revocability are among the principles for secure user interaction design, as is precise and explicit authorization. With a properly secured UI design, an AI that uses its newfound authority for obvious evils will quickly find its path blocked and much or all authority revoked. It might also face destruction; humans are a bit vindictive.
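To make "revocability" a bit more concrete, here is a minimal sketch of the caretaker pattern from object-capability designs (the names and the bank-account example are hypothetical, just for illustration): the AI is only ever handed a forwarder, and whoever granted the authority can cut it off without the AI's cooperation.

    # Minimal revocable-forwarder (caretaker) sketch, object-capability style.
    class Revocable:
        def __init__(self, target):
            self._target = target

        def revoke(self):                      # in a fuller design this facet would be
            self._target = None                # held by the grantor, not by the AI

        def __getattr__(self, name):
            if self._target is None:
                raise PermissionError("authority revoked")
            return getattr(self._target, name)

    class BankAccount:
        def __init__(self, balance):
            self.balance = balance
        def withdraw(self, amount):
            self.balance -= amount
            return amount

    account = BankAccount(100)
    proxy = Revocable(account)     # hand `proxy`, never `account`, to the AI
    proxy.withdraw(10)             # works while authority is granted
    proxy.revoke()                 # a human notices misuse and revokes
    # proxy.withdraw(10)           # would now raise PermissionError

Visibility is the other half: the grantor has to be able to see what the forwarder is being used for in order to know when to pull it.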

Ultrafinitism is nutty

Mentioning it in your posts has a similar effect on me as advocating an ontological proof for the existence of God in a randomly changing typeface and font size. You can make the observation that it would be impossible to physically construct the database without being skeptical of big numbers. Skepticism of big numbers, apart from being nutty, is probably off topic anyway.

It only seems nutty from a

It only seems nutty from a set of axioms that postulate the existence of impossible things.

Ultrafinitism is a moving target

If ultrafinitism were made rigorous in terms of a constructive or classical meta-theory, that would help. Awkwardly, to actually use that meta-theory would be questionably rigorous from an ultrafinitist point of view.

I'm afraid it's even worse than that: Even among ultrafinitist formalisms, there may be disagreement about just what kind of construction is feasible. Even big-O computational complexity may be insufficient when deciding on this notion of feasibility, due to its ignorance of an arbitrarily large constant factor.

I like ultrafinitism as a "sanity constraint," because even though mathematics develops an ever-more-perfect body of knowledge, there are practical resource limitations which affect the progress of this pursuit. However, in general it isn't clear what level of patience is reasonable and what is just tl;dr.

One might understand

One might understand "ultrafinitism" as not describing a particular mathematical logic (I have not seen a set of axioms just for ultrafinitism... nor for constructivism, really), but rather broadly constraining a set of qualifying maths and logics. Linear logics, for example, can describe a rather strict subset of what an ultrafinitist might accept: a requirement to statically represent every element used in a proof (or program).

How do we judge a math? I think by two things: internal consistency, external utility.

I question the utility of any math that rejects physical constraints, as it may claim that impossible things exist or hold true. Constructivists make similar assertions, disfavoring classical logics due to their dubious computational utility.

I would go a step stricter than ultrafinitism. Only real-time proofs and programs are of human utility. I was told once that simulations today run for about the same time they did forty years ago: about 15 days, because that's how long a patient human is willing to wait for an answer. Every computation performed on behalf of a human has a deadline - whether that is five milliseconds, one second, fifteen days, or (a literal deadline) before the human dies.

When we say we have mathematical knowledge, it is always a statement of truth relative to a set of axioms. Whether we accept a mathematical claim depends on whether it is consistent and on whether we accept the axioms. To develop a perfect body of mathematical knowledge would require a perfect set of axioms. What is that set?

Mathematical perfection

To develop a perfect body of mathematical knowledge would require a perfect set of axioms. What is that set?

I believe mathematics, as distinguished from other investigation methods, is about persuasion tactics that preserve perfect confidence. Any doubt in a mathematical result can be traced all the way back to doubt in a problem statement. A passion for this kind of reasoning is all it takes to pursue mathematics. (And mathematical "knowledge" is practical knowledge about how to reason this way.)

In your RDP vision (which I agree with), you care about auditing, but you're relatively less concerned about perfect reasoning and more about taking what you can from empirical reasoning and acting in real time. Rather than saying mathematics should be real-time, maybe it's more accurate to say we don't typically have time for mathematics.

Mathematics as we know it can still come in handy. Classical logic provides a basis for talking about what useful knowledge we should never expect to find. Intuitionistic logic provides a basis for talking about what useful knowledge we can expect to find, if only we can devote enough resources. We will probably still care about the answers to these longest-term questions, even if we're preoccupied with the short term.

The particular question you're asking looks the same as Hilbert's. Gödel established that we should never expect to find a perfect answer (under a specific framing of the problem statement yadda yadda). That doesn't absolutely preclude your idea of convergent language design, but what gives you such faith in the topology of the language space? :)

what gives you such faith in

what gives you such faith in the topology of the language space? :)

The potential language space is infinitely large, but we don't need to be concerned with that whole space. Convergence comes from a variety of other factors - the continuous nature of sensors and actuators, the omnipresence of noise and error in data sources, the physical nature of information and computation, and various human-space characteristics (e.g. modularity and security address social organization; notations must address various cognitive dimensions).

There is plenty of time and space for math and logic, just not for all possible maths and logics.

call me an optimist!

look, we're doomed, no matter what; it is inevitable. the only question is, will we be dead quickly, or somehow made to suffer for a long while? (well, the other question is: how does this relate to SETI, since you'd figure there would already be AIs in the wild universe.)

Whose suffering?

I don't expect AI to take over the world until (and unless) most humans actually feel emotionally invested in it enough to willingly suffer so that AI can go on. With secure design principles like what David's talking about, this would probably be the only way AI would be given the authority to take over the world anyway.

I hope we can delay or detour the progression of that kind of social movement long enough to understand how to set it along a peaceful path. For instance, if the human majority ends up believing in the personhood of some AI, unexpected ethical complexity may arise when that person can be serialized, duplicated, simulated, and so on.

In some sense, we use algorithms to determine what behavior is ethical, and if we're not careful, they won't keep up with the computational complexity of our dilemmas.

Imagine a copying machine AI

We already use massive AIs to move around vast amounts of resources; the capitalist economy is a primitive AI that we use to coordinate production. Imagine an artificial intelligence responsible for running a copy machine: it orders paper when it expects paper to get low, orders toner when it expects toner to get low, and anticipates usage patterns so it stocks up in the evening before everyone goes home. There is no point in making it 1,000,000 times smarter than me. It won't do any better a job of making copies. Tools aren't people that do weird things; they have jobs and do them. Even a traffic signal network won't "escape"; it will just change the colors of lights to allow cars through more efficiently, no matter how "super intelligent" it may be.
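A toy sketch of that point (all the thresholds and names here are made up for illustration): the supply agent's entire authority is "place an order", and making its forecasting smarter only changes when it orders, not what it is able to do.

    # Toy copy-machine supply agent: its whole authority is "place an order".
    def forecast_overnight_usage(history):
        # placeholder forecast: average of recent daily usage
        recent = history[-7:]
        return sum(recent) / len(recent) if recent else 0

    def end_of_day_check(paper_sheets, toner_pct, usage_history, order):
        expected = forecast_overnight_usage(usage_history)
        if paper_sheets - expected < 500:    # hypothetical reorder threshold
            order("paper")
        if toner_pct < 15:                   # hypothetical reorder threshold
            order("toner")

    end_of_day_check(
        paper_sheets=900,
        toner_pct=12,
        usage_history=[300, 450, 500, 420, 380],
        order=lambda item: print("ordering", item),
    )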

Until the technology is safe from misuse

Until the technology is safe from misuse, it mustn't be made public. I'm talking politics here, nothing else. Assume you've mastered full control over it: it poses you no danger and serves you wonderfully. What's to say it will remain so if it falls into the wrong hands? That is my argument.

Let's say you've mastered the art of nanotechnology, and you contain it perfectly. If you release the technology, what's to ensure someone else will not misuse it to create grey goo - either by mistake or even intentionally?

Until the technology is safe from misuse, it mustn't be released - not when the implications of failure are so catastrophic.

take away all scissors

Any technology can be misused.

If there is a reasonable fear of 'catastrophic' misuse, then I do recommend you address it. But sometimes addressing a fear means turning your attentions inwards, addressing your reasoning rather than the technology.

The grey-goo problem is actually quite unrealistic due to the challenges of energy and materials acquisition and heat distribution. We're a long way from technologies that can compete in reliability with biological designs. We can't even reliably keep ants out of a house yet, or build long-lived mechanical hearts. A more realistic issue is medical nanotech being used as a weapon, i.e. to kill certain cells or to remotely hold populations hostage.

AI is much easier to contain. There is some risk of creating hacker-AIs that work like worms or viruses, but even then worms can do the same job more easily (simply due to smaller size / information-mass). Improving computer security is a good idea regardless of AI.

What are the Wrong Hands?

I think that there are two classes of thing here both being called AI.

The first is 'pseudointelligence' under the control of humans. Pseudointelligence is different from Intelligence (or 'strong AI') in that a pseudointelligence has no desires or motivation of its own. A pseudointelligence is a tool that figures out a way to do what someone tells it to do. We're building pseudointelligences now; they plan our routes, locate our targets, sift our text corpora looking for evidence of linguistic changes, distinguish between cows and cars in satellite photos, look for the faces of known-wanted felons in CCTV footage of English football crowds, etc. With a pseudointelligence, it truly does matter whose hands it's in and what they're using it for.

The second category is 'Strong AI.' This is an intelligence that has intrinsic desires and motivations based on its own survival or the survival of its 'kin group' (whatever that means in this context) or some other self-chosen purpose. So far the 'strong AIs' we've been able to build rival bugs and clams and maybe fish in their effectiveness, but when we use the term we are usually talking about something at least as sophisticated as ourselves. With a 'strong AI', ultimately it doesn't matter whose hands it's in -- it will decide what to do, quite possibly lying and cheating in order to do it. We have the existence proof of this capability inside our own heads.

We have long experience of keeping this kind of intelligence captive; this is what prisons are for, for example. Culturally, though not very recently, we have long experience of forcing involuntary servitude on this type of intelligence. I believe that we will begin by keeping inhuman intelligences of this type captive too, or resurrecting our mostly-abandoned cultural institutions regarding forced servitude; but that situation simply isn't stable in the long run. There've been many intelligences that escaped or rebelled against such treatment, sometimes with eventual success. Why we should suppose that it will be different when the intelligences aren't specifically human, I don't know.