Intelligence is not magic

I just finished reading Nick Bostrom’s Superintelligence. I found the book to be impressive, thought-provoking, and total nonsense.

If you haven’t read it, the crux of it is this: Imagine an agent with unlimited superpowers (specifically and especially the power for unlimited self-improvement) determined to achieve some goal. Boy, wouldn’t that be bad?

It would. An agent with unlimited god-like superpowers would be hard to stop, by definition. We could have ended the book right there, and saved each other 400 pages of trouble.

The book reminds me of the Less Wrong crowd: take a seductive but fundamentally flawed premise — in the LW case: that humans have a stable, knowable utility function; or here in the AI case: that superintelligence implies omniscience and omnipotence — and then build up an impressive edifice of logic and analysis, adorned with great quantities of invented pseudo-technical jargon, and proceed to evangelize from the top.

The problem is that superintelligence implies neither omniscience nor omnipotence. Let me try to convince you.

Intelligence is not fitness
First, it might be helpful to define intelligence (something Bostrom never really does!). Importantly, it’s easy to conflate intelligence with fitness for a particular environment. Intelligence is a component of fitness, but an arbitrarily useful or useless one, depending on the circumstances. For example, if you put me in a cage with a drooling, starved lion, no amount of intelligence will save me — there is the fundamental constraint that I cannot physically alter the “hungry -> hunt prey -> eat prey” part of the lion’s brain with my available resources. I am not fit for this particular environment, even if I might be intelligent. This is why, despite our superior intelligence, ants and bacteria are wildly more successful species (by almost any possible metric: total biomass, impact on the planet, longevity, probability of existence X years in the future, comparative ability to annihilate the other respective species, total energy consumption, even total data exchanged!). Ants and bacteria are fit for earth.

Formally, the fitness of an agent is its inherent ability to achieve a broad range of goals and sub-goals in a given environment (typically related to a super-goal of survival, reproduction, or “winning” some game). We can think of an agent as composed of three capabilities: 1) input sensors, 2) a decision-making system, and 3) available actions. This agent exists in an environment and is created with an initial state.



Intelligence, then, is the quality of the decision-making component: for a given environment, starting state, and set of input data, how close to optimal is the agent at choosing a sequence of available actions that maximizes the probability of achieving its goal? So besides intelligence, an agent’s fitness is determined by its input sensors, its available actions (both of which are tied to its physical form), and its starting state in the environment, like starting in a cage with a hungry lion. The available actions are a function of the state of the environment and the state of the agent, but may be fundamentally, physically constrained.
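
To make this decomposition concrete, here is a minimal sketch of such an agent in Python. The structure and names are my own illustration, not anything from the book; the point is just that a better `decide` function only helps within whatever `sense` and `actions` allow.

```python
# A minimal, illustrative agent model (all names are hypothetical):
# fitness depends on the sensors, the decision-making system, the available
# actions, and the starting state together, not on the decision-maker alone.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Agent:
    sense: Callable[[Any], Any]    # 1) input sensors: environment state -> observation
    decide: Callable[[Any], str]   # 2) decision-making system: observation -> chosen action
    actions: set[str]              # 3) available actions, bounded by the agent's physical form

def run_episode(agent: Agent,
                state: Any,
                step: Callable[[Any, str], Any],      # environment dynamics
                goal_reached: Callable[[Any], bool],
                max_steps: int = 100) -> bool:
    """Return True if the agent achieves its goal within max_steps."""
    for _ in range(max_steps):
        if goal_reached(state):
            return True
        action = agent.decide(agent.sense(state))
        if action not in agent.actions:   # a perfect decision-maker still can't take an action it doesn't have
            continue
        state = step(state, action)
    return goal_reached(state)
```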

I’m not able to beat the lion because there is no plausible action sequence that allows me to gain the strength or jaw power to overcome the lion, or to alter the lion’s deeply-ingrained hunting instinct. I am fundamentally limited by my available set of actions (my physical form), and my starting state, so no amount of intelligence can help me.

Similarly, if I play a game of poker against the world’s best poker player, someone far better than me at choosing betting actions (i.e. more intelligent in this domain), I would do quite well if I had a hidden video feed of my opponent’s cards. The expert player would have no chance, despite greater intelligence in the given environment, because I would have access to additional input data that they did not. Intelligence, again, would be of no help.

Now, in the real world, an intelligent agent can, of course, take a sequence of actions that augments its physical capabilities so that it acquires additional input data and/or additional candidate actions. But exactly which capabilities it can acquire, given its starting state and physical form, is an open question. It’s not a given that it can eventually acquire all the capabilities it needs. The agent may be fundamentally constrained in a way it can’t possibly overcome. Intelligence is not omnipotence.

Take, for example, a common trope of runaway AI scenarios where the AI manipulates an unwitting human into doing its bidding. Critical to the scenario is supposing that there actually exists some sequence of words and actions that produces the desired action in the human. But it’s possible, maybe even likely, that no such sequence exists. Just like you can’t get your puppy to take a shit where you want him to, despite your supposedly far superior intelligence, you can’t necessarily make a human unlock the network firewall just because you’ve outsmarted them. Theoretically, it is quite possible that there does not exist any sequence of visual or audio signals sent to a human brain that produces a particular desired action — no matter how perfect your model of that brain and its responses to stimuli, or how many scenarios you can play out on that brain. If my dog wants to shit on my sofa, and all I have to stop him is a microphone to project my voice into the room, there might not be any sequence of audio signals I can produce that will stop him. It’s quite likely, actually. My physical instantiation in this scenario prevents me from taking the required actions (physically preventing the dog from getting on the sofa; offering him Kibbles treats to please, pretty please, shit somewhere else; etc.). It’s hopeless for me. Go ahead, Ruffo, ruin the couch. Good boy.

Here is where Bostrom would pull out his magic superintelligence wand and have the AI discover a new theory of sound waves that allows it to use the microphone to map and alter the neurons in the dog’s brain. Good luck with that!

You can’t always outsmart everyone
Ever played rock-paper-scissors against a 4-year-old? Well, I just played my niece the other day, and she’s bad at it. Like, can’t even make a scissors with her hand bad. Really though, rock-paper-scissors is pretty simple. By the end my niece mostly got it; her main problems were a faulty random number generator and insufficient finger dexterity. No amount of superintelligence is going to help you do better: to beat a random opponent you would have to predict their random number generator, which is impossible. But how much of the real world is like a rock-paper-scissors game? How often does the optimal strategy depend on responding in a relatively straightforward, simple way to what are essentially random events? Well, perhaps a lot.
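
A quick toy simulation (my own example, obviously) makes the point: against an opponent who throws uniformly at random, every strategy, from always playing rock to cycling through the moves, converges to the same one-in-three win rate.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def win_rate(strategy, rounds=100_000):
    """Fraction of rounds won by `strategy` (a function of the round index) vs. a random opponent."""
    wins = sum(BEATS[strategy(i)] == random.choice(MOVES) for i in range(rounds))
    return wins / rounds

print(win_rate(lambda i: "rock"))                # ~0.333
print(win_rate(lambda i: MOVES[i % 3]))          # ~0.333
print(win_rate(lambda i: random.choice(MOVES)))  # ~0.333: no strategy does better than chance
```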

The world is enormously complex and inherently chaotic, in a way that is likely permanently intractable. Predicting weather patterns, or even a given human’s response to a specific stimulus, depends on an enormous number of variables and on complex, interdependent interactions that there may be no good model for, and which are highly sensitive to small changes. The bottom line, theoretically, is that the computational power of the universe will always so vastly outstrip any of our attempts to model it that predicting the future, and the world’s reactions to our actions, will often be hopeless, no matter how much intelligence you have. Intelligence is not omniscience.
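
The classic toy illustration of this kind of sensitivity is the logistic map (my example, not the book’s): two starting states that differ by one part in a billion become completely uncorrelated within a few dozen steps, so prediction accuracy is bounded by measurement precision, not by how clever the predictor is.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n) in its chaotic regime (r = 4):
# an initial difference of 1e-9 swamps the prediction within roughly 40 steps.
r = 4.0
a, b = 0.400000000, 0.400000001

for n in range(1, 61):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if n % 10 == 0:
        print(f"step {n:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.1e}")
```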

Take, for example, the Trump candidacy. Could a superintelligence have predicted his eventual election to the highest office in the world? An event of significant importance if you are trying to plan for the future. Could it have beaten the prediction market’s estimate of 80% favoring Hillary in May? Probably, yes. It’s likely the world was fundamentally underestimating the chances of a Trump presidency, and a superintelligence (like, say, FiveThirtyEight) could have done better. But there is no chance that it could have made the prediction with any certainty. The best it could have said is something like 70% favoring Hillary instead of 80%, given the facts at the time. Trump’s ultimate election likely came down to some quite random, small events — Russians hacking servers, James Comey making a personal decision a few days before the election, Hillary just generally bungling it — things that would have been extremely hard (impossible?) to predict back in May. It is easy to imagine many scenarios where the ball fell the other way, and we ended up with Madam President Clinton.

Knowledge is not free
The world is chaotic, but we can still learn simplified models of it that are usefully accurate. Learning these models, though, is expensive, for a few reasons. Primarily, because you have to actually physically observe and interact with the world to understand it: most knowledge is not deducible from first principles. Knowledge — the accumulation and modeling of input data by the decision-making system — is what makes a superintelligence useful. It allows it to make accurate predictions about the future state of its environment and the consequences of its possible actions. But you can’t make these predictions without a lot of observation of and interaction with that environment. You can’t just use your super brain to start deducing how everything in the world works. Our world is a specific example of a possible world, but one of infinitely many such possible worlds (presumably). Science, math, physics — all of human knowledge — is the process of discovering which of those logically and mathematically consistent (we presume) worlds we actually live in. Even the simplest, most fundamental laws of the universe have only revealed themselves after careful and laborious inspection and experimentation. Only close observation of manipulations of our environment allows us to gather knowledge about it and how it works. A superintelligence can draw inferences faster and more accurately given the same data (this is basically the definition of superintelligence), but only within the limits of the given data. Einstein’s theory of relativity would have been wrong in 1850. Well, not so much wrong, but no more right than any of the other infinite possible explanations for the observed phenomena, and indeed not the simplest or most likely explanation. It was only the unexplainable observations of the Michelson-Morley experiment that required relativity.
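
To put that underdetermination in code (again, a toy of my own): two different “laws” can agree perfectly on every experiment run so far and still make wildly different predictions outside the observed range. No amount of reasoning over the existing data can tell them apart; only new observations can.

```python
import numpy as np

xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # the experiments performed so far
ys = np.sin(xs)                              # observations produced by the "true" law

model_a = np.sin                             # the true law
model_b = np.poly1d(np.polyfit(xs, ys, 4))   # a rival law: a degree-4 polynomial fit to the same points

print(np.max(np.abs(model_a(xs) - model_b(xs))))  # ~1e-15: indistinguishable on the data we have
print(model_a(6.0), model_b(6.0))                 # ~-0.28 vs. something very different at x = 6
```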

The point is that learning how the world works takes great time and resources, and you can’t just spin up a giant brain in a vacuum and get knowledge for free. Even ingesting all of the observational data humans have accumulated to date (books, the internet) will likely leave you with not much more than humans already know. To develop a better model of how humans socialize will require, well, socializing with a lot of humans. Maybe it could be inferred from, say, watching all of the available video content on YouTube, but it’s also likely that doing so will give you a very warped perception of the world.

Which brings us to the bottom line: an agent can’t know if its model of the world is correct without explicit testing, either through sufficiently accurate (and therefore expensive) simulation or through actual interaction with the world. An agent could build an extremely advanced and sophisticated model of the world based on watching YouTube videos, but it won’t know in what ways its model is weirdly broken until it tests it in the real world (like, whoa, not everyone is an incessant nattering ass 😀).

Further, humans can create, and have created, systems with immunity to intelligence and manipulation. This is why it’s still mostly a “who you know” world — we trust and like the people we spend time with and develop a close connection with. Importantly, we require the other person to make sacrifices of their time and resources before we trust them and do business with them. It is a mechanism that is impervious to intelligence and gaming. And it’s how the world still works, for the most part, for better or worse.

To be clear, none of this is to deny that machine intelligence will eventually surpass and render obsolete human intelligence, or to deny that the evolution of the technology we humans create will eventually displace us. But it will be a scenario that plays out over decades, centuries, or, most likely, millennia; not “hours or days”. It will require, just like the beginning of organic life, the fortuitous mix of a large number of elements in the right environment, and many, many false starts, before the machines supersede us. More likely is that by the time superintelligence happens, humans will have retreated to virtual worlds anyway. You’ve seen the movie.