Sunday, February 15, 2015

Why Running Simulations May Mean the End is Near

By Phil Torres

People have for some time speculated about the possibility that we’re living inside a computer simulation. But the 2003 publication of Nick Bostrom’s “Are You Living in a Computer Simulation?” brought a new level of sophistication to the topic. Bostrom’s argument is that at least one of the following disjuncts is true: (i) our species will go extinct before reaching an advanced posthuman stage; (ii) our species will reach a posthuman stage but decide, for whatever reason, not to run a large number of simulations; or (iii) we are almost certainly living in a simulation.

Defeaters of this argument include the possibility that present trends in technological development are non-projectable into the future, and that the philosophical theory of “functionalism” is false. In the absence of these defeaters, though, the argument appears sound.

The claim that at least one of these three possibilities holds is known as the simulation argument. The simulation hypothesis, on the other hand, is the claim that the third disjunct is true. Another way to put this disjunct is as follows: if we run large numbers of simulations in the future, we should conclude that we ourselves are almost certainly simulants in a simulation – mere strings of 1s and 0s being manipulated by a massively powerful algorithm on a supercomputer somewhere in the universe one level above ours. Running simulations of universes thus counts as evidence that we are living in one.

The reasoning is no doubt familiar to most readers. We can put it like this: imagine we’re running lots of simulations (of an “ancestral” variety) right now. If minds are functional rather than material kinds (as functionalism holds), then the beings inside these simulations are no less conscious than we are. Computers too are functional kinds, which means that further simulations may be running on simulated computers within these simulated worlds. So the ratio of “real” to simulated minds will end up hugely skewed towards the latter.
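
To make the skew concrete, here is a minimal Python sketch of this counting argument. Every number in it (simulations per world, minds per world, nesting depth) is a made-up assumption for illustration, not an estimate:

```python
# Toy model: one unsimulated world runs simulations, each of which
# runs simulations of its own. All numbers are illustrative assumptions.

REAL_WORLDS = 1
SIMS_PER_WORLD = 1000      # assumed simulations launched per world
MINDS_PER_WORLD = 10**10   # assumed conscious minds per world
DEPTH = 3                  # levels of nesting we bother to count

# Worlds at depth d: SIMS_PER_WORLD**d, so simulated worlds dominate fast.
simulated_worlds = sum(SIMS_PER_WORLD**d for d in range(1, DEPTH + 1))
simulated_minds = simulated_worlds * MINDS_PER_WORLD
real_minds = REAL_WORLDS * MINDS_PER_WORLD

frac = simulated_minds / (simulated_minds + real_minds)
print(f"Fraction of all minds that are simulated: {frac:.9f}")
```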

Now imagine that you randomly select any individual from any world, real or simulated. Upon picking a person out, you ask: “Is he or she a simulant?” In virtually every case, the individual selected will be a simulant. Repeating this over and over again, you eventually happen to select yourself. You ask the same question, but how should you answer? According to a “bland” version of the indifference principle, you should answer the same way you answered in every other case: “The person selected – in this case me – is a simulant (or almost certainly so, statistically speaking).”
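
The same point can be made with a toy Monte Carlo version of this selection experiment. The population sizes below are, again, invented purely for illustration:

```python
import random

# Assumed population sizes (illustrative, not estimates):
REAL = 10**10        # minds in the unsimulated world
SIMULATED = 10**16   # minds across all simulations combined

population = REAL + SIMULATED
trials = 1_000_000

# Draw a random mind; treat indices below SIMULATED as simulants.
hits = sum(random.randrange(population) < SIMULATED for _ in range(trials))
print(f"P(randomly chosen mind is a simulant) ~ {hits / trials:.6f}")
```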

An interesting thing follows from this, which is only briefly explored in Bostrom’s original paper. (Others have discussed it for sure, but some of the implications appear not to be fully examined.) Imagine you keep selecting individuals, and eventually pick someone in the universe one level above ours. Is this person a simulant? Again, the most probable answer is “Yes.”

The same applies to the simulators of his or her world, and to the simulators of their world, and so on. In other words, the central line of reasoning behind Bostrom’s simulation hypothesis entails that if we run large numbers of simulations in the future, there almost certainly exists a vast hierarchy of nested simulations – universes stacked like matryoshka dolls, one enclosed within another.

Bostrom notes that the cost of running a simulation is inherited upwards in the hierarchy, a point that counts against this “multilevel hypothesis.” But the fact is that if simulations are common in the future, it will be much more likely that any given simulator is a simulant than not.

Not only this, but if each simulation spawns a few simulations of its own, there will be far more simulations at the bottom of the hierarchy than at the top (where one finds Ultimate Reality). If you had to place a bet, you’d be more likely to lose by putting your money on our world sitting somewhere near the top rather than near the bottom, with loads of simulations stacked above us.
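
A quick back-of-the-envelope calculation shows how lopsided the hierarchy gets. The branching factor and depth below are arbitrary assumptions:

```python
# If every world spawns BRANCH simulations, the number of worlds
# at each level grows exponentially. BRANCH and LEVELS are arbitrary.
BRANCH = 3
LEVELS = 6

for level in range(LEVELS + 1):
    print(f"level {level}: {BRANCH**level} worlds")

total = sum(BRANCH**k for k in range(LEVELS + 1))
bottom = BRANCH**LEVELS
print(f"Share of all worlds sitting at the bottom level: {bottom / total:.2%}")
```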

If correct, this has significant implications for existential risks. Risks of an eschatological nature trickle downwards through the hierarchy in a cumulative manner. Many futurists have speculated about what we can do to keep our simulation from getting shut down: maybe we should fight more wars over religion to keep our simulators interested in us, or refrain from discussing the simulation hypothesis too much, lest it affect our behavior (as the Hawthorne effect predicts it will).

But what’s to keep our simulators’ simulation from being terminated? Or the simulation of their simulators? Etc. The termination of even a single simulation above ours means the termination of us: a kind of death by transitivity. And the more simulations above us, the greater the risk of living below.
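
One way to see how the danger compounds: suppose, purely for illustration, that each level above us has some small independent chance of being shut down in a given century. Our survival then requires every one of those levels to persist:

```python
# Assumed hazard rate: each level above us is independently shut down
# with probability p per century. Both p and the level counts are made up.
p = 0.01

for levels_above in (1, 5, 10, 50):
    survival = (1 - p) ** levels_above  # every level must persist
    print(f"{levels_above:>2} levels above us -> P(no shutdown this century) = {survival:.3f}")
```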

(Note: it might not even take a simulation above us getting shut down to terminate our cosmos. Maybe the civilization in a simulation five levels above ours plunges into an existential war. The building in which the computer is housed gets bombed, thus shutting down all the simulations within simulations being run on it. Or maybe our species runs large numbers of simulations in the future but then kicks the bucket in a large-scale nanotech accident. The simulations being run then get the boot.)

In sum, the simulation hypothesis doesn’t just suggest that we’re in a simulation; it suggests that there exists a vast stack of nested simulations. Both conclusions follow from the same line of reasoning. Furthermore, since the bottom of the hierarchy will tend to contain more simulations than the top (if, for example, each simulation runs a few simulations of its own, the number will grow exponentially as you move down the hierarchy), it’s more likely that we’re somewhere near the bottom than the top.

This is worrisome. Being at the bottom is extremely risky, since risk is inherited downwards. More simulations above us means more opportunity for an existential catastrophe. It follows that if we run simulations in the future, our existential predicament may be far more precarious than we’d otherwise think. Option (iii) implies that the outcome of (i) – extinction – may be just around the corner.

See Nick Bostrom's Simulation Argument here: https://www.simulation-argument.com/simulation.html
