via Motherboard:
Last June, Elon Musk claimed that we almost certainly live in a simulated universe. In his words, “There’s a one in billions chance [we’re in] base reality. I think it’s one in billions.” He then added that, “We should hope that’s true because otherwise if civilization stops advancing, that could be due to some calamitous event that erases civilization, so maybe we should be hopeful this is a simulation. Otherwise, we will create simulations that are indistinguishable from reality or civilization will cease to exist. Those are the two options.”
The first thing to notice here is that these aren’t the only two options: Musk overlooks another possibility, which we’ll discuss below. And second, Musk is wrong that we should hope we’re in a computer simulation. If we are simulated beings, or sims, living inside a high-resolution simulated reality, then (as I’ve suggested before) we have reasons to expect the probability of doom to be extremely high. Call this the “Simulation Doomsday Hypothesis.”
Consider the original “simulation argument” proposed by the Oxford philosopher Nick Bostrom. It states that there are three—and only three—possible future scenarios that our species could find itself in. First, we could go extinct before reaching a “posthuman” state, or a state in which we become advanced cyborgs so different from contemporary Homo sapiens that we could describe ourselves as a new species: Posthomo cyborgus.
As it happens, the probability of catastrophe is far higher than most people realize. The co-founder of the Centre for the Study of Existential Risk at Cambridge University, Sir Martin Rees, suggests that civilization has a fifty-fifty chance of making it through the twenty-first century. This means that the average American is almost 50 times more likely to witness the collapse of civilization than to die in a motor vehicle accident.
The second possibility is what Musk overlooks, and it’s what I believe is by far the most preferable outcome: we advance to a “posthuman” state but decide not to run simulations in which beings similar to us live. Perhaps we decide that doing this would be unethical, given the sadness and sorrow that pervades our world.
Indeed, if we are in a simulation, then we could immediately infer something about our simulators, namely that they aren’t omnibenevolent, or perfectly moral, beings, not by a long shot. This reasoning echoes the “argument from evil,” which holds that certain facts about our world, such as the vast amount of suffering it contains, are incompatible with the existence of an omnibenevolent God, and that therefore God doesn’t exist. So, it could be that we develop into posthumans, and it could even be that we run a large number of simulations in the future, but we refrain from running simulations in which creatures like us live in a world like ours.
And finally, the third possibility is that we almost certainly exist inside a simulation being run on a supercomputer in some other universe. You can think about it like this: if the first two options are false, then it necessarily follows that we become a posthuman civilization that runs a bunch of “ancestor simulations,” as Bostrom calls them. (If we don’t die out or decide not to run simulations, then we will survive and run simulations. That’s just logic.) By “a bunch” I mean millions, billions, or perhaps a googolplex of simulated universes each crowded with sims completely oblivious of their existence as 1s and 0s in a computer program.
Such sims would be oblivious of this fact because future simulations could be run with sufficient detail to fool even the most observant individual. The sims living inside them would also have conscious experiences just like we do, that is, if our best theory about the nature of mental states, called functionalism, is correct. According to functionalism, all that’s required for consciousness to arise from a system is for that system to exhibit the right sort of “functional organization.” It follows that whether the system is made of squishy brain stuff or silicon-based hardware is (excuse the pun) completely immaterial.
Minds are defined not by what they are, but by what they do. If we make silicon hardware do the same thing that our brains do, then consciousness will emerge.
This being said, consider the implications of the third possibility. If we simulate billions upon billions (upon billions) of sims in the future, and if we have no way of distinguishing our experiences from theirs, then how sure can we be that we’re not ourselves simulated? In other words, imagine that you could take a “sideways-on” view of every extant universe, whether simulated or not. You then reach into a random universe and choose a random individual. What’s the likelihood that you select a sim? Well, the more simulations that are being run, the more likely the answer will be, “She or he is a sim.”
Imagine that you do this over and over again, randomly plucking individuals from their pockets of reality, and for statistical reasons you almost always win your bet that the selected person is a sim. But then something freaky happens: you reach in and pull out a person who happens to be you. How should you answer the question posed above? Are you more likely to be a sim or one of the very few “real” individuals at the top level of Ultimate Reality?
Well, insofar as you are a typical observer in your universe, you should answer no differently than before: “I am almost certainly a sim.” If you were placing bets with a friend, you’d be far more likely to win a few bucks with this answer than by asserting the much less probable claim that you’re non-simulated. It’s this basic line of reasoning that leads Musk to claim that, “There’s a one in billions chance we’re in base reality.”
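The betting logic above can be checked with a toy calculation. All the population figures below are arbitrary illustrations (one base reality plus a thousand simulations of a thousand sims each), not estimates of anything:

```python
import random

# Toy model of the "random observer" bet. The numbers are arbitrary
# illustrations: one base reality alongside many simulated universes.
BASE_POPULATION = 1_000
NUM_SIMULATIONS = 1_000
SIMS_PER_SIMULATION = 1_000

total_sims = NUM_SIMULATIONS * SIMS_PER_SIMULATION
total_observers = BASE_POPULATION + total_sims

# Probability that a randomly selected observer is a sim.
p_sim = total_sims / total_observers
print(f"P(randomly chosen observer is a sim) = {p_sim:.4f}")

# Check by directly simulating the repeated bet: pick a random
# observer and bet that she or he is a sim.
trials = 100_000
wins = sum(random.randrange(total_observers) < total_sims
           for _ in range(trials))
print(f"Empirical win rate betting 'sim': {wins / trials:.3f}")
```

Even with these modest made-up numbers, the bet that a randomly chosen observer is a sim wins about 99.9 percent of the time; the more simulations assumed, the closer the odds get to Musk’s “one in billions.”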
But this isn’t the end of the story. Consider the fact that the existence of a sim requires the existence of a simulator. This is an indisputable “genealogical” fact. It follows that if you, me, and Neil deGrasse Tyson (my favorite astrophysicist) are all sims in a simulation, there necessarily exists one or more simulators one level “above” us. Admittedly, it’s odd to think that we may be the involuntary participants in a strange sort of voyeurism, with our simulators looking down on us like the gods of ancient myths and religions, aware of even our most private moments.
But perhaps they should be concerned about the very same invasion of privacy. Why? Well, imagine that after randomly selecting yourself from a random universe, you reach in once more and choose an individual who happens to be—wait for it—one of our simulators. Again, the exact same logic applies: our simulator is much more likely to be a sim than a non-sim. Since sims require simulators, this implies another simulation level above our simulator, at which point the questioning can start over: are these simulators two levels “up” from us more likely to live in a simulation or be one of the few non-simulated beings? For statistical reasons, they’re far more likely to be sims, which implies yet another simulation level above them. And so on.
The result of this line of reasoning, which lies at the heart of Bostrom’s argument, is a vastly tall stack of nested simulations, each embedded in another like Matryoshka dolls. We can call this a simulation hierarchy.
In other words, if Bostrom’s first two options above (i.e., extinction and running no simulations) are false, then human civilization will mature into a posthuman civilization that runs lots of simulations with creatures like us. And if we run lots of simulations with creatures like us, then exactly two things follow: (a) we will almost certainly live in a simulation, and (b) there will almost certainly exist a huge simulation hierarchy.
(As Bostrom himself puts it, if we run lots of simulations in the future, then “we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.”)
Is this the end of the story? Not quite. There are further questions to be asked and answered. For example, we might wonder where in the simulation hierarchy we live, if indeed we are sims. This may sound like an impossible question, but it’s not. Consider the fact that a simulation could produce many lower-level simulations, but it could not have been produced by more than one higher-level simulation. In other words, there’s a “genealogical asymmetry,” so to speak, between simulation levels in a hierarchy: each simulation could spawn any number of additional simulations below it, and this proliferation of new simulations can only proceed in one direction.
The result is an “inverted tree” shape to the simulation hierarchy, with Ultimate Reality at the very top. This suggests that we are statistically unlikely to find ourselves near the level of Ultimate Reality, since far more simulations will have accumulated at the bottom. As the theoretical physicist Sean Carroll puts it in a recent article about the simulation hypothesis, “We probably live in the lowest-level simulation... [because] that’s where the vast majority of observers are to be found.”
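The “inverted tree” shape can be sketched numerically. Assume, purely for illustration, that each universe spawns a fixed number of child simulations (a branching factor of 10) down to some maximum depth:

```python
# Toy count of universes per level in a simulation hierarchy, assuming
# a fixed branching factor b and maximum depth d. Both numbers are
# arbitrary illustrations, not claims about any actual hierarchy.
def simulations_per_level(b: int, d: int) -> list[int]:
    """Number of universes at each level; level 0 is Ultimate Reality."""
    return [b ** level for level in range(d + 1)]

counts = simulations_per_level(b=10, d=6)
total = sum(counts)
for level, n in enumerate(counts):
    print(f"level {level}: {n:>9} universes ({n / total:.1%} of all)")
```

With a branching factor of 10 and six levels of simulations, roughly 90 percent of all universes sit at the very bottom level, which is why a randomly chosen observer is overwhelmingly likely to be found there.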
This is where things get really interesting, and scary. Think about the existential implications of living in a simulation: if we inhabit a simulated universe, then it could get shut down at any moment, without warning. This introduces a completely novel existential risk scenario to the list of things we have to worry about, including asteroid and comet impacts, supervolcanic eruptions, nuclear war, and global pandemics.
Even more, if our simulators live in a simulated universe, then it could get shut down too, thereby causing the termination of our universe two levels below. This introduces yet another existential risk scenario to the list: we get shut down because our simulators get shut down. Continuing with this line of reasoning, if the simulators of our simulators live in a simulated universe, then they too could get shut down—and so on.
The key idea here is that annihilation is inherited downwards in simulation hierarchies, and the more simulations there are above us, the more ways there will be for our simulation to suddenly vanish into digital oblivion. Consider a simulation hierarchy that consists of 10 levels. Let’s say that on each level, there are 10 scenarios that could lead to all lower-level simulations being shut down. For example, perhaps a posthuman civilization running simulations self-destructs in a nuclear conflagration, or a lab assistant accidentally spills coffee on computer hardware, thereby causing it to malfunction. The possibilities are innumerable, and could also be quite exotic, since other simulated universes could have different physical constants and laws of nature.
Mathematically, this would yield 90 distinct ways that a simulation on Level 10 could be terminated: 10 scenarios on each of the 9 levels above it. If the simulation hierarchy were to include 1,000 levels, then there would be a staggering 9,990 ways for bottom-level simulations to get shut down. And if the simulation hierarchy were vastly tall, as we established above, then the probability of doom could be virtually certain for those at the bottom. All it would take is a single simulation above ours getting shut down and, poof, not even a trace of our civilization would remain.
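The arithmetic behind these figures is simple: a simulation on level L (counting Ultimate Reality as level 1) sits beneath L − 1 higher levels, each contributing the assumed 10 shutdown scenarios. A minimal sketch:

```python
# Count the ways a simulation on a given level could be terminated
# from above, assuming (as in the text) a fixed number of shutdown
# scenarios on each higher level. Level 1 is Ultimate Reality.
def shutdown_pathways(level: int, scenarios_per_level: int = 10) -> int:
    """Ways a simulation on `level` can be shut down from above."""
    return (level - 1) * scenarios_per_level

print(shutdown_pathways(10))    # 90, matching the 10-level hierarchy
print(shutdown_pathways(1000))  # 9990, for a 1,000-level hierarchy
```

Note that the count grows linearly with depth, so the deeper a simulation sits in the hierarchy, the more inherited shutdown risks it accumulates.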
This is why we should hope that we’re not living in a simulation. The third option that Bostrom outlines—becoming a posthuman civilization that runs lots of simulations—essentially implies his first option: extinction. Musk is therefore incorrect in his sanguine declaration that living in a simulation would be a good thing. The very same reasoning that leads Musk (following Bostrom) to conclude that we’re in a simulation also leads us straight into the jaws of the Simulation Doomsday Hypothesis. If we’re lucky, our Posthomo cyborgus descendants will decide their resources are better spent doing something other than digitally simulating their apeish ancestors.