
Thursday, March 19, 2015

Zuse's Thesis: The Universe is a Computer

Konrad Zuse (1910-1995; pronounced “Conrud Tsoosay”) not only built the first programmable computers (1935-1941) and devised the first higher-level programming language (1945), but was also the first to suggest, in 1967, that the entire universe is being computed on a computer, possibly a cellular automaton (CA). He referred to this as “Rechnender Raum” (Computing Space or Computing Cosmos). Many years later similar ideas were published, popularized, and extended by Edward Fredkin (1980s), Jürgen Schmidhuber (1990s), and more recently Stephen Wolfram (2002). Zuse’s first paper on digital physics and CA-based universes was:

Konrad Zuse, Rechnender Raum, Elektronische Datenverarbeitung, vol. 8, pages 336-344, 1967. Download PDF scan.

Zuse is careful: on page 337 he writes that at the moment we do not have full digital models of physics, but that does not prevent him from asking right there: what would be the consequences of a total discretization of all natural laws? Lacking a complete automata-theoretic description of the universe, he continues by studying several simplified models. He discusses neighbouring cells that update their values based on surrounding cells, implementing the spread, creation, and annihilation of elementary particles. On page 341 he writes, “In all these cases we are dealing with automata types known by the name ‘cellular automata’ in the literature,” and cites von Neumann’s 1966 book Theory of Self-Reproducing Automata. On page 342 he briefly discusses the compatibility of relativity theory and CAs.
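To make the neighbour-update picture concrete, here is a minimal one-dimensional cellular automaton sketch in Python. The update rule (elementary rule 110) and the single-cell starting state are our illustrative choices, not Zuse's model, which his paper describes only informally; the point is simply that each cell's next value depends only on itself and its two neighbours.

# Minimal 1-D cellular automaton sketch: each cell's next state depends
# only on its immediate neighbourhood, as in Zuse's "Rechnender Raum" idea.
# The specific rule (elementary rule 110) is illustrative, not Zuse's.

def step(cells, rule=110):
    """Apply one synchronous update to a ring of binary cells."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighbourhood code 0..7
        nxt.append((rule >> index) & 1)              # look up the rule bit
    return nxt

# Evolve a single "particle" (one live cell) for a few time steps.
cells = [0] * 31
cells[15] = 1
for t in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)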

Contrary to a widespread misunderstanding, quantum physics, quantum computation, Heisenberg’s uncertainty principle and Bell’s inequality do not provide any physical evidence against Zuse’s thesis of a CA-computed universe! Gerard ’t Hooft (Physics Nobel 1999) in principle agrees with determinism à la Zuse: proof by authority :-)


Thursday, March 5, 2015

Philip K. Dick on living in a computer-programmed reality, 1977

From OpenCulture



In 1963, Philip K. Dick won the coveted Hugo Award for his novel The Man in the High Castle, beating out such sci-fi luminaries as Marion Zimmer Bradley and Arthur C. Clarke. Of the novel, The Guardian writes, “Nothing in the book is as it seems. Most characters are not what they say they are, most objects are fake.” The plot—an alternate history in which the Axis Powers have won World War II—turns on a popular but contraband novel called The Grasshopper Lies Heavy. Written by the titular character, the book describes the world of an Allied victory, and—in the vein of his worlds-within-worlds thematic—Dick’s novel suggests that this book-within-a-book may in fact describe the “real” world of the novel, or one glimpsed through the novel’s reality as at least highly possible.
The Man in the High Castle may be Dick’s most straightforwardly compelling illustration of the experience of alternate realities, but it is only one among very many. In an interview Dick gave while at the high-profile Metz science fiction conference in France in 1977, he said that like David Hume’s description of the “intuitive type of person,” he lived “in terms of possibilities rather than in terms of actualities.” Dick also tells a parable of an ancient, complicated, and temperamental automated record player called the “Capard,” which reverted to varying states of destructive chaos. “This Capard,” Dick says, “epitomized an inscrutable ultra-sophisticated universe which was in the habit of doing unexpected things.”

In the interview, Dick roams over so many of his personal theories about what these “unexpected things” signify that it’s difficult to keep track. However, at that same conference, he delivered a talk titled “If You Find This World Bad, You Should See Some of the Others” (in edited form above), which settles on one particular theory—that the universe is a highly advanced computer simulation. (The talk has circulated on the internet as “Did Philip K. Dick disclose the real Matrix in 1977?”).

The subject of this speech is a topic which has been discovered recently, and which may not exist at all. I may be talking about something that does not exist. Therefore I’m free to say everything and nothing. I in my stories and novels sometimes write about counterfeit worlds. Semi-real worlds as well as deranged private worlds, inhabited often by just one person…. At no time did I have a theoretical or conscious explanation for my preoccupation with these pluriform pseudo-worlds, but now I think I understand. What I was sensing was the manifold of partially actualized realities lying tangent to what evidently is the most actualized one—the one that the majority of us, by consensus gentium, agree on.

Dick goes on to describe the visionary, mystical experiences he had in 1974 after dental surgery, which he chronicled in his extensive journal entries (published in abridged form as The Exegesis of Philip K. Dick) and in works like VALIS and The Divine Invasion. As a result of his visions, Dick came to believe that “some of my fictional works were in a literal sense true,” citing in particular The Man in the High Castle and Flow My Tears, The Policeman Said, a 1974 novel about the U.S. as a police state—both novels written, he says, “based on fragmentary, residual memories of such a horrid slave state world.” He claims to remember not past lives but a “different, very different, present life.”
Finally, Dick makes his Matrix point, and makes it very clearly: “we are living in a computer-programmed reality, and the only clue we have to it is when some variable is changed, and some alteration in our reality occurs.” These alterations feel just like déjà vu, says Dick, a sensation that proves that “a variable has been changed” (by whom—note the passive voice—he does not say) and “an alternative world branched off.”

Dick, who had the capacity for a very oblique kind of humor, assures his audience several times that he is deadly serious. (The looks on many of their faces betray incredulity at the very least.) And yet, maybe Dick’s crazy hypothesis has been validated after all, and not simply by the success of the PKD-esque The Matrix and the ubiquity of Matrix analogies. For several years now, theoretical physicists and philosophers have entertained the theory that we do in fact live in a computer-generated simulation and, what’s more, that “we may even be able to detect it.”

Sunday, February 22, 2015

Are you living in a simulation? - Silas Beane (SETI Talks)



"Is the Cosmos a Vast Computer Simulation?" New Theory May Offer Clues

Professor Silas Beane, a theoretical physicist at the University of Bonn in Germany, said that his group of scientists has developed a way to test the 'simulation hypothesis'--the idea, debated by the greats of philosophy from Plato to Descartes, that we may be living in a computer-generated universe. Descartes speculated that the world we see around us could be generated by an 'evil demon', while Plato wrote that reality may be no more than shadows in a cave, which the human species, having never left the cave, may not be aware of.

If the cosmos is a numerical simulation, there ought to be clues in the spectrum of high-energy cosmic rays. Now, more than two thousand years after Plato suggested that our senses provide only a weak reflection of objective reality, experts believe they have solved the riddle using mathematical models known as the lattice QCD approach in an attempt to recreate - on a theoretical level - a simulated reality. Lattice QCD is a complex approach that looks at how particles known as quarks and gluons relate in three dimensions.

"We consider ourselves on some level universe simulators because we calculate the interactions of particles by basically replacing space and time by a grid and putting it in a box," said Beane. "In doing that we face lots of problems for instance the box and the grid size breaks Einstein's special theory of relativity so we know how to fix this in order to get physical predictions that are meaningful."

"We thought that if we make the assumption that the so-called simulators face some of the same problems that we do in terms of finite resources and so on then, if they are doing a simulation and even though their box size of course is enormous and the grid size can be very small, as long as the resources are finite then the box size will be finite, the grid size will be finite," Beane added. "And therefore at some level for instance there would be violations of Einstein's special theory of relativity."

According to MIT's Technology Review, "using the world's most powerful supercomputers, physicists have only managed to simulate tiny corners of the cosmos just a few femtometers across (a femtometer is 10^-15 metres). That may not sound like much, but the significant point is that the simulation is essentially indistinguishable from the real thing (at least as far as we understand it)."

Read more: http://www.dailygalaxy.com/my_weblog/2012/10/is-the-cosmos-a-vast-computer-simulation-new-theory-may-offer-clues.html

Tuesday, February 17, 2015

Constraints on the Universe as a Numerical Simulation

From the paper written by Silas R. Beane, Zohreh Davoudi, Martin J. Savage...



Observable consequences of the hypothesis that the observed universe is a numerical simulation performed on a cubic space-time lattice or grid are explored. The simulation scenario is first motivated by extrapolating current trends in computational resource requirements for lattice QCD into the future. Using the historical development of lattice gauge theory technology as a guide, we assume that our universe is an early numerical simulation with unimproved Wilson fermion discretization and investigate potentially-observable consequences. Among the observables that are considered are the muon g-2 and the current differences between determinations of alpha, but the most stringent bound on the inverse lattice spacing of the universe, b^(-1) >~ 10^(11) GeV, is derived from the high-energy cut off of the cosmic ray spectrum. The numerical simulation scenario could reveal itself in the distributions of the highest energy cosmic rays exhibiting a degree of rotational symmetry breaking that reflects the structure of the underlying lattice...
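For scale, the quoted bound on the inverse lattice spacing can be converted into a length using hbar·c ≈ 0.197 GeV·fm. This back-of-the-envelope conversion is ours, not the paper's:

# Back-of-the-envelope conversion (our illustration, not from the paper):
# the bound b^(-1) >~ 10^11 GeV on the inverse lattice spacing, translated
# into an upper limit on the lattice spacing in metres via hbar*c.

HBAR_C_GEV_FM = 0.197327      # hbar*c in GeV·femtometres
FM_TO_M = 1e-15               # metres per femtometre

inverse_spacing_gev = 1e11    # b^(-1) >~ 10^11 GeV (the cosmic-ray bound)
b_fm = HBAR_C_GEV_FM / inverse_spacing_gev     # lattice spacing in fm
print(f"b <~ {b_fm:.2e} fm = {b_fm * FM_TO_M:.2e} m")
# -> roughly 2e-12 fm, i.e. about 2e-27 metres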

read the paper: http://arxiv.org/pdf/1210.1847v2.pdf

Sunday, February 15, 2015

Why Running Simulations May Mean the End is Near





By Phil Torres

People have for some time speculated about the possibility that we’re living inside a computer simulation. But the 2003 publication of Nick Bostrom’s “Are You Living In a Computer Simulation?” brought a new level of sophistication to the topic. Bostrom’s argument is that one (or more) of the following disjuncts is true: (i) our species will go extinct before reaching an advanced posthuman stage; (ii) our species will reach a posthuman stage but decide not, for whatever reasons, to run a large number of simulations; or (iii) we are almost certainly in a simulation.

Defeaters of this argument include the possibility that present trends in technological development are non-projectable into the future, and that the philosophical theory of “functionalism” is false. In the absence of these defeaters, though, the argument appears sound.

The claim that at least one of these three possibilities holds is known as the simulation argument. The simulation hypothesis, on the other hand, is the claim that the third disjunct is true. Another way to put this disjunct goes as follows: if we run large numbers of simulations in the future, we should assume that we ourselves are simulants in a simulation – that we are mere strings of 1s and 0s being manipulated by a massively powerful algorithm on a supercomputer somewhere in the universe one level above ours. Simulating universes counts as evidence for us being in one.

The reasoning is no doubt familiar to most readers. We can put it like this: imagine we’re running lots of simulations (of an “ancestral” variety) right now. Since minds are functional rather than material kinds (according to functionalism), the beings inside these simulations are no less conscious than we are. Computers too are functional kinds, which means that there may be further simulations running on simulated computers within these simulated worlds. So the ratio of “real” to simulated minds will end up being hugely skewed towards the latter.

Now imagine that you randomly select any individual from any world, real or simulated. Upon picking a person out you ask: “Is he or she a simulant?” In virtually every case, the individual selected will be a simulant. Repeating this over and over again, you eventually happen to select yourself. You ask the same question, but how should you answer? According to a “bland” version of the indifference principle, you should answer the same way you answered in every other case: “The person selected – in this case me – is a simulant (or almost certainly so, statistically speaking).”
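The counting behind this answer is easy to make explicit. In the sketch below the population sizes are invented for illustration; the argument only needs simulated minds to vastly outnumber real ones:

# Toy version of the indifference-principle count. The population sizes
# are made up; the argument only requires that simulated minds vastly
# outnumber real ones.

REAL_MINDS = 10_000_000_000          # minds in the one "real" world
SIMULATIONS = 1_000                  # assumed number of ancestor simulations
MINDS_PER_SIM = 10_000_000_000       # minds per simulated world

total = REAL_MINDS + SIMULATIONS * MINDS_PER_SIM
p_simulant = SIMULATIONS * MINDS_PER_SIM / total
print(f"P(randomly chosen mind is a simulant) = {p_simulant:.4%}")
# With these numbers: ~99.9%. Sampling yourself is just one more draw.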

An interesting thing follows from this, which is only briefly explored in Bostrom’s original paper. (Others have discussed it for sure, but some of the implications appear not to be fully examined.) Imagine you keep selecting individuals, and eventually pick someone in the universe one level above ours. Is this person a simulant? Again, the most probable answer is “Yes.”

The same applies to the simulators of his or her world, and the simulators of their world as well, and so on. In other words, the central line of reasoning of Bostrom’s simulation hypothesis entails that if we run large numbers of simulations in the future, there almost certainly exists a vast hierarchy of nested simulations – universes stacked like matryoshka dolls, one enclosed within another.

Bostrom notes that the cost of running a simulation is inherited upwards in the hierarchy, a point that counts against this “multilevel hypothesis.” But the fact is that if simulations are common in the future, it will be much more likely that any given simulator is a simulant than not.

Not only this but if each simulation spawns a few simulations of their own, there will be far more simulations at the bottom of the hierarchy than the top (where one finds Ultimate Reality). If you had to place a bet, you’d be more likely to lose if you put your money on our world being somewhere at the top rather than the bottom, with loads of simulations stacked above us.
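That bet is easy to quantify. The sketch below assumes, purely for illustration, that every world spawns k = 3 simulations down to a finite depth, and counts the worlds at each level:

# Count worlds at each depth of the hierarchy, assuming (illustratively)
# every world runs K child simulations, down to some finite depth.

K = 3          # simulations spawned per world (assumed)
DEPTH = 6      # levels below Ultimate Reality (assumed)

counts = [K ** d for d in range(DEPTH + 1)]   # worlds at depth d
total = sum(counts)
for d, c in enumerate(counts):
    print(f"depth {d}: {c:4d} worlds ({c / total:.1%} of all worlds)")
# Most worlds sit at the deepest levels, so a randomly chosen world
# (ours included) is probably near the bottom of the stack.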

If correct, this has significant implications for existential risks. Risks of an eschatological nature trickle downwards through the hierarchy in a cumulative manner. Many futurists have speculated about what we can do to keep our simulation from getting shut down: maybe we should fight more wars over religion to keep our simulators interested in us, or refrain from discussing the simulation hypothesis too much, lest it affect our behavior (as the Hawthorne effect predicts it will).

But what’s to keep our simulators’ simulation from being terminated? Or the simulation of their simulators? Etc. The termination of even a single simulation above ours means the termination of us: a kind of death by transitivity. And the more simulations above, the greater the riskiness of living below.

(Note: it might not even take a simulation above us getting shut down to terminate our cosmos. Maybe the civilization in a simulation five levels above ours plunges into an existential war. The building in which the computer is housed gets bombed, thus shutting down all the simulations within simulations being run on it. Or maybe our species runs large numbers of simulations in the future but then kicks the bucket in a large-scale nanotech accident. The simulations being run then get the boot.)

In sum, the simulation hypothesis doesn’t just suggest that we’re in a simulation, it suggests that there exists a vast stack of nested simulations. Both conclusions follow from the same line of reasoning. Furthermore, since the bottom of the hierarchy will tend to contain more simulations than the top (if, for example, each simulation runs a few simulations of its own the number will grow exponentially as you move down the hierarchy), it’s more likely that we’re somewhere near the bottom than the top.

This is worrisome. Being at the bottom is extremely risky, since risk is inherited downwards. More simulations above us means more opportunity for an existential catastrophe. It follows that running simulations in the future implies our existential predicament may be far more precarious than we’d otherwise think. Option (iii) implies that the outcome of (i) – extinction – may be just around the corner.

See Nick Bostrom's Simulation Argument here:

You are a Simulation & Physics Can Prove It: George Smoot at TEDxSalford




See also:  What if our reality were a computer simulation: Edeline D'Souza at TEDxYouth@Winchester