By Veronique Greenwood
Julian Jaynes was living out of a couple of suitcases in a Princeton dorm in the early 1970s. He must have been an odd sight there among the undergraduates, some of whom knew him as a lecturer who taught psychology, holding forth in a deep baritone voice. He was in his early 50s, a fairly heavy drinker, untenured, and apparently uninterested in tenure. His position was marginal. “I don’t think the university was paying him on a regular basis,” recalls Roy Baumeister, then a student at Princeton and today a professor of psychology at Florida State University. But among the youthful inhabitants of the dorm, Jaynes was working on his masterpiece, and had been for years.
From the age of 6, Jaynes had been transfixed by the singularity of conscious experience. Gazing at a yellow forsythia flower, he’d wondered how he could be sure that others saw the same yellow as he did. As a young man, serving three years in a Pennsylvania prison for declining to support the war effort, he watched a worm in the grass of the prison yard one spring, wondering what separated the unthinking earth from the worm and the worm from himself. It was the kind of question that dogged him for the rest of his life, and the book he was working on would grip a generation beginning to ask themselves similar questions.
The Origin of Consciousness in the Breakdown of the Bicameral Mind, when it finally came out in 1976, did not look like a best-seller. But sell it did. It was reviewed in science magazines and psychology journals, Time, The New York Times, and the Los Angeles Times. It was nominated for a National Book Award in 1978. New editions continued to come out, as Jaynes went on the lecture circuit. Jaynes died of a stroke in 1997; his book lived on. In 2000, another new edition hit the shelves. It continues to sell today.
In the beginning of the book, Jaynes asks, “This consciousness that is myself of selves, that is everything, and yet nothing at all—what is it? And where did it come from? And why?” Jaynes answers by unfurling a version of history in which humans were not fully conscious until about 3,000 years ago, instead relying on a two-part, or bicameral, mind, with one half speaking to the other in the voice of the gods with guidance whenever a difficult situation presented itself. The bicameral mind eventually collapsed as human societies became more complex, and our forebears awoke with modern self-awareness, complete with an internal narrative, which Jaynes believes has its roots in language.
It’s a remarkable thesis that doesn’t fit well with contemporary thought about how consciousness works. The idea that the ancient Greeks were not self-aware raises quite a few eyebrows. By giving consciousness a cultural origin, says Christof Koch, chief scientific officer at the Allen Institute for Brain Science, “Jaynes disavows consciousness as a biological phenomenon.”
But Koch and other neuroscientists and philosophers admit Jaynes’ wild book has a power all its own. “He was an old-fashioned amateur scholar of considerable depth and tremendous ambition, who followed where his curiosity led him,” says philosopher Daniel Dennett. The kind of search that Jaynes was on—a quest to describe and account for an inner voice, an inner world we seem to inhabit—continues to resonate. The study of consciousness is on the rise in neuroscience labs around the world, but the science isn’t yet close to capturing subjective experience. That’s something Jaynes did beautifully, opening a door on what it feels like to be alive, and be aware of it.
Jaynes was the son of a Unitarian minister in West Newton, Massachusetts. Though his father died when Jaynes was 2 years old, his voice lived on in 48 volumes of his sermons, which Jaynes seems to have spent a great deal of time with as he grew up. In college, he experimented with philosophy and literature but decided that psychology, with its pursuit of real data about the physical world, was where he should seek answers to his questions. He headed to graduate school in 1941, but shortly thereafter, the United States joined World War II. Jaynes, a conscientious objector, was assigned to a civilian war effort camp. He soon wrote a letter to the U.S. Attorney General announcing that he was leaving, finding the camp’s goal incompatible with his principles: “Can we work within the logic of an evil system for its destruction? Jesus did not think so ... Nor do I.” He was sent to prison, where he had plenty of time to reflect on the problem of consciousness. “Jaynes was a man of principle, some might say impulsively or recklessly so,” a former student and a neighbor recalled. “He seemed to draw energy from jousting windmills.”
Jaynes emerged after three years, convinced that animal experiments could help him understand how consciousness first evolved, and spent the next three years in graduate school at Yale University. For a while, he believed that if a creature could learn from experience, it was having an experience, implying consciousness. He herded single paramecia through a maze carved in wax on Bakelite, shocking them if they turned the wrong way. “I moved on to species with synaptic nervous systems, flatworms, earthworms, fish, and reptiles, which could indeed learn, all on the naive assumption that I was chronicling the grand evolution of consciousness,” he recounts in his book. “Ridiculous! It was, I fear, several years before I realized that this assumption makes no sense at all.” Many creatures could be trained, but what they did was not introspection. And that was what tormented Jaynes.
Meanwhile, he performed more traditional research on the maternal behavior of animals under his advisor, Frank Beach. It was a difficult time to be interested in consciousness. One of the dominant psychological theories was behaviorism, which explored the external responses of humans and animals to stimuli. Conditioning with electric shocks was in; pondering the intangible world of thoughts was out, and for understandable reasons—behaviorism was a reaction to earlier, less rigorous trends in psychology. But for much of Jaynes’ career, inner experience was beyond the pale. In some parts of this community, to say you studied consciousness was to confess an interest in the occult.
In 1949, Jaynes left without receiving his Ph.D., apparently having refused to submit his dissertation. It’s not clear exactly why—some suggest his committee wanted revisions he would not make, some that he was irked by the hierarchical structure of academia, some that he simply was fed up enough to walk. One story he told was that he didn’t want to pay the $25 submission fee. (In 1977, as his book was selling, Jaynes completed his Ph.D. at Yale.) But it does seem clear that he was frustrated by his lack of progress. He later wrote that a psychology based on rats in mazes rather than the human mind was “bad poetry disguised as science.”
It was the beginning of an odd peregrination. In the fall of 1949, he moved to England and became a playwright and actor, and for the next 15 years, he ricocheted back and forth across the ocean, alternating between plays and adjunct teaching, eventually landing at Princeton University in 1964. All the while, he had been reading widely and pondering the question of what consciousness was and how it could have arisen. By 1969, he was thinking about a work that would describe the origin of consciousness as a fundamentally cultural change, rather than the evolved one he had searched for. It was to be a grand synthesis of science, archaeology, anthropology, and literature, drawing on material gathered during the past couple decades of his life. He believed he’d finally heard something snap into place.
The book sets its sights high from the very first words. “O, what a world of unseen visions and heard silences, this insubstantial country of the mind!” Jaynes begins. “A secret theater of speechless monologue and prevenient counsel, an invisible mansion of all moods, musings, and mysteries, an infinite resort of disappointments and discoveries.”
To explore the origins of this inner country, Jaynes first presents a masterful précis of what consciousness is not. It is not an innate property of matter. It is not merely the process of learning. It is not, strangely enough, required for a number of rather complex processes. Conscious focus is required to learn to put together puzzles or execute a tennis serve or even play the piano. But after a skill is mastered, it recedes below the horizon into the fuzzy world of the unconscious. Thinking about it makes it harder to do. As Jaynes saw it, a great deal of what is happening to you right now does not seem to be part of your consciousness until your attention is drawn to it. Could you feel the chair pressing against your back a moment ago? Or do you only feel it now, now that you have asked yourself that question?
Consciousness, Jaynes tells readers, in a passage that can be seen as a challenge to future students of philosophy and cognitive science, “is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of.” His illustration of his point is quite wonderful. “It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.”
Perhaps most striking to Jaynes, though, is that knowledge and even creative epiphanies appear to us without our control. You can tell which water glass is the heavier of a pair without any conscious thought—you just know, once you pick them up. And in the case of problem-solving, creative or otherwise, we give our minds the information we need to work through, but we are helpless to force an answer. Instead it comes to us later, in the shower or on a walk. Jaynes told a neighbor that his theory finally gelled while he was watching ice moving on the St. John River. Something that we are not aware of does the work.
The picture Jaynes paints is that consciousness is only a very thin rime of ice atop a sea of habit, instinct, or some other process that is capable of taking care of much more than we tend to give it credit for. “If our reasonings have been correct,” he writes, “it is perfectly possible that there could have existed a race of men who spoke, judged, reasoned, solved problems, indeed did most of the things that we do, but were not conscious at all.”
Jaynes believes that language needed to exist before what he has defined as consciousness was possible. So he decides to read early texts, including The Iliad and The Odyssey, to look for signs of people who aren’t capable of introspection—people who are all sea, no rime. And he believes he sees that in The Iliad. He writes that the characters in The Iliad do not look inward, and they take no independent initiative. They only do what is suggested by the gods. When something needs to happen, a god appears and speaks. Without these voices, the heroes would stand frozen on the beaches of Troy, like puppets.
Speech was already known to be localized in the left hemisphere, instead of spread out over both hemispheres. Jaynes suggests that the right hemisphere’s lack of language capacity is because it used to be used for something else—specifically, it was the source of admonitory messages funneled to the speech centers on the left side of the brain. These manifested themselves as hallucinations that helped guide humans through situations that required complex responses—decisions of statecraft, for instance, or whether to go on a risky journey.
The combination of instinct and voices—that is, the bicameral mind—would have allowed humans to manage for quite some time, as long as their societies were rigidly hierarchical, Jaynes writes. But about 3,000 years ago, stress from overpopulation, natural disasters, and wars overwhelmed the voices’ rather limited capabilities. At that point, in the breakdown of the bicameral mind, bits and pieces of the conscious mind would have come to awareness, as the voices mostly died away. That led to a more flexible, though more existentially daunting, way of coping with the decisions of everyday life—one better suited to the chaos that ensued when the gods went silent. By The Odyssey, the characters are capable of something like interior thought, he says. The modern mind, with its internal narrative and longing for direction from a higher power, appears.
The rest of the book—400 pages—provides what Jaynes sees as evidence of this bicamerality and its breakdown around the world, in the Old Testament, Maya stone carvings, Sumerian writings. He cites a carving of an Assyrian king kneeling before a god’s empty throne, circa 1230 B.C. Frequent, successive migrations around the same time in what is now Greece, he takes to be a tumult caused by the breakdown. And Jaynes reflects on how this transition might be reverberating today. “We, at the end of the second millennium A.D., are still in a sense deep in this transition to a new mentality. And all about us lie the remnants of our recent bicameral past,” he writes, in awe of the reach of this idea, and seized with the pathos of the situation. “Our kings, presidents, judges, and officers begin their tenures with oaths to the now-silent deities, taken upon the writings of those who have last heard them.”
It’s a sweeping and profoundly odd book. But The Origin of Consciousness in the Breakdown of the Bicameral Mind was enormously appealing. Part of it might have been that many readers had never thought about just what consciousness was before. Perhaps this was the first time many people reached out, touched their certainty of self, and found it was not what they expected. Jaynes’ book also struck in a particular era when such jolts were perhaps uniquely potent. In the 1970s, many people were growing interested in questions of consciousness. Baumeister, who admires Jaynes, and who read the book in galley form before it was published, says Jaynes tapped into the “spiritual stage” of the ascendant New Age movement.
And the language—what language! It has a Nabokovian richness. There is an elegance, power, and believability to his prose. It sounds prophetic. It feels true. And that has incredible weight. Truth and beauty intertwine in ways humans have trouble picking apart. Physicist Ben Lillie, who runs the Storycollider storytelling series, remembers when he discovered Jaynes’ book. “I was part of this group that hung out in the newspaper and yearbook offices and talked about intellectual stuff and wore a lot of black,” Lillie says. “Somebody read it. I don’t remember who was first, it wasn’t me. All of a sudden we thought, that sounds great, and we were all reading it. You got to feel like a rebel because it was going against common wisdom.”
It’s easy to find cracks in the logic: Just for starters, there are moments in The Iliad when the characters introspect, though Jaynes decides they are later additions or mistranslations. But those cracks don’t necessarily diminish the book’s power. To readers like Paul Hains, the co-founder of Aeon, an online science and philosophy magazine, Jaynes’ central thesis is of secondary importance to the book’s appeal. “What captured me was his approach and style and the inspired and nostalgic mood of the text; not so much the specifics of his argument, intriguing though they were,” Hains writes. “Jaynes was prepared to explore the frontier of consciousness on its own terms, without explaining away its mysterious qualities.”
Meanwhile, over the last four decades, the winds have shifted, as often happens in science as researchers pursue the best questions to ask. Enormous projects, like those of the Allen Institute for Brain Science and the Brain-Mind Institute of the Swiss Federal Institute of Technology, seek to understand the structure and function of the brain in order to answer many questions, including what consciousness is in the brain and how it is generated, right down to the neurons. A whole field, behavioral economics, has sprung up to describe and use the ways in which we are unconscious of what we do—a major theme in Jaynes’ writing—and the insights netted its founders, Daniel Kahneman and Vernon L. Smith, the Nobel Prize.
Eric Schwitzgebel, a professor of philosophy at University of California, Riverside, has conducted experiments to investigate how aware we are of things we are not focused on, which echo Jaynes’ view that consciousness is essentially awareness. “It’s not unreasonable to have a view that the only things you’re conscious of are things you are attending to right now,” Schwitzgebel says. “But it’s also reasonable to say that there’s a lot going on in the background and periphery. Behind the focus, you’re having all this experience.” Schwitzgebel says the questions that drove Jaynes are indeed hot topics in psychology and neuroscience. But at the same time, Jaynes’ book remains on the scientific fringe. “It would still be pretty far outside of the mainstream to say that ancient Greeks didn’t have consciousness,” he says.
Dennett, who has called The Origin of Consciousness in the Breakdown of the Bicameral Mind a “marvelous, wacky book,” likes to give Jaynes the benefit of the doubt. “There were a lot of really good ideas lurking among the completely wild junk,” he says. Particularly, he thinks Jaynes’ insistence on a difference between what goes on in the minds of animals and the minds of humans, and the idea that the difference has its origins in language, is deeply compelling.
“[This] is a view I was on the edge of myself, and Julian kind of pushed me over the top,” Dennett says. “There is such a difference between the consciousness of a chimpanzee and human consciousness that it requires a special explanation, an explanation that heavily invokes the human distinction of natural language,” though that’s far from all of it, he notes. “It’s an eccentric position,” he admits wryly. “I have not managed to sway the mainstream over to this.”
It’s a credit to Jaynes’ wild ideas that, every now and then, they are mentioned by neuroscientists who study consciousness. In his 2010 book, Self Comes to Mind, Antonio Damasio, a professor of neuroscience, and the director of the Brain and Creativity Institute at the University of Southern California, sympathizes with Jaynes’ idea that something happened in the human mind in the relatively recent past. “As knowledge accumulated about humans and about the universe, continued reflection could well have altered the structure of the autobiographical self and led to a closer stitching together of relatively disparate aspects of mind processing; coordination of brain activity, driven first by value and then by reason, was working to our advantage,” he writes. But that’s a relatively rare endorsement. A more common response is the one given by neurophilosopher Patricia S. Churchland, an emerita professor at the University of California, San Diego. “It is fanciful,” she says of Jaynes’ book. “I don’t think that it added anything of substance to our understanding of the nature of consciousness and how consciousness emerges from brain activity.”
Jaynes himself saw his theory as a scientific contribution, and was disappointed with the research community’s response. Although he enjoyed the public’s interest in his work, tilting at these particular windmills was frustrating even for an inveterate contrarian. Jaynes’ drinking grew heavier. A second book, which was to have taken the ideas further, was never completed.
And so, his legacy, odd as it is, lives on. Over the years, Dennett has sometimes mentioned in his talks that he thought Jaynes was on to something. Afterward—after the crowd had cleared out, after the public discussion was over—almost every time there would be someone hanging back. “I can come out of the closet now,” he or she would say. “I think Jaynes is wonderful too.”
Marcel Kuijsten is an IT professional who runs a group called the Julian Jaynes Society, whose membership he estimates at about 500 or 600 enthusiasts from around the world. The group has an online members’ forum where they discuss Jaynes’ theory, and in 2013, for the first time, they hosted a conference, meeting in West Virginia for two days of talks. “It was an incredible experience,” he says.
Kuijsten feels that many people who come down on Jaynes haven’t gone to the trouble to understand the argument, which he admits is hard to get one’s mind around. “They come into it with a really ingrained, pre-conceived notion of what consciousness means to them,” he says, “And maybe they just read the back of the book.” But he’s playing the long game. “I’m not here to change anybody’s mind. It’s a total waste of time. I want to provide the best quality information, and provide good resources for people who’ve read the book and want to have a discussion.”
To that end, Kuijsten and the Society have released books of Jaynes’ writings and of new essays about him and his work. Whenever discoveries that relate to the issues Jaynes raised are published, Kuijsten notes them on the site. In 2009 he highlighted brain-imaging studies suggesting that auditory hallucinations begin with activity in the right side of the brain, followed by activation on the left, which sounds similar to Jaynes’ mechanism for the bicameral mind. He hopes that as time goes on, people will revisit some of Jaynes’ ideas in light of new science.
Ultimately, the broader questions that Jaynes’ book raised are the same ones that continue to vex neuroscientists and lay people. When and why did we start having this internal narrative? How much of our day-to-day experience occurs unconsciously? What is the line between a conscious and unconscious process? These questions are still open. Perhaps Jaynes’ strange hypotheses will never play a role in answering them. But many people—readers, scientists, and philosophers alike—are grateful he tried.
Wednesday, November 15, 2017
Consciousness Began When the Gods Stopped Speaking
Tuesday, October 31, 2017
Scientists Have an Experiment to See If the Human Mind Is Bound to the Physical World
via Futurism
Perhaps one of the most intriguing and interesting phenomena in quantum physics is what Einstein referred to as a “spooky action at a distance” — also known as quantum entanglement. This quantum effect is behind what makes quantum computers work, as quantum bits (qubits) generally rely on entanglement to process data and information. It’s also the working theory behind the possibility of quantum teleportation.
The long and short of it is this: entangled particles affect one another regardless of distance; a measurement of the state of one instantly influences the state of the other. However, it remains “spooky” because — despite following the laws of quantum physics — entanglement seems to hint at some deeper theory that’s yet to be discovered. A number of physicists have been working to determine this deeper theory, but so far nothing definitive has come out.
As for entanglement itself, a very famous test was developed by physicist John Bell in 1964 to determine whether particles do, in fact, influence one another in this way. Simply put, the Bell test involves a pair of entangled particles: one is sent towards location A and the other to location B. At each of these points, a device measures the state of the particles. The settings in the measuring devices are set at random, so that it’s impossible for A to know the setting of B (and vice versa) at the time of measurement. Historically, the Bell test has supported the spooky theory.
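The statistics behind the Bell test can be sketched in a few lines. The toy calculation below is not from the article or from Hardy’s paper; it simply evaluates the standard CHSH combination of correlations using the quantum-mechanical prediction for an entangled spin pair, and shows that its magnitude exceeds 2 — the ceiling that any local, “non-spooky” hidden-variable theory must respect.

```python
import math

# Quantum-mechanical prediction for the correlation between spin
# measurements at angles a and b on an entangled singlet pair.
def correlation(a: float, b: float) -> float:
    return -math.cos(a - b)

# Standard CHSH measurement angles (radians) that maximize the
# quantum violation of the classical bound.
a1, a2 = 0.0, math.pi / 2              # the two settings at location A
b1, b2 = math.pi / 4, 3 * math.pi / 4  # the two settings at location B

# The CHSH combination S. Any local hidden-variable theory must
# satisfy |S| <= 2; quantum mechanics allows up to 2 * sqrt(2).
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))

print(abs(S))  # ≈ 2.828, exceeding the classical bound of 2
```

Real Bell experiments consistently measure values near this 2.83 figure, which is why they are said to have “supported the spooky theory.”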
Now, Lucien Hardy, a theoretical physicist from the Perimeter Institute in Canada, is suggesting that the measurements between A and B could be controlled by something that may be separate from the material world: the human mind. His idea derives from what French philosopher and mathematician René Descartes called mind-matter duality, “[where] the mind is outside of regular physics and intervenes on the physical world,” as Hardy explained.
To do this, Hardy proposed a version of the Bell test involving 100 humans, each hooked up to EEG headsets that would read their brain activity. These devices would be used to switch the settings on the measuring devices for A and B, set at 100 kilometers apart. “The radical possibility we wish to investigate is that, when humans are used to decide the settings (rather than various types of random number generators), we might then expect to see a violation of Quantum Theory in agreement with the relevant Bell inequality,” Hardy wrote in a paper published online earlier this month.
If the correlation between the measurements doesn’t match previous Bell tests, then there could be a violation of quantum theory that suggests A and B are being controlled by factors outside the realm of standard physics. “[If] you only saw a violation of quantum theory when you had systems that might be regarded as conscious, humans or other animals, that would certainly be exciting. I can’t imagine a more striking experimental result in physics than that,” Hardy said. “We’d want to debate as to what that meant.”
What it could mean is this: that the human mind (consciousness) isn’t made up of the same matter governed by physics. Furthermore, it could suggest that the mind is capable of overcoming physics with free will. This could potentially be the first time scientists gain a firm grasp on the problem of consciousness. “It wouldn’t settle the question, but it would certainly have a strong bearing on the issue of free will,” said Hardy.
More Thinking Allowed
2 Episodes to consider:
1. Non-Duality with Russell Targ
Russell Targ, a laser physicist, cofounded the remote viewing research program at SRI International. He is coauthor of Mind Reach, The Mind Race, Miracles of Mind, The Heart of the Mind, and The End of Suffering. He is author of Limitless Mind and The Reality of ESP: A Physicist's Proof of Psychic Abilities. He is also coeditor of the anthology Mind At Large. Here he argues that, according to the Buddhist logic of Nagarjuna, Aristotle is wrong. A proposition can be both true and false at the same time. Most propositions are neither true nor false. He finds similar thinking in the wave-particle duality of modern physics, as well as in Kurt Gödel's Incompleteness Theorem. The unity of consciousness is also expressed in the findings of remote viewing. He describes the path of Dzogchen in Buddhism and claims that it is the most direct path to enlightenment.
2. Terminal Lucidity with Stafford Betty
Stafford Betty, PhD, is professor of religious studies at California State University at Bakersfield. He is author of Heaven and Hell Unveiled, The Afterlife Unveiled, and When Did You Ever Become Less by Dying? Here he describes an unusual situation that occurs in a small percentage of brain-damaged patients as they approach death. They are able to achieve a state of high lucidity, as if their psyche is able to function independently of their dysfunctional body. He provides several striking examples. This is a condition that has been reported in the literature for over a century, but has only recently been identified as a distinct syndrome.
Sunday, October 8, 2017
The US Army Funded Astral Projection and Hypnosis Research in the 80s
via motherboard
Human consciousness is nothing but an intersection of energy planes that forms a hologram able to travel through spacetime—across the universe, and into the past, present, and future.
I read about this idea in a CIA document about the US Army. Yes, the US Army. The institution that painstakingly crafts an image of commitment to pragmatic and logical objectives. When I was reading through the documents, I was certainly a bit surprised.
According to the declassified CIA documents that I read, the US Army was extremely interested in psychic experimentation. From the late 1970s into the 80s, it even paid for intelligence officers to go on weeklong excursions to an out-of-the-way institute specializing in out-of-body experiences and astral projection.
The documents were declassified as early as 2001, but they caught my eye when they appeared in a /r/conspiracy post earlier this month. Under the psychic experimentation program, which was called "Project Center Lane," Army intelligence officers were interviewed "to determine attitudes about the possible use of psychoenergetic phenomena in the intelligence field," according to the declassified CIA document from 1984.
As a huge fan of The X-Files, I couldn't resist reading as much as I could about Project Center Lane, which looks like it could have appeared on the show.
In June 1983, Army Commander Wayne M. McDonnell was asked to give his commander an assessment of the psychic services provided by the Monroe Institute, a non-profit organization focused on treatments designed to expand a person's consciousness. The Monroe Institute is known for its patented "Hemi-Sync" technology, which uses audio to synchronize the brainwaves on the left and right sides of the brain. According to the organization's website, this makes the brain vulnerable to hypnosis. McDonnell himself had completed the seven-day psychic program the month prior at the institute, which is lodged in the middle of Virginia's Blue Ridge Mountains in a town called Faber, about 30 miles east of Charlottesville.
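Hemi-Sync itself is patented and its exact processing is proprietary, but the generic binaural-beat idea behind audio of this kind is easy to sketch: play a slightly different frequency into each ear, and the listener perceives a "beat" at the difference frequency. A minimal Python sketch using only the standard library; the 200 Hz carrier and 4 Hz beat are illustrative choices, not the institute's actual parameters:

```python
import math
import struct
import wave

RATE = 44100        # samples per second
DURATION = 2.0      # seconds (short demo)
CARRIER = 200.0     # Hz in the left ear (illustrative)
BEAT = 4.0          # perceived beat frequency; right ear plays 204 Hz

def binaural_frames(carrier=CARRIER, beat=BEAT, rate=RATE, duration=DURATION):
    """Interleaved 16-bit stereo samples: left ear at `carrier` Hz,
    right ear at `carrier + beat` Hz. The beat exists only in perception,
    which is why the effect requires headphones."""
    n = int(rate * duration)
    frames = bytearray()
    for i in range(n):
        t = i / rate
        left = int(32767 * 0.5 * math.sin(2 * math.pi * carrier * t))
        right = int(32767 * 0.5 * math.sin(2 * math.pi * (carrier + beat) * t))
        frames += struct.pack('<hh', left, right)  # little-endian stereo pair
    return bytes(frames)

def write_wav(path="binaural.wav"):
    """Write the demo tone to a stereo WAV file."""
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)   # stereo
        w.setsampwidth(2)   # 16-bit
        w.setframerate(RATE)
        w.writeframes(binaural_frames())
```

Whether entraining brainwaves this way does anything beyond relaxation is exactly the contested claim at issue; the code only shows what the audio itself is.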
McDonnell's assessment, collected from his experience at the secluded institute, formed the basis of a 29-page Army document that featured detailed explanations of hypnosis, holograms, and out-of-body experiences. The document placed these phenomena in the context of larger ideas of consciousness, energy, space-time, quantum subatomic particles, and so-called astral projection, a practice that aims to transport consciousness around a metaphysical plane—a central idea in McDonnell's assessment.
McDonnell cited a metaphor penned by Monroe Institute employee Melissa Jager in order to illustrate the nature of hypnosis. The metaphor says that a normal state of consciousness is like a lamp, which emits light in a "chaotic, incoherent way." However, a hypnotized state of consciousness is said to be like a laser beam, whose thoughts and energy are focused like a "disciplined stream" of light.
"Intuitional insights of not only personal but of a practical and professional nature would seem to be within bounds of reasonable expectations," McDonnell wrote, in reference to parapsychology.
In other words, Commander McDonnell concluded, hypnosis and astral projection are worth the Army's while.
Officers accepted into Project Center Lane underwent hypnosis and practiced reaching the so-called "astral plane," with the goal of learning foreign languages and undergoing what the documents only refer to as "habit control training."
According to one of the declassified Army files, 251 Army intelligence candidates were selected for the first year of experimentation. Of those candidates, 117 were interviewed under the impression that they were taking a survey. The document gives no specifics about the survey itself, but does indicate that the interviewer asked fairly direct questions about "psychoenergetics."
"Individuals who had objections to the military use of psychoenergetics were not considered for the final selection," the document reads. "Additionally, individuals who displayed an unreasonable enthusiasm for psychoenergetics, occult fanatics and mystical zealots were not considered for final selection."
Between 30 and 35 of the original 251 candidates were said to have "desired" traits, such as open-mindedness and intelligence, that made them suited for the program.
Intelligence officers who were accepted to the program were sent to the Monroe Institute. Officers would then listen to the "Hemi-Sync" audio. After this, one of the institute's research associates would guide intelligence officers into the astral plane, a psychic space in which the institute said that the officers supposedly could heighten their sensory experiences, heal their bodies, travel into the past or future, or even solve real-world dilemmas without the restraints of a physical body.
Another technique known as "remote viewing" was also used on government employees of an unknown agency, according to a declassified document from 1982. The document doesn't specifically mention the Army or the Monroe Institute, but it precisely follows the description of remote viewing explained in detail in a 1983 document that explicitly mentions the facility.
The goal of the psychic session was to make the subject remotely view Mars in the year 1 million B.C. According to the transcript, an interviewer read coordinates and verbal cues to a subject, who claimed to see dust storms, alien structures, and even an ancient alien race.
"Very tall, again, very large people," the unidentified subject said, according to the transcript. "But they're thin. They look thin because of their height. And they dress like—oh, hell—it's like a real light silk. But it's not flowing type of clothing. It's cut to fit. They're ancient people. They're dying. It's past their time or age. They're very philosophic about it. They're looking for a way to survive and they just can't."
I reached out to the US Army Intelligence and Security Command, the specific intelligence unit that had a relationship with the Monroe Institute, for any information about its use of Hemi-Sync technology. Ron Young, an INSCOM representative, told me in an email that after checking with the unit's division at Fort Meade, Maryland, "no records are on file for the technology you request information on."
Young pointed out that the US Army Operational Group (AOG), featured in the heading of a 1983 document, was disbanded in 1995, after which all files relevant to the Monroe Institute were transferred to whichever agency absorbed the AOG. (The files were later released under a Freedom of Information Act request.) Young couldn't tell me which agency absorbed the AOG.
I contacted the CIA about which agency absorbed the AOG, or how the CIA came into the possession of documents related to the Monroe Institute. The agency didn't have either piece of information.
I reached out to The Monroe Institute repeatedly for comment, but I was always referred to Executive Director Nancy McMoneagle, who was never available to speak with me. However, its website openly speaks about its previous relationship with the US Army.
Ray Waldkoetter, who is described as a "personnel management analyst," wrote an article for the 1991 Hemi-Sync® Journal, available on the Monroe Institute's website, reviewing the Army's uses of Hemi-Sync technology. He wrote that the Army used Hemi-Sync for stress reduction, psychological counseling, and enhanced learning among personnel at various levels, as well as in training for people seeking officer-level positions.
"Several intensive experiences in Army military training programs have demonstrated positive results using the Hemi-Sync technology," Waldkoetter wrote.
*
This is hardly the first time the Army and the US military at large have experimented with paranormal phenomena. The branch has been known to dabble in witchcraft, psychic visions, excessive LSD, and hypnosis as an interrogation method. In 1972, the government even attempted a psychic probe of Jupiter just months before Pioneer 10 retrieved the first scientific data and photographs of the gaseous planet. While we don't know which government agency conducted this probe, the document was eventually declassified by the NSA.
It's unclear when exactly the Army's collaboration with the Monroe Institute came to an end, but in a Wall Street Journal article from 1994 (paywall), former INSCOM director Albert Stubblebine confirmed that the Army indeed sent intelligence officers to the Institute during the 1980s. The CIA report outlining some of these techniques is also dated January 1984.
If you want to try your hand at projecting your consciousness to a higher plane, The Monroe Institute is still operating and selling its Hemi-Sync program today. Either way, you can rest assured that the wildest conspiracy theory you can imagine is actually true.
Labels: astral projection, CIA, consciousness, hypnosis research
Sunday, October 2, 2016
James Gleick on How Our Cultural Fascination with Time Travel Illuminates Memory, the Nature of Time, and the Central Mystery of Human Consciousness
via Brainpickings:
“Every moment alters what came before. We reach across layers of time for the memories of our memories.”
“Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom,” Bertrand Russell wrote in 1931 as he made his beautiful case for “a largeness of contemplation” in contemplating the nature of time. “Shard by shard we are released from the tyranny of so-called time,” Patti Smith wrote nearly a century later in her magnificent meditation on time and transformation.
As a child in Bulgaria, never having heard of either Russell or Smith, one aspect of time perplexed me to the point of obsession: In my history textbooks, dates relating to significant events or historical figures of Slavic origin were listed in pairs — each had a “new style” date and an “old style” date, always thirteen days apart. So, for instance, Hristo Botev — the great revolutionary who led Bulgaria’s liberation from a five-century Ottoman slavery — was born on January 6 of 1848 according to the new style and on Christmas Day of 1847 according to the old style.
I would later learn that this was the product of the League of Nations, formed after WWI. Its Committee on Intellectual Cooperation, headed by Henri Bergson — the great French philosopher who famously opposed Einstein in a debate that changed our modern conception of time — was tasked with eradicating the Julian calendar that many countries, including Bulgaria and Russia, still used and replacing it with the Gregorian calendar as the new global standard.
This is my earliest memory of confronting the nature of time as both an abstraction humans could make with a committee and a concrete anchor of existence mooring our births, our deaths, and our entire sense of history. But most perplexing of all was the question of what happened to the people who lived through the transition — what happened to the thirteen very real days between the two fictions of the calendars. If reading history wasn’t time-travelish enough, reading about real people forced to time-travel in their real lives by an international decree was both utterly fascinating and utterly confusing. Did the person actually exist between their old-style date of birth and the new-style one — were they alive or not-yet-born? (Even today, the Wikipedia biographies of Slavic persons from that era list both old-style and new-style dates of birth and death.) The person, of course, most definitely did exist between the day they were born and the day they died, whatever dates posterity — our living present, their unlived future — may impose on those days, now far in the past.
That thirteen-day lacuna between being and non-being was, apparently, the price of globalization. But it was also a suddenly shrill echo of an eternal question: If time bookends our existence, and if it is so easily perturbed by a calendrical convention, is it a mere abstraction?
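The old-style/new-style offset has a mechanical explanation: the Gregorian reform drops the February 29 of century years not divisible by 400, so the Julian calendar falls further behind by roughly three days every four centuries. A small sketch of that century rule (valid for dates after each century year's divergent leap day):

```python
def julian_gregorian_gap(year):
    """Days by which the Julian calendar trails the Gregorian for dates
    in the century containing `year`.

    Each century year the Gregorian calendar skips a Julian leap day
    (unless the year is divisible by 400), widening the gap by one.
    """
    century = year // 100
    return century - century // 4 - 2
```

By this rule the gap was 10 days at the 1582 reform, 12 days through the 1800s, and 13 days from March 1900 onward, which is the thirteen-day offset Bulgaria adopted when it switched calendars in 1916.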
Time is the two-headed Baskerville hound chasing us as we run for our lives — and from our lives — driven by the twain terrors of tedium and urgency. Toward what, we dare not think. Meanwhile, our information-input timelines are called “feeds.” We feast on time as time feasts on us. Time and information, if they are to be disentwined at all, dictate our lives. Is it any wonder, then, that we would rebel by trying to subjugate them in return, whether by formalizing them with our calendars or by fleeing from them with our time travel fantasies?
How those time travel fantasies originated, what technological and cultural developments fomented this distinctly modern impulse of the collective imagination, and how it illuminates our greatest anxieties is what science historian and writer extraordinaire James Gleick explores in Time Travel: A History (public library) — a grand thought experiment, using physics and philosophy as the active agents, and literature as the catalyst. Embedded in the book is a bibliography for the Babel of time — a most exquisitely annotated compendium of the body of time literature. What emerges is an inquiry, the most elegant since Borges, into why we think about time, why its directionality troubles us so, and what asking these questions at all reveals about the deepest mysteries of human consciousness and about what Gleick so beguilingly calls “the fast-expanding tapestry of interwoven ideas and facts that we call our culture.”
Gleick, who examined the origin of our modern anxiety about time with remarkable prescience nearly two decades ago, traces the invention of the notion of time travel to H.G. Wells’s 1895 masterpiece The Time Machine. Although Wells — like Gleick, like any reputable physicist — knew that time travel was a scientific impossibility, he created an aesthetic of thought which never previously existed and which has since shaped the modern consciousness. Gleick argues that the art this aesthetic produced — an entire canon of time travel literature and film — not only permeated popular culture but even influenced some of the greatest scientific minds of the past century, including Stephen Hawking, who once cleverly hosted a party for time travelers and, when no one showed up, considered the impossibility of time travel proven, and John Archibald Wheeler, who popularized the term “black hole” and coined “wormhole,” both key tropes of time travel literature.
Gleick considers how a scientific impossibility can become such fertile ground for the artistic imagination:
"Why do we need time travel, when we already travel through space so far and fast? For history. For mystery. For nostalgia. For hope. To examine our potential and explore our memories. To counter regret for the life we lived, the only life, one dimension, beginning to end.
Wells’s Time Machine revealed a turning in the road, an alteration in the human relationship with time. New technologies and ideas reinforced one another: the electric telegraph, the steam railroad, the earth science of Lyell and the life science of Darwin, the rise of archeology out of antiquarianism, and the perfection of clocks. When the nineteenth century turned to the twentieth, scientists and philosophers were primed to understand time in a new way. And so were we all. Time travel bloomed in the culture, its loops and twists and paradoxes."
Wells imagined time travel in an era where so much of what we take for granted was either a disorienting novelty or yet to be invented — bicycles, elevators, and balloons were new, and even the earliest visions of anything resembling the internet were half a century away. Gleick considers the direction of Wells’s imagination:
"The object of Wells’s interest, bordering on obsession, was the future — that shadowy, inaccessible place. “So with a kind of madness growing upon me, I flung myself into futurity,” says the Time Traveller. Most people, Wells wrote — “the predominant type, the type of the majority of living people” — never think about the future. Or, if they do, they regard it “as a sort of blank non-existence upon which the advancing present will presently write events.” … The more modern sort of person — “the creative, organizing, or masterful type” — sees the future as our very reason for being: “Things have been, says the legal mind, and so we are here. The creative mind says we are here because things have yet to be.”
Wells wrote his masterpiece shortly before the rise of relativity remodeled our notions of time. There was Einstein, of course. And Kurt Gödel. And Hermann Minkowski, Einstein’s teacher, whose model used four numbers (x, y, z, and t) to denote a “world point” — what we now call spacetime. Gleick writes of his legacy:
"“Mere shadows,” Minkowski said. That was not mere poetry. He meant it almost literally. Our perceived reality is a projection, like the shadows projected by the fire in Plato’s cave. If the world — the absolute world — is a four-dimensional continuum, then all that we perceive at any instant is a slice of the whole. Our sense of time: an illusion. Nothing passes; nothing changes. The universe — the real universe, hidden from our blinkered sight — comprises the totality of these timeless, eternal world lines."
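Minkowski's claim that observers merely slice the same four-dimensional whole differently has a concrete numerical counterpart: a Lorentz boost changes an event's (t, x, y, z) coordinates, but the spacetime interval built from those four numbers is identical in every frame. A minimal illustration in units where c = 1 (the sample event and velocity are arbitrary):

```python
import math

def interval_sq(t, x, y, z):
    """Minkowski interval s^2 = -t^2 + x^2 + y^2 + z^2 (units with c = 1)."""
    return -t**2 + x**2 + y**2 + z**2

def boost_x(t, x, y, z, v):
    """Lorentz boost along x with velocity v (|v| < 1): mixes t and x,
    leaves y and z unchanged."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return (gamma * (t - v * x), gamma * (x - v * t), y, z)

event = (2.0, 1.0, 0.5, 0.0)        # one "world point": (t, x, y, z)
moved = boost_x(*event, v=0.6)      # the same event seen from a moving frame

# The coordinates differ, but the interval does not: observers slicing the
# four-dimensional continuum differently still agree on s^2.
```

This invariance is the precise sense in which the individual time and space coordinates are "mere shadows" of the absolute four-dimensional world.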
But if we were able to conceive of this timeless totality — to integrate it into our conscious experience — the fantasy of time travel wouldn’t scintillate us so. A centerpiece of our temporal dissonance is one particular phenomenon of consciousness, a very palpable human experience: memory. “Perhaps memory is the time traveler’s subject,” Gleick observes. With an eye to Virginia Woolf’s memorable meditation on memory in Orlando, that supreme masterwork of time travel, he writes:
"What is memory, for a time traveler? A conundrum. We say that memory “takes us back.” Virginia Woolf called memory a seamstress “and a capricious one at that.” … “I can’t remember things before they happen,” says Alice, and the Queen retorts, “It’s a poor sort of memory that only works backwards.” Memory both is and is not our past. It is not recorded, as we sometimes imagine; it is made, and continually remade. If the time traveler meets herself, who remembers what, and when?"
The question of memory, of course, is inseparable from the question of identity, for if we live in “permanent present tense,” we are incapable of stringing together the narrative out of which our sense of self arises. This continuity of selfhood, after all, is what makes you and your childhood self the “same” person despite a lifetime of physical and psychological change. Time travel presents some serious paradoxes for memory and therefore for the self. “A person’s identity,” Amin Maalouf wrote of the genes of the soul, “is like a pattern drawn on a tightly stretched parchment. Touch just one part of it, just one allegiance, and the whole person will react, the whole drum will sound.” If we could travel back to our own past and alter even a tiny speck of the pattern, we’d be changing the entire drum — our identity would have a wholly different sound. Gleick writes:
"What is the self? A question for the twentieth century to ponder, from Freud to Hofstadter and Dennett with detours through Lacan, and time travel provides some of the more profound variations on the theme. We have split personalities and alter egos galore. We have learned to doubt whether we are our younger selves, whether we will be the same person when we next look. The literature of time travel … begins to offer a way into questions that might otherwise belong to philosophers. It looks at them viscerally and naïvely — as it were, nakedly."
And so we arrive, at page 99 and no sooner, at the problem of free will. Gleick writes:
"Free will cannot be easily dismissed, because we experience it directly. We make choices. No philosopher has yet sat down in a restaurant and told the waiter, “Just bring me whatever the universe has preordained.” Then again, Einstein said that he could “will” himself to light his pipe without feeling particularly free. He liked to quote Schopenhauer… Man can do what he will, but he cannot will what he wills.
The free will problem was a sleeping giant and, without particularly meaning to, Einstein and Minkowski had prodded it awake. How literally were their followers to take the space-time continuum — the “block universe,” fixed for eternity, with our blinkered three-dimensional consciousnesses moving through it?"
A century later, the question has hardly budged. And yet we live our lives with such urgency and pointedness of intent — perhaps precisely because we are unwilling to relinquish the illusion of free will. Gleick observes:
"Everywhere we look, people are pressing elevator buttons, turning doorknobs, hailing taxicabs, lifting sustenance to their lips, and begging their lovers’ favor. We act as though the future is, if not in our control, not yet settled… We would suffer illusions of free will, because, by happenstance, we tend to know less about the future than about the past."
Happenstance? Memory, self, free will — this Venn diagram of consciousness is indeed encircled by the lines we draw, often artificially, between causality and chance. (“No one’s fated or doomed to love anyone… The accidents happen,” wrote Adrienne Rich.) Gleick writes:
"All the paradoxes are time loops. They all force us to think about causality. Can an effect precede its cause? Of course not. Obviously. By definition.
[…]
But we’re not very good at understanding causes. The first person on record as trying to analyze cause and effect by power of ratiocination was Aristotle, who created layers of complexity that have caused confusion ever after. He distinguished four distinct types of causes, which can be named (making allowances for the impossibility of transmillennial translation) the efficient, the formal, the material, and the final. Some of these are hard for us to recognize as causes. The efficient cause of a sculpture is the sculptor, but the material cause is the marble. Both are needed before the sculpture can exist. The final cause is the purpose for which it is made — its beauty, let’s say… We do well to remember that nothing, when we look closely, has a single unambiguous incontrovertible cause."
Gleick reality-checks the logicians’ causal models of reality:
"If X, then Y means one thing in logic. In the physical world, it means something trickier and always (we should know by now) subject to doubt. In logic, it is rigid. In physics, there is slippage. Chance has a part to play. Accidents can happen. Uncertainty is a principle. The world is more complex than any model.
[…]
The physical laws are a construct, a convenience. They are not coextensive with the universe."
Mistaking the model for what Virginia Woolf called “the thing itself” seems to be a perennial problem of science, and one particularly integral to the perplexity of time:
" William Faulkner said, “The aim of every artist is to arrest motion, which is life, by artificial means and hold it fixed.” Scientists do that, too, and sometimes they forget they are using artificial means.
[…]
You can say the equations of physics make no distinction between past and future, between forward and backward in time. But if you do, you are averting your gaze from the phenomena dearest to our hearts. You leave for another day or another department the puzzles of evolution, memory, consciousness, life itself. Elementary processes may be reversible; complex processes are not. In the world of things, time’s arrow is always flying.
With an eye to Borges’s ideas about time, Gleick returns to the puzzlement of memory, equally not coextensive with the physics of time:
"We create memories or our memories create themselves. Consulting a memory converts it into a memory of a memory. The memories of memories, the thoughts of thoughts, blend into one another until we cannot tease them apart. Memory is recursive and self-referential. Mirrors. Mazes.
The formation of memory as a function of consciousness invites the chief religious opposition to science — a theological avoidance of the free will problem, the intellectually fragile contradictions of which Gleick captures elegantly in discussing the ideas in Isaac Asimov’s novel The End of Eternity:
" Time is a feature of creation, and the creator remains apart from it, transcendent over it. Does that mean that all our mortal time and history is, for God, a mere instant — complete and entire? For God outside of time, God in eternity, time does not pass; events do not occur step by step; cause and effect are meaningless. He is not one-thing-after-another, but all-at-once. His “now” encompasses all time. Creation is a tapestry, or an Einsteinian block universe. Either way, one might believe that God sees it entire. For Him, the story does not have a beginning, middle, and end.
But if you believe in an interventionist god, what does that leave for him to do? A changeless being is hard for us mortals to imagine. Does he act? Does he even think? Without sequential time, thought — a process — is hard to imagine. Consciousness requires time, it seems. It requires being in time. When we think, we seem to think consecutively, one thought leading to another, in timely fashion, forming memories all the while. A god outside of time would not have memories. Omniscience doesn’t require them.
But whatever pitfalls, paradoxes, and perplexities might bedevil our individual memory, they are rendered into even sharper relief in our collective memory — nowhere more so than in the curious human obsession with time capsules, the grandest of which is the Golden Record that sailed into space aboard the Voyager in 1977, a civilizational labor of love dreamt up and rendered real by Carl Sagan and Annie Druyan that was also the record of their own love story.
Gleick considers what this strange millennia-old practice, this “prosthetic memory,” reveals about human nature:
" When people make time capsules, they disregard a vital fact of human history. Over the millennia — slowly at first and then with gathering speed — we have evolved a collective methodology for saving information about our lives and times and transmitting that information into the future. We call it, for short, culture.
First came songs, clay pots, drawings on cave walls. Then tablets and scrolls, paintings and books. Knots in alpaca threads, recording Incan calendar data and tax receipts. These are external memory, extensions of our biological selves. Mental prostheses. Then came repositories for the preservation of these items: libraries, monasteries, museums; also theater troupes and orchestras. They may consider their mission to be entertainment or spiritual practice or the celebration of beauty, but meanwhile they transmit our symbolic memory across the generations. We can recognize these institutions of culture as distributed storage and retrieval systems. The machinery is unreliable — disorganized and discontinuous, prone to failures and omissions. They use code. They require deciphering. Then again, whether made of stone, paper, or silicon, the technology of culture has a durability that the biological originals can only dream of. This is how we tell our descendants who we were. By contrast, the recent smattering of time capsules is an oddball sideshow.
Building on the ideas he examined in his indispensable biography of information, Gleick adds:
" As for knowledge itself, that is our stock in trade. When the Library of Alexandria burned, it was one of a kind. Now there are hundreds of thousands, and they are crammed to overflowing. We have developed a species memory. We leave our marks everywhere.
[…]
When people fill time capsules they are trying to stop the clock — take stock, freeze the now, arrest the incessant head-over-heels stampede into the future. The past appears fixed, but memory, the fact of it, or the process, is always in motion. That applies to our prosthetic global memory as well as the biological version. When the Library of Congress promises to archive every tweet, does it create a Borgesian paradox in real time or a giant burial chamber in progress?
Because time has this unsilenceable undertone reminding us of our morality, we grasp onto it — onto this intangible abstraction — the way we grasp onto material possessions, commodities, and all the other tangibilia by which we sustain our illusions of permanence in a universe dominated by impermanence and constant flux. From this angle, Gleick revisits the tenet that all paradoxes are time-loops:
" Once we conceive of time as a quantity, we can store it up, apparently. We save it, spend it, accumulate it, and bank it. We do all this quite obsessively nowadays, but the notion is at least four hundred years old. Francis Bacon, 1612: “To choose Time, is to save Time.” The corollary of saving time is wasting it.
[…]
We go back and forth between being time’s master and its victim. Time is ours to use, and then we are at its mercy. I wasted time, and now doth time waste me, says Richard II; For now hath time made me his numbering clock. If you say that an activity wastes time, implying a substance in finite supply, and then you say that it fills time, implying a sort of container, have you contradicted yourself? Are you confused? Are you committing a failure of logic? None of those. On the contrary, you are a clever creature, when it comes to time, and you can keep more than one idea in your head. Language is imperfect; poetry, perfectly imperfect. We can occupy the time and pass the time in the same breath. We can devour time or languish in its slow-chapp’d power.
Still, memory remains. The key to understanding time, Gleick suggests, lies in understanding memory — understanding the dialogue, often dissonant, between the experiencing self and the remembering self. He writes:
" The universe does what it does. We perceive change, perceive motion, and try to make sense of the teeming, blooming confusion. The hard problem, in other words, is consciousness. We’re back where we started, with Wells’s Time Traveller, insisting that the only difference between time and space is that “our consciousness moves along it,” just before Einstein and Minkowski said the same. Physicists have developed a love-hate relationship with the problem of the self. On the one hand it’s none of their business — leave it to the (mere) psychologists. On the other hand, trying to extricate the observer — the measurer, the accumulator of information — from the cool description of nature has turned out to be impossible. Our consciousness is not some magical onlooker; it is a part of the universe it tries to contemplate.
The mind is what we experience most immediately and what does the experiencing. It is subject to the arrow of time. It creates memories as it goes. It models the world and continually compares these models with their predecessors. Whatever consciousness will turn out to be, it’s not a moving flashlight illuminating successive slices of the four-dimensional space-time continuum. It is a dynamical system, occurring in time, evolving in time, able to absorb bits of information from the past and process them, and able as well to create anticipation for the future.
[…]
What is time? Things change, and time is how we keep track.
This act of keeping track, which is largely a matter of telling the present from the past, is what Gleick considers the key question of consciousness and the pillar of our very sense of self:
"How do we construct the self? Can there be memory without consciousness? Obviously not. Or obviously. It depends what you mean by memory. A rat learns to run a maze — does it remember the maze? If memory is the perpetuation of information, then the least conscious of organisms possess it. So do computers, whose memory we measure in bytes. So does a gravestone. But if memory is the action of recollection, the act of remembrance, then it implies an ability to hold in the mind two constructs, one representing the present and another representing the past, and to compare them, one against the other. How did we learn to distinguish memory from experience? When something misfires and we experience the present as if it were a memory, we call that déjà vu. Considering déjà vu — an illusion or pathology — we might marvel at the ordinary business of remembering.
This dizzying tour of science, philosophy, and their interaction with literature is leading me to wonder: When a machine hums, does it hear or notice the hum? Could it be that time is the hum of consciousness?
Perhaps time is so troublesome because it foists upon us our perennial fear of missing out. Time travel, Gleick argues, is such an alluring fantasy precisely because it bridges the infinite possibility of life with the realm of the probable — by traveling in time, we get to live the myriad unlived lives which we are doomed to never experience under the physical laws of this one and only life we’ve been allotted. He captures this with uncompromising precision:
"If we have only the one universe — if the universe is all there is — then time murders possibility. It erases the lives we might have had.
Time travel, then, is a thought experiment performed in the petri dish of existence itself, catalyzing its most elemental and disquieting questions. In a reframing of the central idea of the Butterfly Effect — a term Gleick himself wrested from the esoteric lexicon of meteorology and embedded in the popular imagination in 1987 with his groundbreaking first book, Chaos, which created an aesthetic for the history of science much as Wells created one for time travel literature — he considers the logical loops of changing any one element of history, which ripples across all of being:
" We have to ask these questions, don’t we? Is the world we have the only world possible? Could everything have turned out differently? What if you could not only kill Hitler and see what happens, but you could go back again and again, making improvements, tweaking the timeline, like the weatherman Phil (Bill Murray) in one of the greatest of all time-travel movies, reliving Groundhog Day until finally he gets it right.
Is this the best of all possible worlds? If you had a time machine, would you kill Hitler?
And so we arrive at the answer to the central question:
" Why do we need time travel? All the answers come down to one. To elude death.
Time is a killer. Everyone knows that. Time will bury us. I wasted time, and now doth time waste me. Time makes dust of all things. Time’s winged chariot isn’t taking us anywhere good.
How aptly named, the time beyond death: the Hereafter.
But even death is strewn with the temporal asymmetry of our anxieties, which Montaigne articulated brilliantly half a millennium ago as he contemplated death and the art of living: “To lament that we shall not be alive a hundred years hence, is the same folly as to be sorry we were not alive a hundred years ago.” And yet we do dread death with infinitely greater intensity than we dread, if that’s even the appropriate term, not having lived before our birth. If the arrow of time is one-directional, so is the arrow of time-anxiety. But Gleick subverts Montaigne and delivers a sublime summation of the paradoxical impulse at the heart of our time travel yearnings:
" You lived; you will always have lived. Death does not erase your life. It is mere punctuation. If only time could be seen whole, then you could see the past remaining intact, instead of vanishing in the rearview mirror. There is your immortality. Frozen in amber.
For me the price of denying death in this way is denying life.
Barring denial, our only recourse is to surrender our memory, our consciousness, our very selves to the flow of time. To borrow Sarah Manguso’s piercing observation, “time punishes us by taking everything, but it also saves us — by taking everything.” Gleick writes:
" When the future vanishes into the past so quickly, what remains is a kind of atemporality, a present tense in which temporal order feels as arbitrary as alphabetical order. We say that the present is real—yet it flows through our fingers like quicksilver.
[…]
It might be fair to say that all we perceive is change — that any sense of stasis is a constructed illusion. Every moment alters what came before. We reach across layers of time for the memories of our memories.
“Both in thought and in feeling, even though time be real, to realise the unimportance of time is the gate of wisdom,” Bertrand Russell wrote in 1931 as he made his beautiful case for “a largeness of contemplation” in contemplating the nature of time. “Shard by shard we are released from the tyranny of so-called time,” Patti Smith wrote nearly a century later in her magnificent meditation on time and transformation.
As a child in Bulgaria, never having heard of either Russell or Smith, one aspect of time perplexed me to the point of obsession: In my history textbooks, dates relating to significant events or historical figures of Slavic origin were listed in pairs — each had a “new style” date and an “old style” date, always thirteen days apart. So, for instance, Hristo Botev — the great revolutionary who led Bulgaria’s liberation from a five-century Ottoman slavery — was born on January 6 of 1848 according to the new style and on Christmas Day of 1847 according to the old style.
I would later learn that this was the product of the League of Nations, formed after WWI. Its Committee on Intellectual Cooperation, headed by Henri Bergson — the great French philosopher who famously opposed Einstein in a debate that changed our modern conception of time — was tasked with eradicating the Julian calendar that many countries, including Bulgaria and Russia, still used and replacing it with the Gregorian calendar as the new global standard.
This is my earliest memory of confronting the nature of time as both an abstraction humans could make with a committee and a concrete anchor of existence mooring our births, our deaths, and our entire sense of history. But most perplexing of all was the question of what happened to the people who lived through the transition — what happened to the thirteen very real days between the two fictions of the calendars. If reading history wasn’t time-travelish enough, reading about real people forced to time-travel in their real lives by an international decree was both utterly fascinating and utterly confusing. Did the person actually exist between their old-style date of birth and the new-style one — were they alive or not-yet-born? (Even today, the Wikipedia biographies of Slavic persons from that era list both old-style and new-style dates of birth and death.) The person, of course, most definitely did exist between the day they were born and the day they died, whatever dates posterity — our living present, their unlived future — may impose on those days, now far in the past.
That thirteen-day lacuna between being and non-being was, apparently, the price of globalization. But it was also a suddenly shrill echo of an eternal question: If time bookends our existence, and if it is so easily perturbed by a calendrical convention, is it a mere abstraction?
Time is the two-headed Baskerville hound chasing us as we run for our lives — and from our lives — driven by the twain terrors of tedium and urgency. Toward what, we dare not think. Meanwhile, our information-input timelines are called “feeds.” We feast on time as time feasts on us. Time and information, if they are to be disentwined at all, dictate our lives. Is it any wonder, then, that we would rebel by trying to subjugate them in return, whether by formalizing them with our calendars or by fleeing from them with our time travel fantasies?
How those time travel fantasies originated, what technological and cultural developments fomented this distinctly modern impulse of the collective imagination, and how it illuminates our greatest anxieties is what science historian and writer extraordinaire James Gleick explores in Time Travel: A History (public library) — a grand thought experiment, using physics and philosophy as the active agents, and literature as the catalyst. Embedded in the book is a bibliography for the Babel of time — a most exquisitely annotated compendium of the body of time literature. What emerges is an inquiry, the most elegant since Borges, into why we think about time, why its directionality troubles us so, and what asking these questions at all reveals about the deepest mysteries of human consciousness and about what Gleick so beguilingly calls “the fast-expanding tapestry of interwoven ideas and facts that we call our culture.”
Gleick, who examined the origin of our modern anxiety about time with remarkable prescience nearly two decades ago, traces the invention of the notion of time travel to H.G. Wells’s 1895 masterpiece The Time Machine. Although Wells — like Gleick, like any reputable physicist — knew that time travel was a scientific impossibility, he created an aesthetic of thought which never previously existed and which has since shaped the modern consciousness. Gleick argues that the art this aesthetic produced — an entire canon of time travel literature and film — not only permeated popular culture but even influenced some of the greatest scientific minds of the past century, including Stephen Hawking, who once cleverly hosted a party for time travelers and, when no one showed up, considered the impossibility of time travel proven, and John Archibald Wheeler, who popularized the term “black hole” and coined “wormhole,” both key tropes of time travel literature.
Gleick considers how a scientific impossibility can become such fertile ground for the artistic imagination:
" Why do we need time travel, when we already travel through space so far and fast? For history. For mystery. For nostalgia. For hope. To examine our potential and explore our memories. To counter regret for the life we lived, the only life, one dimension, beginning to end.
Wells’s Time Machine revealed a turning in the road, an alteration in the human relationship with time. New technologies and ideas reinforced one another: the electric telegraph, the steam railroad, the earth science of Lyell and the life science of Darwin, the rise of archeology out of antiquarianism, and the perfection of clocks. When the nineteenth century turned to the twentieth, scientists and philosophers were primed to understand time in a new way. And so were we all. Time travel bloomed in the culture, its loops and twists and paradoxes."
Wells imagined time travel in an era when so much of what we take for granted was either a disorienting novelty or yet to be invented — bicycles, elevators, and balloons were new, and even the earliest visions of anything resembling the internet were half a century away. Gleick considers the direction of Wells’s imagination:
"The object of Wells’s interest, bordering on obsession, was the future — that shadowy, inaccessible place. “So with a kind of madness growing upon me, I flung myself into futurity,” says the Time Traveller. Most people, Wells wrote — “the predominant type, the type of the majority of living people” — never think about the future. Or, if they do, they regard it “as a sort of blank non-existence upon which the advancing present will presently write events.” … The more modern sort of person — “the creative, organizing, or masterful type” — sees the future as our very reason for being: “Things have been, says the legal mind, and so we are here. The creative mind says we are here because things have yet to be.”
Wells wrote his masterpiece shortly before the rise of relativity remodeled our notions of time. There was Einstein, of course. And Kurt Gödel. And Hermann Minkowski, Einstein’s teacher, whose model used four numbers (x, y, z, and t) to denote a “world point” — what we now call spacetime. Gleick writes of his legacy:
"“Mere shadows,” Minkowski said. That was not mere poetry. He meant it almost literally. Our perceived reality is a projection, like the shadows projected by the fire in Plato’s cave. If the world — the absolute world — is a four-dimensional continuum, then all that we perceive at any instant is a slice of the whole. Our sense of time: an illusion. Nothing passes; nothing changes. The universe — the real universe, hidden from our blinkered sight — comprises the totality of these timeless, eternal world lines."
But if we were able to conceive of this timeless totality — to integrate it into our conscious experience — the fantasy of time travel wouldn’t scintillate us so. A centerpiece of our temporal dissonance is one particular phenomenon of consciousness, a very palpable human experience: memory. “Perhaps memory is the time traveler’s subject,” Gleick observes. With an eye to Virginia Woolf’s memorable meditation on memory in Orlando, that supreme masterwork of time travel, he writes:
"What is memory, for a time traveler? A conundrum. We say that memory “takes us back.” Virginia Woolf called memory a seamstress “and a capricious one at that.” … “I can’t remember things before they happen,” says Alice, and the Queen retorts, “It’s a poor sort of memory that only works backwards.” Memory both is and is not our past. It is not recorded, as we sometimes imagine; it is made, and continually remade. If the time traveler meets herself, who remembers what, and when?
The question of memory, of course, is inseparable from the question of identity, for if we live in “permanent present tense,” we are incapable of stringing together the narrative out of which our sense of self arises. This continuity of selfhood, after all, is what makes you and your childhood self the “same” person despite a lifetime of physical and psychological change. Time travel presents some serious paradoxes for memory and therefore for the self. “A person’s identity,” Amin Maalouf wrote of the genes of the soul, “is like a pattern drawn on a tightly stretched parchment. Touch just one part of it, just one allegiance, and the whole person will react, the whole drum will sound.” If we could travel back to our own past and alter even a tiny speck of the pattern, we’d be changing the entire drum — our identity would have a wholly different sound. Gleick writes:
"What is the self? A question for the twentieth century to ponder, from Freud to Hofstadter and Dennett with detours through Lacan, and time travel provides some of the more profound variations on the theme. We have split personalities and alter egos galore. We have learned to doubt whether we are our younger selves, whether we will be the same person when we next look. The literature of time travel … begins to offer a way into questions that might otherwise belong to philosophers. It looks at them viscerally and naïvely — as it were, nakedly.
And so we arrive, at page 99 and no sooner, at the problem of free will. Gleick writes:
" Free will cannot be easily dismissed, because we experience it directly. We make choices. No philosopher has yet sat down in a restaurant and told the waiter, “Just bring me whatever the universe has preordained.” Then again, Einstein said that he could “will” himself to light his pipe without feeling particularly free. He liked to quote Schopenhauer… Man can do what he will, but he cannot will what he wills.
The free will problem was a sleeping giant and, without particularly meaning to, Einstein and Minkowski had prodded it awake. How literally were their followers to take the space-time continuum — the “block universe,” fixed for eternity, with our blinkered three-dimensional consciousnesses moving through it?
A century later, the question has hardly budged. And yet we live our lives with such urgency and pointedness of intent — perhaps precisely because we are unwilling to relinquish the illusion of free will. Gleick observes:
"Everywhere we look, people are pressing elevator buttons, turning doorknobs, hailing taxicabs, lifting sustenance to their lips, and begging their lovers’ favor. We act as though the future is, if not in our control, not yet settled… We would suffer illusions of free will, because, by happenstance, we tend to know less about the future than about the past.
Happenstance? Memory, self, free will — this Venn diagram of consciousness is indeed encircled by the lines we draw, often artificially, between causality and chance. (“No one’s fated or doomed to love anyone… The accidents happen,” wrote Adrienne Rich.) Gleick writes:
" All the paradoxes are time loops. They all force us to think about causality. Can an effect precede its cause? Of course not. Obviously. By definition.
[…]
But we’re not very good at understanding causes. The first person on record as trying to analyze cause and effect by power of ratiocination was Aristotle, who created layers of complexity that have caused confusion ever after. He distinguished four distinct types of causes, which can be named (making allowances for the impossibility of transmillennial translation) the efficient, the formal, the material, and the final. Some of these are hard for us to recognize as causes. The efficient cause of a sculpture is the sculptor, but the material cause is the marble. Both are needed before the sculpture can exist. The final cause is the purpose for which it is made — its beauty, let’s say… We do well to remember that nothing, when we look closely, has a single unambiguous incontrovertible cause.
Gleick reality-checks the logicians’ causal models of reality:
" If X, then Y means one thing in logic. In the physical world, it means something trickier and always (we should know by now) subject to doubt. In logic, it is rigid. In physics, there is slippage. Chance has a part to play. Accidents can happen. Uncertainty is a principle. The world is more complex than any model.
[…]
The physical laws are a construct, a convenience. They are not coextensive with the universe.
Mistaking the model for what Virginia Woolf called “the thing itself” seems to be a perennial problem of science, and one particularly integral to the perplexity of time:
" William Faulkner said, “The aim of every artist is to arrest motion, which is life, by artificial means and hold it fixed.” Scientists do that, too, and sometimes they forget they are using artificial means.
[…]
You can say the equations of physics make no distinction between past and future, between forward and backward in time. But if you do, you are averting your gaze from the phenomena dearest to our hearts. You leave for another day or another department the puzzles of evolution, memory, consciousness, life itself. Elementary processes may be reversible; complex processes are not. In the world of things, time’s arrow is always flying.
With an eye to Borges’s ideas about time, Gleick returns to the puzzlement of memory, equally not coextensive with the physics of time:
"We create memories or our memories create themselves. Consulting a memory converts it into a memory of a memory. The memories of memories, the thoughts of thoughts, blend into one another until we cannot tease them apart. Memory is recursive and self-referential. Mirrors. Mazes.
The formation of memory as a function of consciousness invites the chief religious opposition to science — a theological avoidance of the free will problem, the intellectually fragile contradictions of which Gleick captures elegantly in discussing the ideas in Isaac Asimov’s novel The End of Eternity:
" Time is a feature of creation, and the creator remains apart from it, transcendent over it. Does that mean that all our mortal time and history is, for God, a mere instant — complete and entire? For God outside of time, God in eternity, time does not pass; events do not occur step by step; cause and effect are meaningless. He is not one-thing-after-another, but all-at-once. His “now” encompasses all time. Creation is a tapestry, or an Einsteinian block universe. Either way, one might believe that God sees it entire. For Him, the story does not have a beginning, middle, and end.
But if you believe in an interventionist god, what does that leave for him to do? A changeless being is hard for us mortals to imagine. Does he act? Does he even think? Without sequential time, thought — a process — is hard to imagine. Consciousness requires time, it seems. It requires being in time. When we think, we seem to think consecutively, one thought leading to another, in timely fashion, forming memories all the while. A god outside of time would not have memories. Omniscience doesn’t require them.
But whatever pitfalls, paradoxes, and perplexities might bedevil our individual memory, they are rendered into even sharper relief in our collective memory — nowhere more so than in the curious human obsession with time capsules, the grandest of which is the Golden Record that sailed into space aboard the Voyager in 1977, a civilizational labor of love dreamt up and rendered real by Carl Sagan and Ann Druyan, and also the record of their own love story.
Gleick considers what this strange millennia-old practice, this “prosthetic memory,” reveals about human nature:
" When people make time capsules, they disregard a vital fact of human history. Over the millennia — slowly at first and then with gathering speed — we have evolved a collective methodology for saving information about our lives and times and transmitting that information into the future. We call it, for short, culture.
First came songs, clay pots, drawings on cave walls. Then tablets and scrolls, paintings and books. Knots in alpaca threads, recording Incan calendar data and tax receipts. These are external memory, extensions of our biological selves. Mental prostheses. Then came repositories for the preservation of these items: libraries, monasteries, museums; also theater troupes and orchestras. They may consider their mission to be entertainment or spiritual practice or the celebration of beauty, but meanwhile they transmit our symbolic memory across the generations. We can recognize these institutions of culture as distributed storage and retrieval systems. The machinery is unreliable — disorganized and discontinuous, prone to failures and omissions. They use code. They require deciphering. Then again, whether made of stone, paper, or silicon, the technology of culture has a durability that the biological originals can only dream of. This is how we tell our descendants who we were. By contrast, the recent smattering of time capsules is an oddball sideshow.
Building on the ideas he examined in his indispensable biography of information, Gleick adds:
" As for knowledge itself, that is our stock in trade. When the Library of Alexandria burned, it was one of a kind. Now there are hundreds of thousands, and they are crammed to overflowing. We have developed a species memory. We leave our marks everywhere.
[…]
When people fill time capsules they are trying to stop the clock — take stock, freeze the now, arrest the incessant head-over-heels stampede into the future. The past appears fixed, but memory, the fact of it, or the process, is always in motion. That applies to our prosthetic global memory as well as the biological version. When the Library of Congress promises to archive every tweet, does it create a Borgesian paradox in real time or a giant burial chamber in progress?
Because time has this unsilenceable undertone reminding us of our mortality, we grasp onto it — onto this intangible abstraction — the way we grasp onto material possessions, commodities, and all the other tangibilia by which we sustain our illusions of permanence in a universe dominated by impermanence and constant flux. From this angle, Gleick revisits the tenet that all paradoxes are time-loops:
" Once we conceive of time as a quantity, we can store it up, apparently. We save it, spend it, accumulate it, and bank it. We do all this quite obsessively nowadays, but the notion is at least four hundred years old. Francis Bacon, 1612: “To choose Time, is to save Time.” The corollary of saving time is wasting it.
[…]
We go back and forth between being time’s master and its victim. Time is ours to use, and then we are at its mercy. I wasted time, and now doth time waste me, says Richard II; For now hath time made me his numbering clock. If you say that an activity wastes time, implying a substance in finite supply, and then you say that it fills time, implying a sort of container, have you contradicted yourself? Are you confused? Are you committing a failure of logic? None of those. On the contrary, you are a clever creature, when it comes to time, and you can keep more than one idea in your head. Language is imperfect; poetry, perfectly imperfect. We can occupy the time and pass the time in the same breath. We can devour time or languish in its slow-chapp’d power.
Still, memory remains. The key to understanding time, Gleick suggests, lies in understanding memory — understanding the dialogue, often dissonant, between the experiencing self and the remembering self. He writes:
" The universe does what it does. We perceive change, perceive motion, and try to make sense of the teeming, blooming confusion. The hard problem, in other words, is consciousness. We’re back where we started, with Wells’s Time Traveller, insisting that the only difference between time and space is that “our consciousness moves along it,” just before Einstein and Minkowski said the same. Physicists have developed a love-hate relationship with the problem of the self. On the one hand it’s none of their business — leave it to the (mere) psychologists. On the other hand, trying to extricate the observer — the measurer, the accumulator of information — from the cool description of nature has turned out to be impossible. Our consciousness is not some magical onlooker; it is a part of the universe it tries to contemplate.
The mind is what we experience most immediately and what does the experiencing. It is subject to the arrow of time. It creates memories as it goes. It models the world and continually compares these models with their predecessors. Whatever consciousness will turn out to be, it’s not a moving flashlight illuminating successive slices of the four-dimensional space-time continuum. It is a dynamical system, occurring in time, evolving in time, able to absorb bits of information from the past and process them, and able as well to create anticipation for the future.
[…]
What is time? Things change, and time is how we keep track.
This act of keeping track, which is largely a matter of telling the present from the past, is what Gleick considers the key question of consciousness and the pillar of our very sense of self:
"How do we construct the self? Can there be memory without consciousness? Obviously not. Or obviously. It depends what you mean by memory. A rat learns to run a maze — does it remember the maze? If memory is the perpetuation of information, then the least conscious of organisms possess it. So do computers, whose memory we measure in bytes. So does a gravestone. But if memory is the action of recollection, the act of remembrance, then it implies an ability to hold in the mind two constructs, one representing the present and another representing the past, and to compare them, one against the other. How did we learn to distinguish memory from experience? When something misfires and we experience the present as if it were a memory, we call that déjà vu. Considering déjà vu — an illusion or pathology — we might marvel at the ordinary business of remembering.
This dizzying tour of science, philosophy, and their interaction with literature is leading me to wonder: When a machine hums, does it hear or notice the hum? Could it be that time is the hum of consciousness?
Perhaps time is so troublesome because it foists upon us our perennial fear of missing out. Time travel, Gleick argues, is such an alluring fantasy precisely because it bridges the infinite possibility of life with the realm of the probable — by traveling in time, we get to live the myriad unlived lives which we are doomed to never experience under the physical laws of this one and only life we’ve been allotted. He captures this with uncompromising precision:
"If we have only the one universe — if the universe is all there is — then time murders possibility. It erases the lives we might have had.
Time travel, then, is a thought experiment performed in the petri dish of existence itself, catalyzing its most elemental and disquieting questions. In a reframing of the central idea of the Butterfly Effect — a term Gleick himself wrested from the esoteric lexicon of meteorology and embedded in the popular imagination in 1987 with his groundbreaking first book, Chaos, which created an aesthetic for the history of science much like Wells created an aesthetic for time travel literature — he considers the logical loops of changing any one element of history, which ripples across all of being:
" We have to ask these questions, don’t we? Is the world we have the only world possible? Could everything have turned out differently? What if you could not only kill Hitler and see what happens, but you could go back again and again, making improvements, tweaking the timeline, like the weatherman Phil (Bill Murray) in one of the greatest of all time-travel movies, reliving Groundhog Day until finally he gets it right.
Is this the best of all possible worlds? If you had a time machine, would you kill Hitler?
And so we arrive at the answer to the central question:
" Why do we need time travel? All the answers come down to one. To elude death.
Time is a killer. Everyone knows that. Time will bury us. I wasted time, and now doth time waste me. Time makes dust of all things. Time’s winged chariot isn’t taking us anywhere good.
How aptly named, the time beyond death: the Hereafter.
But even death is strewn with the temporal asymmetry of our anxieties, which Montaigne articulated brilliantly half a millennium ago as he contemplated death and the art of living: “To lament that we shall not be alive a hundred years hence, is the same folly as to be sorry we were not alive a hundred years ago.” And yet we do dread death with infinitely greater intensity than we dread, if that’s even the appropriate term, not having lived before our birth. If the arrow of time is one-directional, so is the arrow of time-anxiety. But Gleick subverts Montaigne and delivers a sublime summation of the paradoxical impulse at the heart of our time travel yearnings:
" You lived; you will always have lived. Death does not erase your life. It is mere punctuation. If only time could be seen whole, then you could see the past remaining intact, instead of vanishing in the rearview mirror. There is your immortality. Frozen in amber.
For me the price of denying death in this way is denying life.
Barring denial, our only recourse is to surrender our memory, our consciousness, our very selves to the flow of time. To borrow Sarah Manguso’s piercing observation, “time punishes us by taking everything, but it also saves us — by taking everything.” Gleick writes:
" When the future vanishes into the past so quickly, what remains is a kind of atemporality, a present tense in which temporal order feels as arbitrary as alphabetical order. We say that the present is real—yet it flows through our fingers like quicksilver.
[…]
It might be fair to say that all we perceive is change — that any sense of stasis is a constructed illusion. Every moment alters what came before. We reach across layers of time for the memories of our memories.
Saturday, July 16, 2016
Man Missing Most Of His Brain Challenges Everything We Thought We Knew About Consciousness
via IFLScience
Back in 2007, scientists reported that a French man in his mid-40s had walked into a clinic complaining of a pain in his leg. As a child, he’d had this same problem as a result of the ventricles in his brain filling with cerebrospinal fluid, so the doctors decided to scan his brain to see if this was again causing his limb-related lamentations. To their astonishment, they found that his ventricles had become so swollen with fluid that they’d replaced virtually his entire brain, leaving just a thin cortical layer of neurons.
Yet miraculously, the man was not only fully conscious, but lived a rich and unhindered life, working as a civil servant and living with his wife and two kids, blissfully unaware of the gaping hole in his brain. His ability to function without so many of the key brain regions previously considered vital for consciousness raises some major questions about existing theories regarding how the brain works and the mechanisms underlying our awareness.
For example, neuroscientists have often asserted that a brain region called the thalamus, which relays sensory signals to the cerebral cortex, is indispensable for consciousness. This is because research has indicated that damage to the thalamus often causes people to fall into a coma, while one team of scientists was even able to manually “switch off” an epileptic patient’s consciousness by electrically stimulating this brain region.
Similarly, researchers have shown that it is possible to cause people to lose consciousness by using electrodes to manipulate the activity of a brain region called the claustrum, which receives input from a wide variety of brain areas and communicates extensively with the thalamus.
Wednesday, April 6, 2016
Psychic Abilities and the Illusion of Separation
via IONS
One of the key missions at IONS is to bring awareness of, and engagement with, the noetic sciences to younger generations by supporting emerging scholars and explorers. As part of this effort, Chief Scientist Dean Radin recently gave a talk at the California Institute of Integral Studies (CIIS), an innovative university in San Francisco that offers accredited programs in counseling psychology, clinical psychology, consciousness, and transformation.
Below is an outline of Dean's presentation, entitled Psychic Abilities and the Illusion of Separation.
0:00 Illusions and our perception of reality
3:52 Classical and quantum reality
5:48 Hints about reality
8:50 Going beyond the usual senses
10:17 Telepathy experiments using the ganzfeld
22:52 Replication by skeptical scientists
25:26 Precognition studies
41:25 Strong evidence of precognitive psi
43:20 Controversy and skepticism
44:16 Consciousness as fundamental?
Monday, November 9, 2015
The Science of Happiness: Why complaining is literally killing you
via curious apes
Sometimes in life, all the experience and knowledge simmering around in that ol’ consciousness of ours combines itself in a way that suddenly causes the cerebral clockwork to click into place, and in this fluid flow of thought we find an epiphany rising to the surface.
One such point for me came in my junior year at University. It changed the way I viewed the world forever as it catapulted me out of the last of my angsty, melancholic youth and onto a path of ever-increasing bliss. Sounds like I’m verging on feeding you some new-agey, mumbo-jumbo, doesn’t it? Well, bear with me, because I assure you the point here is to add some logical evidence to the ol’ cliches, to give you what I would consider my Science of Happiness.
At the time of this personal discovery, I was pursuing a double major in Computer Science and Psychology. Aside from these declared interests, I also had an affinity for (Eastern) Philosophy and Neuroscience. This led to a semester course load comprising two 300-level psychology courses, one 300-level philosophy course, and a graduate-level artificial intelligence course for both biology and computer science majors. This amalgamation of studies quickly tore my brain into a dozen directions, and when I put the pieces back together, I found myself resolute, with rational reasons for optimism and for removing from my life the people who liked to complain.
1. “Synapses that fire together wire together.”
This was the first phrase my AI professor told the classroom, and to this day it is still one of the most profound bits of logic I hold onto in order to dictate the decisions of my life. The principle is simple: throughout your brain, neurons are separated by tiny gaps called synaptic clefts. Whenever you have a thought, one neuron shoots a chemical across the cleft to another, building a bridge over which an electric signal can cross, carrying along with its charge the relevant information you’re thinking about. It’s very similar to how nerves carry electrical signals from the sensation in your toe all the way up to your brain, where it’s actually “felt”.
Here’s the kicker: Every time this electrical charge is triggered, the synapses grow closer together in order to decrease the distance the electrical charge has to cross. This is a microcosmic example of evolution, of adaptation. The brain is rewiring its own circuitry, physically changing itself, to make it easier and more likely that the proper synapses will share the chemical link and thus spark together–in essence, making it easier for the thought to trigger. Therefore, your first mystical scientific evidence: your thoughts reshape your brain, and thus are changing a physical construct of reality. Let that sink in for a moment before you continue, because that’s a seriously profound logic-bomb right there.
Your thoughts reshape your brain, and thus are changing a physical construct of reality.
Okay, pull yourself together, cause we’re not done yet.
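The “fire together, wire together” idea has a name in machine learning: Hebbian learning. Here is a minimal, purely illustrative sketch in Python (my own toy model, not anything from the professor’s course, and a drastic simplification of real synapses), where repeated co-activation strengthens a connection weight:

```python
# Minimal Hebbian update: a connection strengthens each time the two
# ends fire together. (Toy model only -- real synapses are far messier.)

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection when both neurons fire together."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # grow toward a ceiling of 1.0
    return weight

w = 0.1  # a weak initial connection
for _ in range(20):  # the same thought, repeated 20 times
    w = hebbian_update(w, True, True)

print(round(w, 3))  # repetition has pushed the weight close to its ceiling
```

Run the loop and the weight climbs from 0.1 toward 1.0, while a connection whose neurons never co-fire stays exactly where it started, which is the whole point of the metaphor.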
2. Shortest Path Wins the Race.
Beyond the absolutely incredible fact that your brain is always doing this, constantly shifting and morphing with every thought, even more exciting is the fact that the synapses you’ve most strongly bonded together (by thinking about them more frequently) come to represent your default personality: your intelligence, skills, aptitudes, and most easily accessible thoughts (which are more-or-less the source of your conversation skills).
Let’s dig deeper into the logic behind that. Consider two pairs of people throwing a ball back and forth. One pair stands ten feet apart, the other at a distance of 100 feet. One partner from each team throws their ball to their respective partner at the exact same moment with the exact same speed. In this analogy, the first team to catch the ball gets to dictate your decision and mental state.
So which team will get the ball first? The basic physics of distance, time, and velocity tells us that it will always be the pair standing 10 feet apart. Well, this is basically how your thoughts work. Through repetition of thought, you’ve brought the pair of synapses that represent your proclivities closer and closer together, and when the moment arises for you to form a thought (and thus throw our metaphorical ball of electric energy), the thought that wins is the one that has less distance to travel, the one that will create a bridge between synapses fastest.
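The race in the analogy is nothing more than time = distance / speed. A quick back-of-the-envelope check (the throwing speed here is a hypothetical number of my own, not the author’s):

```python
# The metaphorical ball race: t = distance / speed.
# With equal speeds, the shorter gap always wins.

def travel_time(distance_ft, speed_ft_per_s):
    return distance_ft / speed_ft_per_s

speed = 50.0  # both pairs throw equally hard (assumed value)

close_pair = travel_time(10, speed)   # 0.2 seconds
far_pair = travel_time(100, speed)    # 2.0 seconds

print(close_pair < far_pair)  # True: the well-worn path triggers first
```

Because the speeds cancel out of the comparison, the shorter distance wins at any throwing speed, which is exactly the claim being made about well-rehearsed thoughts.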
3. Acceptance vs Regret, Drift vs Desire, Love Vs Fear.
In the time of my scholastic renaissance, this is where Eastern Philosophy came in and handed me a sort of Occam’s Razor of simplicity that I could use to strengthen my forming ideology.
It was simple: every time a moment came my way and brought with it a chance for reactive thought, my two choices were clear, regardless of the flavor you put on them: Love or Fear; Acceptance or Regret; Drift or Desire; Optimism or Pessimism.
And now, my friends, we have our two pairs playing catch.
Naturally, for my own well-being, I realized that all I wanted to do was move the pair of lovers closer together so they would always beat the fearful, pessimistic pair. And so I began to implement a practice into my life of loving everything that came my way, accepting it while relinquishing the need for control. The Buddhists say that the universe is suffering, and I believe this is because the universe is chaos, and thus by its very nature out of our control. When we try to force desires, we are bound to find innumerable occasions where the universe will not comply. And so I decided to stop desiring to the point of attachment. I started to practice the acceptance that Buddhists speak of, to Drift in the Tao, to accept the natural flow with an optimistic love, to say to every moment that came my way, good or bad, “thank you for the experience and the lesson, and now bring on the next moment so I can give it the same love.” Over and over I did this, moving those synapses closer and closer together, to the point where any synapses in my brain associated with sadness, regret, pessimism, fear, desire, melancholy, depression, etc. had a smaller and smaller chance of triggering before the synapses of love gave me my reaction, my thoughts, my personality. And so my default state became one of optimism and appreciation, and the illusory burdens I attached to this existence lessened.
Now, as I pointed out, nature appreciates chaos, and our brain is no different. And so it’s important that I point out that this obviously is not a foolproof practice that will completely eradicate negativity from your consciousness; sometimes emotion weighs too heavy, and sometimes the pair that catches the chemical charge will be the negative one; but, like any muscle, if you exercise those loving synapses enough, you will find yourself in possession of a new innate strength that will make the world shine more beautifully far more frequently. You will also find yourself far happier because of better health–which I’ll get to in just a moment, but hold on, because we’ve got one more point to discuss beforehand.
4. Mirror-Neurons.
So if your mind hadn’t already exploded when you learned you could alter reality with your thoughts, you may want to get ready for it. Because guess what? It’s not just your thoughts that can alter your brain and shift those synapses; the thoughts of those around you can do it as well.
If there’s any ability that truly separates us from our primate ancestors, it’s that of imagination. It’s the root of all art and architecture, of the (fictional) stories that formed religions that now control the lives of billions—even to the point of war over which fairytale is the “right one.”
That human failing aside, imagination lets us live in the past and in the future, and by escaping the present moment we can use our memories of the past to predict what will happen in the future; i.e., I know from past experience that fire burns skin, so I know inside my mind’s eye that if I stick my hand into a fire I will lose my flesh. This is so instinctual we don’t even recognize it’s constantly happening with every symbol we perceive in our day-to-day moments. But it is this ability that allows us to navigate the complexity of our society. Even more exciting is the fact that this skill also works with emotions, not just situations.
The premise, again, is quite simple: When we see someone experiencing an emotion (be it anger, sadness, happiness, etc.), our brain “tries out” that same emotion to imagine what the other person is going through. And it does this by attempting to fire the same synapses in your own brain so that you can relate to the emotion you’re observing. This is basically empathy. It is how we get the mob mentality, where a calm person can suddenly find themselves picking up a pitchfork against a common enemy once they’re influenced by dozens of angry minds. It is our shared bliss at music festivals, or our solidarity in sadness during tragedies.
But it is also your night at the bar with your friends who love love love to constantly bitch, whether it’s about their job, the man, the government, or about their other so-called friend’s shortcomings, or whatever little thing they can pick apart in order to lift themselves up and give themselves some holier-than-thou sense of validation when you nod your head in acquiescence, agreeing like a robot afraid of free thought: “Totally, man. It’s bullshit.”
But it’s not bullshit. It’s life, it’s chaos, and as you continually surround yourself with this attitude, you are continually trying out this attitude by firing the synapses in your brain. And as I explained above, every time you fire these synapses, you’re reshaping your brain. This is why it is so important to spend time with people who lift you up, because your friends are moving those fearful, cynical, pessimistic synapses closer together, making your default, short-path-personality as jaded and bitter as your peers. Want to be happy? Surround yourself with happy people who rewire your brain towards love, not towards fear of being invalidated. [[EDIT 11/8/15 : I’m NOT saying don’t be there for friends who are having a hard time and need an ear or who need to work through a difficult situation. Nor am I saying you can’t be critical about the failings and injustices in the world. Positive change usually requires critical thought.]]
5. Stress will kill you.
You see, the thing about all this negativity, of regretting, of attachment to desires, of pointless complaining about impermanent things that will always continue to pass in an existence where time moves forward—the thing is: it all causes stress. When your brain is firing off these synapses of anger, you’re weakening your immune system; you’re raising your blood pressure, increasing your risk of heart disease, obesity, and diabetes, and a plethora of other negative ailments–as Psychology Today points out below.
"The stress hormone, cortisol, is public health enemy number one. Scientists have known for years that elevated cortisol levels interfere with learning and memory, lower immune function and bone density, and increase weight gain, blood pressure, cholesterol, heart disease… The list goes on and on. Chronic stress and elevated cortisol levels also increase risk for depression, mental illness, and lower life expectancy. This week, two separate studies were published in Science linking elevated cortisol levels as a potential trigger for mental illness and decreased resilience—especially in adolescence. Cortisol is released in response to fear or stress by the adrenal glands as part of the fight-or-flight mechanism."
The universe is chaotic, from unpreventable superstorms of wind and rain, to unpredictable car accidents or to the capricious whims of our peers whose personal truths even have the ability to emotionally damage or physically hurt others. And every moment holds the potential to bring you any one of these things, any shade along the gradient of spirit-soaring bliss and soul-crushing grief.
But regardless of what it brings your way, your choice is simple: Love or Fear. And yes, I understand it’s hard to find happiness on those nights when you feel like you’re all alone in the world, when a loved one passes, when you fail that test or get fired from that job; but when these moments come, you do not have to live in regret of them, you don’t have to give them constant negative attention and allow them to reshape your brain to the point that you become a bitter, jaded, cynical old curmudgeon who no longer notices that the very fact of being alive means you get to play blissfully in this cosmic playground, where you have the godlike power of choice.
What you can do is say; “Yes, this sucks. But what’s the lesson? What can I take away from this to make me a better person? How can I take strength from this and use it to bring me closer to happiness in my next moment?” You see, a failed relationship or a bad day doesn’t have to be a pinion to your wings, it can be an updraft that showcases to you what things you like and don’t like, it can show you the red flags so that you can avoid them. If there was a personality your ex-partner had that drove you insane, then you now have the gift of knowing you don’t want to waste your time with another partner who acts the same way.
If you are mindful to the lessons of the failures, there is no reason that you can’t make the default of every day better than the one before it. Do something new everyday, learn its lesson, choose love over fear, and make every day better than the last. The more you do this, the more you will see and appreciate the beauty of this existence, and the happier you’ll be.
Sometimes in life, all the experience and knowledge simmering around in that ol’ consciousness of ours combines itself in a way that suddenly causes the cerebral clockwork to click into place, and in this fluid flow of thought we find an epiphany rising to the surface.
One such point for me came in my junior year at University. It changed the way I viewed the world forever as it catapulted me out of the last of my angsty, melancholic youth and onto a path of ever-increasing bliss. Sounds like I’m verging on feeding you some new-agey, mumbo-jumbo, doesn’t it? Well, bear with me, because I assure you the point here is to add some logical evidence to the ol’ cliches, to give you what I would consider my Science of Happiness.
At the time of this personal discovery, I was pursuing a double-major in Computer Science and Psychology. Aside from these declared interest, I also had an affinity for (Eastern) Philosophy and Neuroscience. This led to semester course load comprising of two 300-level psychology courses, one 300-level philosophy course, and a graduate-level artificial intelligence course for both biology and computer science majors. This amalgamation of studies quickly tore my brain into a dozen directions, and when I put the pieces back together, I found myself resolute with rational reasons for optimism and for removing from my life the people who liked to complain.
1. “Synapses that fire together wire together.”
This was the first phrase my AI professor told the classroom, and to this day it is still one of the most profound bits of logic I hold onto in order to dictate the decisions of my life. The principle is simple: Throughout your brain there is a collection of synapses separated by empty space called the synaptic cleft. Whenever you have a thought, one synapse shoots a chemical across the cleft to another synapse, thus building a bridge over which an electric signal can cross, carrying along its charge the relevant information you’re thinking about. It’s very similar to how nerves carry electric from the sensation in your toe all the way up to your brain where it’s actually “felt”.
Here’s the kicker: Every time this electrical charge is triggered, the synapses grow closer together in order to decrease the distance the electrical charge has to cross. This is a microcosmic example of evolution, of adaptation. The brain is rewiring its own circuitry, physically changing itself, to make it easier and more likely that the proper synapses will share the chemical link and thus spark together–in essence, making it easier for the thought to trigger. Therefore, your first mystical scientific evidence: your thoughts reshape your brain, and thus are changing a physical construct of reality. Let that sink in for a moment before you continue, because that’s a seriously profound logic-bomb right there.
Your thoughts reshape your brain, and thus are changing a physical construct of reality.
Okay, pull yourself together, cause we’re not done yet.
2. Shortest Path Wins the Race.
Beyond the absolutely incredible fact that your brain is always doing this, consistently shifting and morphing with every thought, even more exciting is the fact that the synapses you’ve most strongly bonded together (by thinking about more frequently) come to represent your default personality: your intelligence, skills, aptitudes, and most easily accessible thoughts(which are more-or-less the source of your conversation skills).
Let’s dig deeper into the logic behind that. Consider you have two pairs of people throwing a ball back and forth. One pair stands ten feet apart, the other at a distance of 100 feet. One partner from each team throws their ball to their respective partners at the exact same moment with the exact same speed. The first team that catches the ball gets to dictate your personal decision and mental state of mind.
So which pair will get the ball first? The basic physics of distance, time, and velocity tells us it will always be the pair standing ten feet apart. Well, this is basically how your thoughts work. Through repetition of thought, you've brought the pair of synapses that represent your proclivities closer and closer together, and when the moment arises for you to form a thought (and thus throw our metaphorical ball of electrical energy), the thought that wins is the one with less distance to travel, the one that builds a bridge between synapses fastest.
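The race above is just distance divided by speed. A minimal sketch of the metaphor (the ten- and hundred-foot distances are from the essay; the speed is an arbitrary number of my own):

```python
# Which pair of "synapses" wins? Travel time = distance / speed.
# Both balls are thrown at the same moment with the same speed,
# so the shorter path always completes the catch first.

def travel_time(distance_ft, speed_ft_per_s):
    return distance_ft / speed_ft_per_s

speed = 50.0  # same throw speed for both pairs (arbitrary units)
love_pair = travel_time(10, speed)   # the well-practiced thought
fear_pair = travel_time(100, speed)  # the rarely-used thought

print(love_pair < fear_pair)  # True: the closer pair always wins the race
```

Nothing about the speed matters here; as long as both signals move at the same rate, the shorter path wins every time, which is the whole argument for practicing the thoughts you want to become your defaults.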
3. Acceptance vs. Regret, Drift vs. Desire, Love vs. Fear.
In the time of my scholastic renaissance, this is where Eastern Philosophy came in and handed me a sort of Occam’s Razor of simplicity that I could use to strengthen my forming ideology.
It was simple: every time a moment came my way and brought with it a chance for reactive thought, I had two choices, regardless of the flavor you put on them: Love or Fear; Acceptance or Regret; Drift or Desire; Optimism or Pessimism.
And now, my friends, we have our two pairs playing catch.
Naturally, for my own well-being, I realized that all I wanted to do was move the pair of lovers closer together so they would always beat the fearful, pessimistic pair. And so I began to implement a practice of loving everything that came my way, accepting it while relinquishing the need for control. The Buddhists say that existence is suffering, and I believe this is because the universe is chaos, and thus by its very nature out of our control. When we try to force our desires onto it, we are bound to find innumerable occasions where the universe will not comply. And so I decided to stop desiring to the point of attachment.

I started to practice the acceptance the Buddhists speak of, to drift in the Tao, to accept the natural flow with an optimistic love, to say to every moment that came my way, good or bad, "thank you for the experience and the lesson, and now bring on the next moment so I can give it the same love." Over and over I did this, moving those synapses closer and closer together, to the point where any synapses associated with sadness, regret, pessimism, fear, desire, melancholy, or depression had a smaller and smaller chance of triggering before the synapses of love gave me my reaction, my thoughts, my personality. And so my default state became one of optimism and appreciation, and the illusory burdens I attached to this existence lessened.
Now, as I pointed out, nature appreciates chaos, and our brain is no different. So I should be clear: this is obviously not a foolproof practice that will completely eradicate negativity from your consciousness. Sometimes emotion weighs too heavily, and sometimes the pair that catches the chemical charge will be the negative one. But, like any muscle, if you exercise those loving synapses enough, you will find yourself in possession of a new innate strength that makes the world shine more beautifully, far more frequently. You will also find yourself far happier because of better health, which I'll get to in just a moment; but hold on, because we've got one more point to discuss first.
4. Mirror Neurons.
So if your mind hadn't already exploded when you learned you could alter reality with your thoughts, get ready. Because guess what? It's not just your own thoughts that can alter your brain and shift those synapses; the thoughts of those around you can do it as well.
If there's any ability that truly separates us from our primate relatives, it's imagination. It's the root of all art and architecture, and of the (fictional) stories that formed the religions that now control the lives of billions, even to the point of war over which fairytale is the "right one."
That human failing aside, imagination lets us live in the past and in the future, and by escaping the present moment we can use our memories of the past to predict what will happen next; for example, I know from past experience that fire burns skin, so I know in my mind's eye that if I stick my hand into a fire I will lose my flesh. This is so instinctual we don't even recognize it's constantly happening with every symbol we perceive in our day-to-day moments. But it is this ability that allows us to navigate the complexity of our society. Even more exciting is the fact that this skill also works with emotions, not just situations.
The premise, again, is quite simple: when we see someone experiencing an emotion (be it anger, sadness, happiness, etc.), our brain "tries out" that same emotion to imagine what the other person is going through. It does this by attempting to fire the same synapses in your own brain, so that you can relate to the emotion you're observing. This is basically empathy. It is how we get mob mentality, where a calm person can suddenly find themselves picking up a pitchfork against a common enemy once they're influenced by dozens of angry minds. It is our shared bliss at music festivals, and our solidarity in sadness during tragedies.
But it is also your night at the bar with your friends who love, love, love to constantly bitch: about their job, the man, the government, their other so-called friends' shortcomings, or whatever little thing they can pick apart to lift themselves up and give themselves some holier-than-thou sense of validation while you nod your head in acquiescence, agreeing like a robot afraid of free thought: "Totally, man. It's bullshit."
But it's not bullshit. It's life, it's chaos, and as you continually surround yourself with this attitude, you are continually trying it out by firing those synapses in your brain. And as I explained above, every time you fire these synapses, you're reshaping your brain. This is why it is so important to spend time with people who lift you up: friends like these are moving those fearful, cynical, pessimistic synapses closer together, making your default, short-path personality as jaded and bitter as your peers'. Want to be happy? Surround yourself with happy people who rewire your brain towards love, not towards fear of being invalidated. [[EDIT 11/8/15: I'm NOT saying don't be there for friends who are having a hard time and need an ear, or who need to work through a difficult situation. Nor am I saying you can't be critical about the failings and injustices in the world. Positive change usually requires critical thought.]]
5. Stress will kill you.
You see, the thing about all this negativity (the regretting, the attachment to desires, the pointless complaining about impermanent things that will always continue to pass in an existence where time moves forward) is that it all causes stress. When your brain is firing off these synapses of anger, you're weakening your immune system, raising your blood pressure, and increasing your risk of heart disease, obesity, diabetes, and a plethora of other ailments, as Psychology Today points out below.
"The stress hormone, cortisol, is public health enemy number one. Scientists have known for years that elevated cortisol levels: interfere with learning and memory, lower immune function and bone density, increase weight gain, blood pressure, cholesterol, heart disease… The list goes on and on. Chronic stress and elevated cortisol levels also increase risk for depression, mental illness, and lower life expectancy. This week, two separate studies were published in Science linking elevated cortisol levels as a potential trigger for mental illness and decreased resilience—especially in adolescence. Cortisol is released in response to fear or stress by the adrenal glands as part of the fight-or-flight mechanism."
The universe is chaotic, from unpreventable superstorms of wind and rain, to unpredictable car accidents, to the capricious whims of our peers, whose personal truths can emotionally damage or physically hurt others. And every moment holds the potential to bring you any of these things, any shade along the gradient between spirit-soaring bliss and soul-crushing grief.
But regardless of what it brings your way, your choice is simple: Love or Fear. And yes, I understand it's hard to find happiness on those nights when you feel all alone in the world, when a loved one passes, when you fail that test or get fired from that job. But when these moments come, you do not have to live in regret of them. You don't have to give them constant negative attention and allow them to reshape your brain until you become a bitter, jaded, cynical old curmudgeon who no longer notices that the very fact that you're alive means you get to play blissfully in this cosmic playground, wielding the godlike power of choice.
What you can do is say: "Yes, this sucks. But what's the lesson? What can I take away from this to make me a better person? How can I take strength from this and use it to bring me closer to happiness in my next moment?" You see, a failed relationship or a bad day doesn't have to be a weight on your wings; it can be an updraft that shows you what you like and don't like, and the red flags to avoid. If your ex-partner had a trait that drove you insane, you now have the gift of knowing you don't want to waste your time with another partner who acts the same way.
If you are mindful of the lessons of your failures, there is no reason you can't make the default of every day better than the one before it. Do something new every day, learn its lesson, choose love over fear, and make every day better than the last. The more you do this, the more you will see and appreciate the beauty of this existence, and the happier you'll be.
Labels:
consciousness,
mind,
philosophy,
psychology,
self improvement
Thursday, October 29, 2015
Towards 0: Cognitive Dissonance and the Revolution
A breakdown of life, the universe, and everything, this document is my attempt at a 'manifesto' for the 21st century. It covers such topics as cognitive dissonance, non-Aristotelianism, critical rationalism, politics, religion, UFOs, consciousness, ontology, and synchronicity in a concise 60-some pages. It's an amateur work, but I'm proud of it. As of this date, it can pretty much be considered a summary of everything I know for sure. Click the link below to access the PDF.
https://drive.google.com/file/d/0B1JMPTOXDYaLV2xtX2RRdXR3Vk0/view?usp=sharing
Labels:
cognitive dissonance,
consciousness,
critical rationalism,
manifesto,
non-aristotelianism,
politics,
religion,
revolution,
synchronicity,
UFOs
Friday, August 21, 2015
Physicists Say Consciousness Might Be a State of Matter
via PBS.org
It’s not enough to have a brain. Consciousness—a hallmark of humans, mammals, birds, and even octopuses—is that mysterious force that makes all those neurons and synapses “tick” and merge into “you.” It’s what makes you alert and sensitive to your surroundings, and it’s what helps you see yourself as separate from everything else. But neuroscientists still don’t know what consciousness is, or how it’s even possible.
So MIT’s Max Tegmark is championing a new way of explaining it: he believes that consciousness is a state of matter.
By “matter,” he doesn’t mean that somewhere in the deep recesses of your brain is a small bundle of liquid, sloshing around and powering your sense of self and your awareness of the world. Instead, Tegmark suggests that consciousness arises out of a particular set of mathematical conditions, and there are varying degrees of consciousness—just as certain conditions are required to create varying states of vapor, water, and ice. In turn, understanding how consciousness functions as a separate state of matter could help us come to a more thorough understanding of why we perceive the world the way we do.
Most neuroscientists agonize over consciousness because it’s so difficult to explain. In recent years, though, they’ve tended to agree that a conscious entity must be able to store information, retrieve it efficiently, process it, and exist as a unified whole—that is, you can’t break consciousness down into smaller parts. These traits are calculable, Tegmark says. A case in point? We put labels on the strength of our current computer processing power. While they’re not human, some of our computers can operate independently, and we can use our knowledge of artificial intelligence to push these machines to new limits.
Tegmark calls his new state of matter “perceptronium.” From the Physics arXiv Blog on Medium:
Tegmark discusses perceptronium, defined as the most general substance that feels subjectively self-aware. This substance should not only be able to store and process information but in a way that forms a unified, indivisible whole. That also requires a certain amount of independence in which the information dynamics is determined from within rather than externally.
So if consciousness is a state of matter, he concludes, we might be able to apply what we know about consciousness to what we actually see:
...the problem is why we perceive the universe as the semi-classical, three dimensional world that is so familiar. When we look at a glass of iced water, we perceive the liquid and the solid ice cubes as independent things even though they are intimately linked as part of the same system. How does this happen? Out of all possible outcomes, why do we perceive this solution?
In other words, quantum mechanics dictates that the world we see is just one of an infinite number of possibilities. But why? Tegmark doesn’t have an answer, but his ideas demonstrate that there might be a more dynamic relationship between consciousness and other states of matter—that our ability to perceive the world is both a means to an end and also an end (an “object”) in itself.
Tuesday, August 18, 2015
Consciousness after clinical death: the biggest ever scientific study published
via Bioethics research library of Georgetown University
Southampton University scientists have found evidence that awareness continues for at least several minutes after clinical death, something previously thought impossible.
A recent article in the British newspaper The Daily Mail (1) featured an interview with Dr. Sam Parnia, with the lead "Consciousness may continue even after death, scientists now believe". Sam Parnia heads a multidisciplinary team at Southampton University (United Kingdom) that published a study in the official journal of the European Resuscitation Council, titled "AWARE—AWAreness during REsuscitation—A prospective study" (DOI: http://dx.doi.org/10.1016/j.resuscitation.2014.09.004) (3), which included more than 2,000 people who suffered a cardiac arrest and responded successfully to resuscitation treatment, in 15 hospitals in the United Kingdom, United States and Austria. This is the largest study of its kind to date, using rigorous methodology to exclude cases based on individual impressions, which are worthy but hold no scientific interest.
Jerry Nolan, Editor-in-Chief of the journal Resuscitation, who did not participate in the study but is considered an authority on the subject, said of the research: "Dr. Parnia and his colleagues are to be congratulated on the completion of a fascinating study that will open the door to more extensive research into what happens when we die." (2)
Consciousness after clinical death: “Whether it fades away afterwards, we do not know”
The results revealed that 40% of those who survived a cardiac arrest were aware during the time that they were clinically dead, before their hearts were restarted. In the interview, Dr. Parnia stated: "The evidence thus far suggests that in the first few minutes after death, consciousness is not annihilated. Whether it fades away afterwards, we do not know, but right after death, consciousness is not lost. We know the brain can't function when the heart has stopped beating. But in this case conscious awareness appears to have continued for up to three minutes into the period when the heart wasn't beating, even though the brain typically shuts down within 20-30 seconds after the heart has stopped. This is significant, since it has often been assumed that experiences in relation to death are likely hallucinations or illusions, occurring either before the heart stops or after the heart has been successfully restarted, but not an experience corresponding with 'real' events when the heart isn't beating. Furthermore, the detailed recollections of visual awareness in this case were consistent with verified events."
The study director continued: "A total of 2060 cardiac arrest patients were studied. Of that number, 330 survived and 140 said that they had been partly aware at the time of resuscitation". Of these latter, states Parnia, "thirty-nine per cent […] described a perception of awareness, but did not have any explicit memory of events", which suggests, according to Dr. Parnia, that "more people may have mental activity initially but then lose their memories, either due to the effects of brain injury or sedative drugs on memory recall".
He continued: "One in five said they had felt an unusual sense of peacefulness, while nearly one third said time had slowed down or speeded up. Some recalled seeing a bright light; a golden flash or the Sun shining. Others recounted feelings of fear or drowning or being dragged through deep water. 13 per cent said they had felt separated from their bodies and the same number said their senses had been heightened."
Parnia believes that, “contrary to perception, death is not a specific moment, but a potentially reversible process that occurs after any severe illness or accident causes the heart, lungs and brain to cease functioning.”
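The figures quoted above can be sanity-checked with simple arithmetic. This is just a quick cross-check of the article's own numbers, not new data:

```python
# Cross-checking the AWARE study figures quoted in the article.
patients = 2060      # total cardiac arrest patients studied
survived = 330       # survived resuscitation
partly_aware = 140   # reported some awareness during resuscitation

survival_rate = survived / patients       # roughly 16% survived
awareness_rate = partly_aware / survived  # roughly 42%, consistent with
                                          # the "40%" figure quoted earlier

print(f"{survival_rate:.0%} survived; {awareness_rate:.0%} of survivors aware")
```

The 140-of-330 ratio comes out slightly above 40%, which fits the rounded "40% of those who survived" figure the article reports.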
Exploring objectively what happens when we die
The study director said: "In this study we wanted to go beyond the emotionally charged yet poorly defined term of 'near-death experiences' to explore objectively what happens when we die. While it was not possible to absolutely prove the reality or meaning of patients' experiences and claims of awareness (due to the very low incidence, two percent, of explicit recall of visual awareness or so-called out-of-body experiences), it was impossible to disclaim them either, and more work is needed in this area."
Finally, we cite the opinion of David Wilde, a psychologist at Nottingham Trent University (United Kingdom), who is currently compiling data on out-of-body experiences in an attempt to define a pattern linking the episodes: "Most studies look retrospectively, 10 or 20 years ago, but the researchers went out looking for examples and used a really large sample size, so this gives the work a lot of validity. There is some very good evidence here that these experiences are actually happening after people have medically died. We just don't know what is going on. We are still very much in the dark about what happens when you die, and hopefully this study will help shine a scientific lens onto that." (The Telegraph, October 7, 2014)
In our opinion, the study led by Parnia merits special attention because of its scientific rigor and the prudence of its conclusions, which are supported by scientifically proven facts. We hope that this kind of study can also be extended to those who have been diagnosed as brain dead and have come back to life.
1. The Daily Mail (7 October 2014).
2. The Telegraph UK (7 October 2014).
3. Resuscitation, "AWARE—AWAreness during REsuscitation—A prospective study".
The post "Consciousness after clinical death: the biggest ever scientific study published" appeared first at Observatorio de Bioética, UCV.
Labels:
bioethics,
consciousness,
death,
sam parnia,
study