by Brian Resnick on Vox
Why it’s so hard to see our own ignorance, and what to do about it.
Julia Rohrer wants to create a radical new culture for social scientists. A personality psychologist at the Max Planck Institute for Human Development, Rohrer is trying to get her peers to publicly, willingly admit it when they are wrong.
To do this, she, along with some colleagues, started up something called the Loss of Confidence Project. It’s designed to be an academic safe space for researchers to declare for all to see that they no longer believe in the accuracy of one of their previous findings. The effort recently yielded a paper that includes six admissions of no confidence. And it’s accepting submissions until January 31.
“I do think it’s a cultural issue that people are not willing to admit mistakes,” Rohrer says. “Our broader goal is to gently nudge the whole scientific system and psychology toward a different culture,” where it’s okay, normalized, and expected for researchers to admit past mistakes and not get penalized for it.
The project is timely because a large number of scientific findings have been disproven, or become more doubtful, in recent years. One high-profile effort to retest 100 psychological experiments found only 40 percent replicated with more rigorous methods. It’s been a painful period for social scientists, who’ve had to deal with failed replications of classic studies and realize their research practices are often weak.
It’s been fascinating to watch scientists struggle to make their institutions more humble. And I believe there’s an important and underappreciated virtue embedded in this process.
For the past few months, I’ve been talking to many scholars about intellectual humility, the characteristic that allows for admission of wrongness.
I’ve come to appreciate what a crucial tool it is for learning, especially in an increasingly interconnected and complicated world. As technology makes it easier to lie and spread false information incredibly quickly, we need intellectually humble, curious people.
I’ve also realized how difficult it is to foster intellectual humility. In my reporting on this, I’ve learned there are three main challenges on the path to humility:
1. In order for us to acquire more intellectual humility, we all, even the smartest among us, need to better appreciate our cognitive blind spots. Our minds are more imperfect and imprecise than we’d often like to admit. Our ignorance can be invisible.
2. Even when we overcome that immense challenge and figure out our errors, we need to remember we won’t necessarily be punished for saying, “I was wrong.” And we need to be braver about saying it. We need a culture that celebrates those words.
3. We’ll never achieve perfect intellectual humility. So we need to choose our convictions thoughtfully.
This is all to say: Intellectual humility isn’t easy. But damn, it’s a virtue worth striving for, and failing for, in this new year.
Intellectual humility is simply “the recognition that the things you believe in might in fact be wrong,” as Mark Leary, a social and personality psychologist at Duke University, tells me.
But don’t confuse it with overall humility or bashfulness. It’s not about being a pushover; it’s not about lacking confidence or self-esteem. The intellectually humble don’t cave every time their thoughts are challenged.
Instead, it’s a method of thinking. It’s about entertaining the possibility that you may be wrong and being open to learning from the experience of others. Intellectual humility is about being actively curious about your blind spots. One illustration is in the ideal of the scientific method, where a scientist actively works against her own hypothesis, attempting to rule out any other alternative explanations for a phenomenon before settling on a conclusion. It’s about asking: What am I missing here?
It doesn’t require a high IQ or a particular skill set. It does, however, require making a habit of thinking about your limits, which can be painful. “It’s a process of monitoring your own confidence,” Leary says.
This idea is older than social psychology. Philosophers from the earliest days have grappled with the limits of human knowledge. Michel de Montaigne, the 16th-century French philosopher credited with inventing the essay, wrote that “the plague of man is boasting of his knowledge.”
Social psychologists have learned that humility is associated with other valuable character traits: People who score higher on intellectual humility questionnaires are more open to hearing opposing views. They more readily seek out information that conflicts with their worldview. They pay more attention to evidence and have a stronger self-awareness when they answer a question incorrectly.
When you ask the intellectually arrogant if they’ve heard of bogus historical events like “Hamrick’s Rebellion,” they’ll say, “Sure.” The intellectually humble are less likely to do so. Studies have found that cognitive reflection — i.e., analytic thinking — is correlated with being better able to discern fake news stories from real ones. These studies haven’t looked at intellectual humility per se, but it’s plausible there’s an overlap.
Most important of all, the intellectually humble are more likely to admit it when they are wrong. When we admit we’re wrong, we can grow closer to the truth.
One reason I’ve been thinking about the virtue of humility recently is because our president, Donald Trump, is one of the least humble people on the planet.
It was Trump who said on the night of his nomination, “I alone can fix it,” with the “it” being our entire political system. It was Trump who once said, “I have one of the great memories of all time.” More recently, Trump told the Associated Press, “I have a natural instinct for science,” in dodging a question on climate change.
A frustration I feel about Trump and the era of history he represents is that his pride and his success — he is among the most powerful people on earth — seem to be related. He exemplifies how our society rewards confidence and bluster, not truthfulness.
Yet we’ve also seen some very high-profile examples lately of how overconfident leadership can be ruinous for companies. Look at what happened to Theranos, a company that promised to change the way blood samples are drawn. It was all hype, all bluster, and it collapsed. Or consider Enron’s overconfident executives, who were often hailed for their intellectual brilliance — they ran the company into the ground with risky, suspect financial decisions.
The problem with arrogance is that the truth always catches up. Trump may be president and confident in his denials of climate change, but the changes to our environment will still ruin so many things in the future.
Why it’s so hard to see our blind spots: “Our ignorance is invisible to us”
As I’ve been reading the psychological research on intellectual humility and the character traits it correlates with, I can’t help but fume: Why can’t more people be like this?
We need more intellectual humility for two reasons. One is that our culture promotes and rewards overconfidence and arrogance (think Trump and Theranos, or the advice your career counselor gave you when going into job interviews). At the same time, when we are wrong — out of ignorance or error — and realize it, our culture doesn’t make it easy to admit it. Humbling moments too easily can turn into moments of humiliation.
So how can we promote intellectual humility in the face of both of these obstacles?
In asking that question of researchers and scholars, I’ve learned to appreciate how hard a challenge it is to foster intellectual humility.
First off, I think it’s helpful to remember how flawed the human brain can be and how prone we all are to intellectual blind spots. When you learn about how the brain actually works, how it actually perceives the world, it’s hard not to be a bit horrified, and a bit humbled.
We often can’t see — or even sense — what we don’t know. It helps to realize that it’s normal and human to be wrong.
It’s rare that a viral meme also provides a surprisingly deep lesson on the imperfect nature of the human mind. But believe it or not, the great “Yanny or Laurel” debate of 2018 fits the bill.
For the very few of you who didn’t catch it — I hope you’re recovering nicely from that coma — here’s what happened.
An audio clip says the name “Laurel” in a robotic voice. Or does it? Some people hear the clip and immediately hear “Yanny.” And both sets of people — Team Yanny and Team Laurel — are listening to the exact same clip.
Hearing, the perception of sound, ought to be a simple thing for our brains to do. That so many people can listen to the same clip and hear such different things should give us humbling pause. Hearing “Yanny” or “Laurel” in any given moment ultimately depends on a whole host of factors: the quality of the speakers you’re using, whether you have hearing loss, your expectations.
Here’s the deep lesson to draw from all of this: Much as we might tell ourselves our experience of the world is the truth, our reality will always be an interpretation. Light enters our eyes, sound waves enter our ears, chemicals waft into our noses, and it’s up to our brains to make a guess about what it all is.
Perceptual tricks like this (“the dress” is another one) reveal that our perceptions are not the absolute truth, that the physical phenomena of the universe are indifferent to whether our feeble sensory organs can perceive them correctly. We’re just guessing. Yet these phenomena leave us indignant: How could it be that our perception of the world isn’t the only one?
That sense of indignation is called naive realism: the feeling that our perception of the world is the truth. “I think we sometimes confuse effortlessness with accuracy,” Chris Chabris, a psychological researcher who co-authored a book on the challenges of human perception, tells me. When something is so immediate and effortless to us — hearing the sound of “Yanny” — it just feels true. (Similarly, psychologists find when a lie is repeated, it’s more likely to be misremembered as being true, and for a similar reason: When you’re hearing something for the second or third time, your brain becomes faster to respond to it. And that fluency is confused with truth.)
Our interpretations of reality are often arbitrary, but we’re still stubborn about them. The same observations can lead different people to wildly different conclusions.
For every sense and every component of human judgment, there are illusions and ambiguities we interpret arbitrarily.
Some are gravely serious. White people often perceive black men to be bigger, taller, and more muscular (and therefore more threatening) than they really are. That’s racial bias — but it’s also a socially constructed illusion. When we’re taught or learn to fear other people, our brains distort their potential threat. They seem more menacing, and we want to build walls around them. When we learn or are taught that other people are less than human, we’re less likely to look upon them kindly and more likely to be okay when violence is committed against them.
Not only are our interpretations of the world often arbitrary, but we’re often overconfident in them. “Our ignorance is invisible to us,” David Dunning, an expert on human blind spots, says.
You might recognize his name from the psychological phenomenon it’s attached to: the Dunning-Kruger effect. That’s where people of low ability — let’s say, those who fail to understand logic puzzles — tend to unduly overestimate their abilities. Inexperience masquerades as expertise.
An irony of the Dunning-Kruger effect is that so many people misinterpret it, are overconfident in their understanding of it, and get it wrong.
When people talk or write about the Dunning-Kruger effect, it’s almost always in reference to other people. “The fact is this is a phenomenon that visits all of us sooner or later,” Dunning says. We’re all overconfident in our ignorance from time to time. (Perhaps related: Some 65 percent of Americans believe they’re more intelligent than average, which is wishful thinking.)
Similarly, we’re overconfident in our ability to remember. Human memory is extremely malleable, prone to small changes. When we remember, we don’t wind back our minds to a certain time and relive that exact moment, yet many of us think our memories work like a videotape.
Dunning hopes his work helps people understand that “not knowing the scope of your own ignorance is part of the human condition,” he says. “But the problem with it is we see it in other people, and we don’t see it in ourselves. The first rule of the Dunning-Kruger club is you don’t know you’re a member of the Dunning-Kruger club.”
In 2012, psychologist Will Gervais scored an honor any PhD science student would covet: a co-authored paper in the journal Science, one of the top interdisciplinary scientific journals in the world. Publishing in Science doesn’t just help a researcher rise up in academic circles; it often gets them a lot of media attention too.
One of the experiments in the paper tried to see if getting people to think more rationally would make them less willing to report religious beliefs. They had people look at a picture of Rodin’s The Thinker or another statue. They thought The Thinker would nudge people to think harder, more analytically. In this more rational frame of mind, then, the participants would be less likely to endorse believing in something as faith-based and invisible as religion, and that’s what the study found. It was catnip for science journalists: one small trick to change the way we think.
But it was a small-sample study, exactly the type prone to yielding false positives and inflated effects. Several years later, another lab attempted to replicate the findings with a much larger sample size, and failed to find any evidence for the effect.
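Here’s a toy simulation of why that happens (my own sketch, not anything from the Gervais paper; every number in it is invented for illustration). When the true effect is tiny, most small studies find nothing, and the few that do cross the significance threshold wildly exaggerate the effect. A single large replication then deflates the estimate back toward reality:

```python
# A toy simulation (illustrative numbers only) of why significant results
# from small studies often evaporate in large replications.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.1            # a tiny true effect, in standard-deviation units
n_small, n_large = 20, 500
n_sims = 10_000

n_sig, effects_when_sig = 0, []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_small)
    treated = rng.normal(true_d, 1.0, n_small)
    t, p = stats.ttest_ind(treated, control)
    if p < 0.05 and t > 0:      # a "publishable" small-study result
        n_sig += 1
        effects_when_sig.append(treated.mean() - control.mean())

print(f"small studies reaching significance: {n_sig / n_sims:.1%}")
print(f"average effect among those studies: {np.mean(effects_when_sig):.2f} "
      f"(true effect: {true_d})")
# With true_d = 0.0, about 2.5% of small studies still come out
# "significant" in this direction: pure false positives.

# One large replication recovers the unimpressive truth.
control = rng.normal(0.0, 1.0, n_large)
treated = rng.normal(true_d, 1.0, n_large)
print(f"large-replication estimate: {treated.mean() - control.mean():.2f}")
```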
And while Gervais knew that the original study wasn’t rigorous, he couldn’t help but feel a twinge of discomfort.
“Intellectually, I could say the original data weren’t strong,” he says. “That’s very different from the human, personal reaction to it. Which is like, ‘Oh, shit, there’s going to be a published failure to replicate my most cited finding that’s gotten the most media attention.’ You start worrying about stuff like, ‘Are there going to be career repercussions? Are people going to think less of my other work and stuff I’ve done?’”
Gervais’s story is familiar: Many of us fear we’ll be seen as less competent, less trustworthy, if we admit wrongness. Even when we can see our own errors — which, as outlined above, is not easy to do — we’re hesitant to admit it.
But it turns out this assumption is false. As Adam Fetterman, a social psychologist at the University of Texas at El Paso, has found in a few studies, wrongness admission isn’t usually judged harshly. “When we do see someone admit that they are wrong, the wrongness admitter is seen as more communal, more friendly,” he says. It’s almost never the case, in his studies, “that when you admit you’re wrong, people think you are less competent.”
Sure, there might be some people who will troll you for your mistakes. There might be a mob on Twitter that converges in order to shame you. Some moments of humility could be humiliating. But this fear must be vanquished if we are to become less intellectually arrogant and more intellectually humble.
But even if you’re motivated to be more intellectually humble, our culture doesn’t always reward it.
The field of psychology has been reckoning with a “replication crisis,” in which many of its classic findings don’t hold up under rigorous scrutiny. Incredibly influential textbook findings in psychology — like the “ego depletion” theory of willpower or the “marshmallow test” — have been bending or breaking.
I’ve found it fascinating to watch the field of psychology deal with this. For some researchers, the reckoning has been personally unsettling. “I’m in a dark place,” Michael Inzlicht, a University of Toronto psychologist, wrote in a 2016 blog post after seeing the theory of ego depletion crumble before his eyes. “Have I been chasing puffs of smoke for all these years?”
What I’ve learned from reporting on the “replication crisis” is that intellectual humility requires support from peers and institutions. And that environment is hard to build.
“What we teach undergrads is that scientists want to prove themselves wrong,” says Simine Vazire, a psychologist and journal editor who often writes and speaks about replication issues. “But, ‘How would I know if I was wrong?’ is actually a really, really hard question to answer. It involves things like having critics yell at you and telling you that you did things wrong and reanalyze your data.”
And that’s not fun. Again: Even among scientists — people who ought to question everything — intellectual humility is hard. In some cases, researchers have refused to concede their original conclusions despite the unveiling of new evidence. (One famous psychologist under fire recently told me angrily, “I will stand by that conclusion for the rest of my life, no matter what anyone says.”)
Psychologists are human. When they reach a conclusion, it becomes hard to see things another way. Plus, the incentives for a successful career in science push researchers to publish as many positive findings as possible.
There are two solutions — among many — to make psychological science more humble, and I think we can learn from them.
One is that humility needs to be built into the standard practices of the science. And that happens through transparency. It’s becoming more commonplace for scientists to preregister — i.e., commit to — a study design before even embarking on an experiment. That way, it’s harder for them to deviate from the plan and cherry-pick results. Transparency also means making all data open and accessible to anyone who wants to conduct a reanalysis.
That “sort of builds humility into the structure of the scientific enterprise,” Chabris says. “We’re not all-knowing and all-seeing and perfect at our jobs, so we put [the data] out there for other people to check out, to improve upon it, come up with new ideas from and so on.” To be more intellectually humble, we need to be more transparent about our knowledge. We need to show others what we know and what we don’t.
And two, there needs to be more celebration of failure, and a culture that accepts it. That includes building safe places for people to admit they were wrong, like the Loss of Confidence Project.
But it’s clear this cultural change won’t come easily.
After getting a lot of positive feedback on the project, Rohrer says, “in the end, we ended up with just a handful of statements.”
There’s a personal cost to an intellectually humble outlook. For me, at least, it’s anxiety.
When I open myself up to the vastness of my own ignorance, I can’t help but feel a sudden suffocating feeling. I have just one small mind, a tiny, leaky boat upon which to go exploring knowledge in a vast and knotty sea of which I carry no clear map.
Why is it that some people never seem to wrestle with those waters? That they stand on the shore, squint their eyes, and transform that sea into a puddle in their minds and then get awarded for their false certainty? “I don’t know if I can tell you that humility will get you farther than arrogance,” says Tenelle Porter, a University of California Davis psychologist who has studied intellectual humility.
Of course, taking humility to an extreme isn’t the answer either. You don’t need to be humble about your belief that the world is round. I just think more humility, sprinkled here and there, would be quite nice.
“It’s bad to think of problems like this like a Rubik’s cube: a puzzle that has a neat and satisfying solution that you can put on your desk,” says Michael Lynch, a University of Connecticut philosophy professor. Instead, it’s a problem “you can make progress at a moment in time, and make things better. And that we can do — that we can definitely do.”
For a democracy to flourish, Lynch argues, we need a balance between convictions — our firmly held beliefs — and humility. We need convictions, because “an apathetic electorate is no electorate at all,” he says. And we need humility because we need to listen to one another. Those two things will always be in tension.
The Trump presidency suggests there’s too much conviction and not enough humility in our current culture.
“The personal question, the existential question that faces you and I and every thinking human being, is, ‘How do you maintain an open mind toward others and yet, at the same time, keep your strong moral convictions?’” Lynch says. “That’s an issue for all of us.”
To be intellectually humble doesn’t mean giving up on the ideas we love and believe in. It just means we need to be thoughtful in choosing our convictions, be open to adjusting them, seek out their flaws, and never stop being curious about why we believe what we believe. Again, that’s not easy.
You might be thinking: “All the social science cited here about how intellectual humility is correlated with open-minded thinking — what if that’s all bunk?” To that, I’d say the research isn’t perfect. Those studies are based on self-reports, where it can be hard to trust that people really do know themselves or that they’re being totally honest. And we know that social science findings are often upended.
But I’m going to take it as a point of conviction that intellectual humility is a virtue. I’ll draw that line for myself. It’s my conviction.
Could I be wrong? Maybe. Just try to convince me otherwise.
Thursday, January 10, 2019
The Blind Spot
via aeon
It’s tempting to think science gives a God’s-eye view of reality. But we forget the place of human experience at our peril
The problem of time is one of the greatest puzzles of modern physics. The first bit of the conundrum is cosmological. To understand time, scientists talk about finding a ‘First Cause’ or ‘initial condition’ – a description of the Universe at the very beginning (or at ‘time equals zero’). But to determine a system’s initial condition, we need to know the total system. We need to make measurements of the positions and velocities of its constituent parts, such as particles, atoms, fields and so forth. This problem hits a hard wall when we deal with the origin of the Universe itself, because we have no view from the outside. We can’t step outside the box in order to look within, because the box is all there is. A First Cause is not only unknowable, but also scientifically unintelligible.
The second part of the challenge is philosophical. Scientists have taken physical time to be the only real time – whereas experiential time, the subjective sense of time’s passing, is considered a cognitive fabrication of secondary importance. The young Albert Einstein made this position clear in his debate with philosopher Henri Bergson in the 1920s, when he claimed that the physicist’s time is the only time. With age, Einstein became more circumspect. Up to the time of his death, he remained deeply troubled about how to find a place for the human experience of time in the scientific worldview.
These quandaries rest on the presumption that physical time, with an absolute starting point, is the only real kind of time. But what if the question of the beginning of time is ill-posed? Many of us like to think that science can give us a complete, objective description of cosmic history, distinct from us and our perception of it. But this image of science is deeply flawed. In our urge for knowledge and control, we’ve created a vision of science as a series of discoveries about how reality is in itself, a God’s-eye view of nature.
Such an approach not only distorts the truth, but creates a false sense of distance between ourselves and the world. That divide arises from what we call the Blind Spot, which science itself cannot see. In the Blind Spot sits experience: the sheer presence and immediacy of lived perception.
Behind the Blind Spot sits the belief that physical reality has absolute primacy in human knowledge, a view that can be called scientific materialism. In philosophical terms, it combines scientific objectivism (science tells us about the real, mind-independent world) and physicalism (science tells us that physical reality is all there is). Elementary particles, moments in time, genes, the brain – all these things are assumed to be fundamentally real. By contrast, experience, awareness and consciousness are taken to be secondary. The scientific task becomes about figuring out how to reduce them to something physical, such as the behaviour of neural networks, the architecture of computational systems, or some measure of information.
This framework faces two intractable problems. The first concerns scientific objectivism. We never encounter physical reality outside of our observations of it. Elementary particles, time, genes and the brain are manifest to us only through our measurements, models and manipulations. Their presence is always based on scientific investigations, which occur only in the field of our experience.
This doesn’t mean that scientific knowledge is arbitrary, or a mere projection of our own minds. On the contrary, some models and methods of investigation work much better than others, and we can test this. But these tests never give us nature as it is in itself, outside our ways of seeing and acting on things. Experience is just as fundamental to scientific knowledge as the physical reality it reveals.
The second problem concerns physicalism. According to the most reductive version of physicalism, science tells us that everything, including life, the mind and consciousness, can be reduced to the behaviour of the smallest material constituents. You’re nothing but your neurons, and your neurons are nothing but little bits of matter. Here, life and the mind are gone, and only lifeless matter exists.
To put it bluntly, the claim that there’s nothing but physical reality is either false or empty. If ‘physical reality’ means reality as physics describes it, then the assertion that only physical phenomena exist is false. Why? Because physical science – including biology and computational neuroscience – doesn’t include an account of consciousness. This is not to say that consciousness is something unnatural or supernatural. The point is that physical science doesn’t include an account of experience; but we know that experience exists, so the claim that the only things that exist are what physical science tells us is false. On the other hand, if ‘physical reality’ means reality according to some future and complete physics, then the claim that there is nothing else but physical reality is empty, because we have no idea what such a future physics will look like, especially in relation to consciousness.
This problem is known as Hempel’s dilemma, named after the illustrious philosopher of science Carl Gustav Hempel (1905-97). Faced with this quandary, some philosophers argue that we should define ‘physical’ such that it rules out radical emergentism (that life and the mind are emergent from but irreducible to physical reality) and panpsychism (that mind is fundamental and exists everywhere, including at the microphysical level). This move would give physicalism a definite content, but at the cost of trying to legislate in advance what ‘physical’ can mean, instead of leaving its meaning to be determined by physics.
We reject this move. Whatever ‘physical’ means should be determined by physics and not armchair reflection. After all, the meaning of the term ‘physical’ has changed dramatically since the 17th century. Matter was once thought to be inert, impenetrable, rigid, and subject only to deterministic and local interactions. Today, we know that this is wrong in virtually all respects: we accept that there are several fundamental forces, particles that have no mass, quantum indeterminacy, and nonlocal relations. We should expect further dramatic changes in our concept of physical reality in the future. For these reasons, we can’t simply legislate what the term ‘physical’ can mean as a way to get out of Hempel’s dilemma.
Objectivism and physicalism are philosophical ideas, not scientific ones – even if some scientists espouse them. They don’t logically follow from what science tells us about the physical world, or from the scientific method itself. By forgetting that these perspectives are a philosophical bias, not a mere data-point, scientific materialists ignore the ways that immediate experience and the world can never be separated.
We’re not alone in our opinions. Our account of the Blind Spot is based on the work of two major philosophers and mathematicians, Edmund Husserl and Alfred North Whitehead. Husserl, the German thinker who founded the philosophical movement of phenomenology, argued that lived experience is the source of science. It’s absurd, in principle, to think that science can step outside it. The ‘life-world’ of human experience is the ‘grounding soil’ of science, and the existential and spiritual crisis of modern scientific culture – what we are calling the Blind Spot – comes from forgetting its primacy.
Whitehead, who taught at Harvard University from the 1920s, argued that science relies on a faith in the order of nature that can’t be justified by logic. That faith rests directly on our immediate experience. Whitehead’s so-called process philosophy is based on a rejection of the ‘bifurcation of nature’, which divides immediate experience into the dichotomies of mind versus body, and perception versus reality. Instead, he argued that what we call ‘reality’ is made up of evolving processes that are equally physical and experiential.
Nowhere is the materialistic bias in science more apparent than in quantum physics, the science of atoms and subatomic particles. Atoms, conceived as the building blocks of matter, have been with us since the Greeks. The discoveries of the past 100 years would seem to be a vindication for all those who have argued for an atomist, and reductionist, conception of nature. But what the Greeks, Isaac Newton and 19th-century scientists meant by the thing called an ‘atom’, and what we mean today, are very different. In fact, it’s the very notion of a ‘thing’ that quantum mechanics calls into question.
The classic model for bits of matter involves little billiard balls, clumping together and jostling around in various forms and states. In quantum mechanics, however, matter has the characteristics of both particles and waves. There are also limits to the precision with which measurements can be made, and measurements seem to disturb the reality that experimenters are trying to size up.
Today, interpretations of quantum mechanics disagree about what matter is, and what our role is with respect to it. These differences concern the so-called ‘measurement problem’: how the wave function of the electron reduces from a superposition of several states to a single state upon observation. For several schools of thought, quantum physics doesn’t give us access to the way the world fundamentally is in itself. Rather, it only lets us grasp how matter behaves in relation to our interactions with it.
According to the so-called Copenhagen interpretation of Niels Bohr, for example, the wave function has no reality outside of the interaction between the electron and the measurement device. Other approaches, such as the ‘many worlds’ and ‘hidden variables’ interpretations, seek to preserve an observer-independent status for the wave function. But this comes at the cost of adding features such as unobservable parallel universes. A relatively new interpretation known as Quantum-Bayesianism (QBism) – which combines quantum information theory and Bayesian probability theory – takes a different tack; it interprets the irreducible probabilities of a quantum state not as an element of reality, but as the degrees of belief an agent has about the outcome of a measurement. In other words, making a measurement is like making a bet on the world’s behaviour, and once the measurement is made, updating one’s knowledge. Advocates of this interpretation sometimes describe it as ‘participatory realism’, because human agency is woven into the process of doing physics as a means of gaining knowledge about the world. From this viewpoint, the equations of quantum physics don’t refer just to the observed atom but also to the observer and the atom taken as a whole in a kind of ‘observer-participancy’.
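To make the bet-and-update picture a little more concrete, here is a minimal sketch (our own illustration, not the QBist formalism itself; the detector probabilities are invented for the example) of an agent revising a degree of belief by Bayes’ rule after a noisy measurement:

```python
# A minimal sketch of "measurement as bet-updating" (illustrative numbers
# only): an agent's degree of belief that a system is in state |0> is
# revised by Bayes' rule after a noisy detector clicks.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given the prior and likelihoods."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5             # the agent starts maximally uncertain
p_click_if_zero = 0.9    # assumed: detector clicks 90% of the time for |0>
p_click_if_one = 0.2     # assumed: noise makes it click 20% of the time for |1>

# The detector clicks. On this view, the agent isn't reading off a
# pre-existing fact about the world; the agent is updating a bet about
# future experience.
belief = bayes_update(belief, p_click_if_zero, p_click_if_one)
print(f"degree of belief in |0> after one click: {belief:.2f}")  # about 0.82
```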
Participatory realism is controversial. But it’s precisely this plurality of interpretations, with a variety of philosophical implications, that undermines the sober certainty of the materialist and reductionist position on nature. In short, there’s still no simple way to remove our experience as scientists from the characterisation of the physical world.
This brings us back to the Blind Spot. When we look at the objects of scientific knowledge, we don’t tend to see the experiences that underpin them. We do not see how experience makes their presence to us possible. Because we lose sight of the necessity of experience, we erect a false idol of science as something that bestows absolute knowledge of reality, independent of how it shows up and how we interact with it.
The Blind Spot also reveals itself in the study of consciousness. Most scientific and philosophical discussions of consciousness focus on ‘qualia’ – the qualitative aspects of our experience, such as the perceived red glow of a sunset, or the sour taste of a lemon. Neuroscientists have established close correlations between such qualities and certain brain states, and they’ve been able to manipulate how we experience these qualities by acting directly on the brain. Nevertheless, there’s still no scientific explanation of qualia in terms of brain activity – or any other physical process for that matter. Nor is there any real understanding of what such an account would look like.
The mystery of consciousness includes more than just qualia. There’s also the question of subjectivity. Experiences have a subjective character; they occur in the first person. Why should a given sort of physical system have the feeling of being a subject? Science has no answer to this question.
At a deeper level, we might ask how experience comes to have a subject-object structure in the first place. Scientists and philosophers often work with the image of an ‘inside’ mind or subject grasping an outside world or object. But philosophers from different cultural traditions have challenged this image. For example, the philosopher William James (whose notion of ‘pure experience’ influenced Husserl and Whitehead) wrote in 1905 about the ‘active sense of living which we all enjoy, before reflection shatters our instinctive world for us’. That active sense of living doesn’t have an inside-outside/subject-object structure; it’s subsequent reflection that imposes this structure on experience.
More than a millennium ago, Vasubandhu, an Indian Buddhist philosopher of the 4th to 5th century CE, criticised the reification of phenomena into independent subjects versus independent objects. For Vasubandhu, the subject-object structure is a deep-seated, cognitive distortion of a causal network of phenomenal moments that are empty of an inner subject grasping an outer object.
To bring the point home, consider that in certain intense states of absorption – during meditation, dance or highly skilled performances – the subject-object structure can drop away, and we are left with a sense of sheer felt presence. How is such phenomenal presence possible in a physical world? Science is silent on this question. And yet, without such phenomenal presence, science is impossible, for presence is a precondition for any observation or measurement to be possible.
Scientific materialists will argue that the scientific method enables us to get outside of experience and grasp the world as it is in itself. As will be clear by now, we disagree; indeed, we believe that this way of thinking misrepresents the very method and practice of science.
In general terms, here’s how the scientific method works. First, we set aside aspects of human experience on which we can’t always agree, such as how things look or taste or feel. Second, using mathematics and logic, we construct abstract, formal models that we treat as stable objects of public consensus. Third, we intervene in the course of events by isolating and controlling things that we can perceive and manipulate. Fourth, we use these abstract models and concrete interventions to calculate future events. Fifth, we check these predicted events against our perceptions. An essential ingredient of this whole process is technology: machines – our equipment – that standardise these procedures, amplify our powers of perception, and allow us to control phenomena to our own ends.
The Blind Spot arises when we start to believe that this method gives us access to unvarnished reality. But experience is present at every step. Scientific models must be pulled out from observations, often mediated by our complex scientific equipment. They are idealisations, not actual things in the world. Galileo’s model of a frictionless plane, for example; the Bohr model of the atom with a small, dense nucleus with electrons circling around it in quantised orbits like planets around a sun; evolutionary models of isolated populations – all of these exist in the scientist’s mind, not in nature. They are abstract mental representations, not mind-independent entities. Their power comes from the fact that they’re useful for helping to make testable predictions. But these, too, never take us outside experience, for they require specific kinds of perceptions performed by highly trained observers.
For these reasons, scientific ‘objectivity’ can’t stand outside experience; in this context, ‘objective’ simply means something that’s true to the observations agreed upon by a community of investigators using certain tools. Science is essentially a highly refined form of human experience, based on our capacities to observe, act and communicate.
So the belief that scientific models correspond to how things truly are doesn’t follow from the scientific method. Instead, it comes from an ancient impulse – one often found in monotheistic religions – to know the world as it is in itself, as God does. The contention that science reveals a perfectly objective ‘reality’ is more theological than scientific.
Recent philosophers of science who target such ‘naive realism’ argue that science doesn’t culminate in a single picture of a theory-independent world. Rather, various aspects of the world – from chemical interactions to the growth and development of organisms, brain dynamics and social interactions – can be more or less successfully described by partial models. These models are always bound to our observations and actions, and circumscribed in their application.
The fields of complex systems theory and network science add mathematical precision to these claims by focusing on wholes rather than the reduction to parts. Complex systems theory is the study of systems, such as the brain, living organisms or the Earth’s global climate, whose behaviour is difficult to model: how the system responds depends on its state and context. Such systems exhibit self-organisation, spontaneous pattern-formation and sensitive dependence on initial conditions (very small changes to the initial conditions can lead to widely different outcomes).
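To make ‘sensitive dependence on initial conditions’ concrete, here is a minimal sketch (our illustration; the starting values are arbitrary) using the logistic map, a standard toy model of chaos:

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x -> r*x*(1-x) in its chaotic regime (r = 4.0). Illustrative values only.
r = 4.0
x, y = 0.300000, 0.300001   # two trajectories, a millionth apart

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.6f}")

# Within roughly 30 steps the two trajectories bear no resemblance to each
# other, even though they began a millionth apart: an immeasurably small
# error in the initial condition makes long-range prediction impossible.
```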
Network science analyses complex systems by modelling their elements as nodes, and the connections between them as links. It explains behaviour in terms of network topologies – the arrangements of nodes and connections – and global dynamics, rather than in terms of local interactions at the micro level.
Inspired by these perspectives, we propose an alternative vision that seeks to move beyond the Blind Spot. Our experience and what we call ‘reality’ are inextricable. Scientific knowledge is a self-correcting narrative made from the world and our experience of it evolving together. Science and its most challenging problems can be reframed once we appreciate this entanglement.
Let’s return to the problem we started with, the question of time and the existence of a First Cause. Many religions have addressed the notion of a First Cause in their mythic creation narratives. To explain where everything comes from and how it originates, they assume the existence of an absolute power or deity that transcends the confines of space and time. With few exceptions, God or gods create from without to give rise to what is within.
Unlike myth, however, science is constrained by its conceptual framework to function along a causal chain of events. The First Cause is a clear rupture of such causation – as Buddhist philosophers pointed out long ago in their arguments against the Hindu theistic position that there must be a first divine cause. How could there be a cause that was not itself an effect of some other cause? The idea of a First Cause, like the idea of a perfectly objective reality, is fundamentally theological.
These examples suggest that ‘time’ will always have a human dimension. The best we can aim for is to construct a scientific cosmological account that is consistent with what we can measure and know of the Universe from inside. The account can’t ever be a final or complete description of cosmic history. Rather, it must be an ongoing, self-correcting narrative. ‘Time’ is the backbone of this narrative; our lived experience of time is necessary to make the narrative meaningful. With this insight, it seems it’s the physicist’s time that is secondary; it’s merely a tool to describe the changes we’re able to observe and measure in the natural world. The time of the physicist, then, depends for its meaning on our lived experience of time.
We can now appreciate the deeper significance of our three scientific conundrums – the nature of matter, consciousness and time. They all point back to the Blind Spot and the need to reframe how we think about science. When we try to understand reality by focusing only on physical things outside of us, we lose sight of the experiences they point back to. The deepest puzzles can’t be solved in purely physical terms, because they all involve the unavoidable presence of experience in the equation. There’s no way to render ‘reality’ apart from experience, because the two are always intertwined.
To finally ‘see’ the Blind Spot is to wake up from a delusion of absolute knowledge. It’s also to embrace the hope that we can create a new scientific culture, in which we see ourselves both as an expression of nature and as a source of nature’s self-understanding. We need nothing less than a science nourished by this sensibility for humanity to flourish in the new millennium.
---------------
Adam Frank is professor of astrophysics at the University of Rochester in New York. He is the author of several books, the latest being Light of the Stars: Alien Worlds and the Fate of the Earth (2018).
Marcelo Gleiser is a theoretical physicist at Dartmouth College in New Hampshire, where he is the Appleton professor of natural philosophy and professor of physics and astronomy, and the director of the Institute for Cross-Disciplinary Engagement (ICE). He is the author of The Island of Knowledge (2014).
Evan Thompson is professor of philosophy and a scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia in Vancouver. He is a Fellow of the Royal Society of Canada. His latest book is Waking, Dreaming, Being (2015).
It’s tempting to think science gives a God’s-eye view of reality. But we forget the place of human experience at our peril
The problem of time is one of the greatest puzzles of modern physics. The first bit of the conundrum is cosmological. To understand time, scientists talk about finding a ‘First Cause’ or ‘initial condition’ – a description of the Universe at the very beginning (or at ‘time equals zero’). But to determine a system’s initial condition, we need to know the total system. We need to make measurements of the positions and velocities of its constituent parts, such as particles, atoms, fields and so forth. This problem hits a hard wall when we deal with the origin of the Universe itself, because we have no view from the outside. We can’t step outside the box in order to look within, because the box is all there is. A First Cause is not only unknowable, but also scientifically unintelligible.
The second part of the challenge is philosophical. Scientists have taken physical time to be the only real time – whereas experiential time, the subjective sense of time’s passing, is considered a cognitive fabrication of secondary importance. The young Albert Einstein made this position clear in his debate with philosopher Henri Bergson in the 1920s, when he claimed that the physicist’s time is the only time. With age, Einstein became more circumspect. Up to the time of his death, he remained deeply troubled about how to find a place for the human experience of time in the scientific worldview.
These quandaries rest on the presumption that physical time, with an absolute starting point, is the only real kind of time. But what if the question of the beginning of time is ill-posed? Many of us like to think that science can give us a complete, objective description of cosmic history, distinct from us and our perception of it. But this image of science is deeply flawed. In our urge for knowledge and control, we’ve created a vision of science as a series of discoveries about how reality is in itself, a God’s-eye view of nature.
Such an approach not only distorts the truth, but creates a false sense of distance between ourselves and the world. That divide arises from what we call the Blind Spot, which science itself cannot see. In the Blind Spot sits experience: the sheer presence and immediacy of lived perception.
Behind the Blind Spot sits the belief that physical reality has absolute primacy in human knowledge, a view that can be called scientific materialism. In philosophical terms, it combines scientific objectivism (science tells us about the real, mind-independent world) and physicalism (science tells us that physical reality is all there is). Elementary particles, moments in time, genes, the brain – all these things are assumed to be fundamentally real. By contrast, experience, awareness and consciousness are taken to be secondary. The scientific task becomes about figuring out how to reduce them to something physical, such as the behaviour of neural networks, the architecture of computational systems, or some measure of information.
This framework faces two intractable problems. The first concerns scientific objectivism. We never encounter physical reality outside of our observations of it. Elementary particles, time, genes and the brain are manifest to us only through our measurements, models and manipulations. Their presence is always based on scientific investigations, which occur only in the field of our experience.
This doesn’t mean that scientific knowledge is arbitrary, or a mere projection of our own minds. On the contrary, some models and methods of investigation work much better than others, and we can test this. But these tests never give us nature as it is in itself, outside our ways of seeing and acting on things. Experience is just as fundamental to scientific knowledge as the physical reality it reveals.
The second problem concerns physicalism. According to the most reductive version of physicalism, science tells us that everything, including life, the mind and consciousness, can be reduced to the behaviour of the smallest material constituents. You’re nothing but your neurons, and your neurons are nothing but little bits of matter. Here, life and the mind are gone, and only lifeless matter exists.
To put it bluntly, the claim that there’s nothing but physical reality is either false or empty. If ‘physical reality’ means reality as physics describes it, then the assertion that only physical phenomena exist is false. Why? Because physical science – including biology and computational neuroscience – doesn’t include an account of consciousness. This is not to say that consciousness is something unnatural or supernatural. The point is that physical science doesn’t include an account of experience; but we know that experience exists, so the claim that the only things that exist are what physical science tells us is false. On the other hand, if ‘physical reality’ means reality according to some future and complete physics, then the claim that there is nothing else but physical reality is empty, because we have no idea what such a future physics will look like, especially in relation to consciousness.
This problem is known as Hempel’s dilemma, named after the illustrious philosopher of science Carl Gustav Hempel (1905-97). Faced with this quandary, some philosophers argue that we should define ‘physical’ such that it rules out radical emergentism (that life and the mind are emergent from but irreducible to physical reality) and panpsychism (that mind is fundamental and exists everywhere, including at the microphysical level). This move would give physicalism a definite content, but at the cost of trying to legislate in advance what ‘physical’ can mean, instead of leaving its meaning to be determined by physics.
We reject this move. Whatever ‘physical’ means should be determined by physics and not armchair reflection. After all, the meaning of the term ‘physical’ has changed dramatically since the 17th century. Matter was once thought to be inert, impenetrable, rigid, and subject only to deterministic and local interactions. Today, we know that this is wrong in virtually all respects: we accept that there are several fundamental forces, particles that have no mass, quantum indeterminacy, and nonlocal relations. We should expect further dramatic changes in our concept of physical reality in the future. For these reasons, we can’t simply legislate what the term ‘physical’ can mean as a way to get out of Hempel’s dilemma.
Objectivism and physicalism are philosophical ideas, not scientific ones – even if some scientists espouse them. They don’t logically follow from what science tells us about the physical world, or from the scientific method itself. By forgetting that these perspectives are a philosophical bias, not a mere data-point, scientific materialists ignore the ways that immediate experience and the world can never be separated.
We’re not alone in our opinions. Our account of the Blind Spot is based on the work of two major philosophers and mathematicians, Edmund Husserl and Alfred North Whitehead. Husserl, the German thinker who founded the philosophical movement of phenomenology, argued that lived experience is the source of science. It’s absurd, in principle, to think that science can step outside it. The ‘life-world’ of human experience is the ‘grounding soil’ of science, and the existential and spiritual crisis of modern scientific culture – what we are calling the Blind Spot – comes from forgetting its primacy.
Whitehead, who taught at Harvard University from the 1920s, argued that science relies on a faith in the order of nature that can’t be justified by logic. That faith rests directly on our immediate experience. Whitehead’s so-called process philosophy is based on a rejection of the ‘bifurcation of nature’, which divides immediate experience into the dichotomies of mind versus body, and perception versus reality. Instead, he argued that what we call ‘reality’ is made up of evolving processes that are equally physical and experiential.
Nowhere is the materialistic bias in science more apparent than quantum physics, the science of atoms and subatomic particles. Atoms, conceived as the building blocks of matter, have been with us since the Greeks. The discoveries of the past 100 years would seem to be a vindication for all those who have argued for an atomist, and reductionist, conception of nature. But what the Greeks, Isaac Newton and 19th-century scientists meant by the thing called an ‘atom’, and what we mean today, are very different. In fact, it’s the very notion of a ‘thing’ that quantum mechanics calls into question.
The classic model for bits of matter involves little billiard balls, clumping together and jostling around in various forms and states. In quantum mechanics, however, matter has the characteristics of both particles and waves. There are also limits to the precision with which measurements can be made, and measurements seem to disturb the reality that experimenters are trying to size up.
Today, interpretations of quantum mechanics disagree about what matter is, and what our role is with respect to it. These differences concern the so-called ‘measurement problem’: how the wave function of the electron reduces from a superposition of several states to a single state upon observation. For several schools of thought, quantum physics doesn’t give us access to the way the world fundamentally is in itself. Rather, it only lets us grasp how matter behaves in relation to our interactions with it.
According to the so-called Copenhagen interpretation of Niels Bohr, for example, the wave function has no reality outside of the interaction between the electron and the measurement device. Other approaches, such as the ‘many worlds’ and ‘hidden variables’ interpretations, seek to preserve an observer-independent status for the wave function. But this comes at the cost of adding features such as unobservable parallel universes. A relatively new interpretation known as Quantum-Bayesianism (QBism) – which combines quantum information theory and Bayesian probability theory – takes a different tack; it interprets the irreducible probabilities of a quantum state not as an element of reality, but as the degrees of belief an agent has about the outcome of a measurement. In other words, making a measurement is like making a bet on the world’s behaviour, and once the measurement is made, updating one’s knowledge. Advocates of this interpretation sometimes describe it as ‘participatory realism’, because human agency is woven into the process of doing physics as a means of gaining knowledge about the world. From this viewpoint, the equations of quantum physics don’t refer just to the observed atom but also to the observer and the atom taken as a whole in a kind of ‘observer-participancy’.
Participatory realism is controversial. But it’s precisely this plurality of interpretations, with a variety of philosophical implications, that undermines the sober certainty of the materialist and reductionist position on nature. In short, there’s still no simple way to remove our experience as scientists from the characterisation of the physical world.
This brings us back to the Blind Spot. When we look at the objects of scientific knowledge, we don’t tend to see the experiences that underpin them. We do not see how experience makes their presence to us possible. Because we lose sight of the necessity of experience, we erect a false idol of science as something that bestows absolute knowledge of reality, independent of how it shows up and how we interact with it.
The Blind Spot also reveals itself in the study of consciousness. Most scientific and philosophical discussions of consciousness focus on ‘qualia’ – the qualitative aspects of our experience, such as the perceived red glow of a sunset, or the sour taste of a lemon. Neuroscientists have established close correlations between such qualities and certain brain states, and they’ve been able to manipulate how we experience these qualities by acting directly on the brain. Nevertheless, there’s still no scientific explanation of qualia in terms of brain activity – or any other physical process for that matter. Nor is there any real understanding of what such an account would look like.
The mystery of consciousness includes more than just qualia. There’s also the question of subjectivity. Experiences have a subjective character; they occur in the first person. Why should a given sort of physical system have the feeling of being a subject? Science has no answer to this question.
At a deeper level, we might ask how experience comes to have a subject-object structure in the first place. Scientists and philosophers often work with the image of an ‘inside’ mind or subject grasping an outside world or object. But philosophers from different cultural traditions have challenged this image. For example, the philosopher William James (whose notion of ‘pure experience’ influenced Husserl and Whitehead) wrote in 1905 about the ‘active sense of living which we all enjoy, before reflection shatters our instinctive world for us’. That active sense of living doesn’t have an inside-outside/subject-object structure; it’s subsequent reflection that imposes this structure on experience.
More than a millennium ago, Vasubandhu, an Indian Buddhist philosopher of the 4th to 5th century CE, criticised the reification of phenomena into independent subjects versus independent objects. For Vasubandhu, the subject-object structure is a deep-seated, cognitive distortion of a causal network of phenomenal moments that are empty of an inner subject grasping an outer object.
To bring the point home, consider that in certain intense states of absorption – during meditation, dance or highly skilled performances – the subject-object structure can drop away, and we are left with a sense of sheer felt presence. How is such phenomenal presence possible in a physical world? Science is silent on this question. And yet, without such phenomenal presence, science is impossible, for presence is a precondition for any observation or measurement to be possible.
Scientific materialists will argue that the scientific method enables us to get outside of experience and grasp the world as it is in itself. As will be clear by now, we disagree; indeed, we believe that this way of thinking misrepresents the very method and practice of science.
In general terms, here’s how the scientific method works:

1. We set aside aspects of human experience on which we can’t always agree, such as how things look or taste or feel.

2. Using mathematics and logic, we construct abstract, formal models that we treat as stable objects of public consensus.

3. We intervene in the course of events by isolating and controlling things that we can perceive and manipulate.

4. We use these abstract models and concrete interventions to calculate future events.

5. We check these predicted events against our perceptions.

An essential ingredient of this whole process is technology: machines – our equipment – that standardise these procedures, amplify our powers of perception, and allow us to control phenomena to our own ends. A schematic sketch of this loop, in code, follows.
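Here is that sketch in Python: an invented dataset fitted with a deliberately idealised model (a straight line), used to predict a future observation and then checked against it. The data and the model are ours, chosen only to make the loop visible.

    # Schematic sketch of the model-predict-check loop described above.
    # The "model" is a deliberate idealisation: a straight line fitted to
    # noisy observations. All data here are invented for illustration.

    observations = [(0.0, 0.1), (1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y)

    # Step 2: construct an abstract, formal model (least-squares line y = a*x + b).
    n = len(observations)
    sx = sum(x for x, _ in observations)
    sy = sum(y for _, y in observations)
    sxx = sum(x * x for x, _ in observations)
    sxy = sum(x * y for x, y in observations)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n

    # Step 4: use the model to calculate a future event.
    prediction = a * 4.0 + b

    # Step 5: check the predicted event against a new perception.
    new_observation = 8.0  # what we actually measure at x = 4.0 (invented)
    print(f"predicted {prediction:.2f}, observed {new_observation:.2f}")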
The Blind Spot arises when we start to believe that this method gives us access to unvarnished reality. But experience is present at every step. Scientific models are abstracted from observations, which are often mediated by our complex scientific equipment. They are idealisations, not actual things in the world. Consider Galileo’s model of a frictionless plane; the Bohr model of the atom, with a small, dense nucleus orbited by electrons in quantised orbits like planets around a sun; evolutionary models of isolated populations – all of these exist in the scientist’s mind, not in nature. They are abstract mental representations, not mind-independent entities. Their power comes from the fact that they’re useful for making testable predictions. But these, too, never take us outside experience, for they require specific kinds of perceptions performed by highly trained observers.
For these reasons, scientific ‘objectivity’ can’t stand outside experience; in this context, ‘objective’ simply means something that’s true to the observations agreed upon by a community of investigators using certain tools. Science is essentially a highly refined form of human experience, based on our capacities to observe, act and communicate.
So the belief that scientific models correspond to how things truly are doesn’t follow from the scientific method. Instead, it comes from an ancient impulse – one often found in monotheistic religions – to know the world as it is in itself, as God does. The contention that science reveals a perfectly objective ‘reality’ is more theological than scientific.
Recent philosophers of science who target such ‘naive realism’ argue that science doesn’t culminate in a single picture of a theory-independent world. Rather, various aspects of the world – from chemical interactions to the growth and development of organisms, brain dynamics and social interactions – can be more or less successfully described by partial models. These models are always bound to our observations and actions, and circumscribed in their application.
The fields of complex systems theory and network science add mathematical precision to these claims by focusing on wholes rather than the reduction to parts. Complex systems theory is the study of systems, such as the brain, living organisms or the Earth’s global climate, whose behaviour is difficult to model: how the system responds depends on its state and context. Such systems exhibit self-organisation, spontaneous pattern-formation and sensitive dependence on initial conditions (very small changes to the initial conditions can lead to widely different outcomes).
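Sensitive dependence is easy to exhibit. Here is a minimal sketch using the logistic map, a textbook chaotic system (not any model from the essay itself): two trajectories that begin one part in a million apart soon bear no resemblance to one another.

    # Sensitive dependence on initial conditions, shown with the logistic
    # map x -> r*x*(1 - x) in its chaotic regime (r = 4). Two trajectories
    # that start a hair apart end up completely uncorrelated.

    r = 4.0
    x, y = 0.400000, 0.400001  # initial conditions differing by 1e-6

    for step in range(1, 41):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, gap = {abs(x - y):.6f}")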
Network science analyses complex systems by modelling their elements as nodes, and the connections between them as links. It explains behaviour in terms of network topologies – the arrangements of nodes and connections – and global dynamics, rather than in terms of local interactions at the micro level.
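As a toy illustration of this vocabulary, here is a minimal sketch in Python: an invented five-node network whose connections are stored as an adjacency list, with node degrees read off as a simple global property of the topology.

    # A minimal sketch of the network-science vocabulary: elements become
    # nodes, connections become links, and behaviour is read off global
    # topology (here, just each node's degree, which reveals the hubs).
    # The network itself is invented for illustration.

    links = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

    # Build an adjacency list (the network's topology).
    adjacency = {}
    for u, v in links:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set()).add(u)

    # A simple global property: node degrees, revealing "A" as a hub.
    degrees = {node: len(neighbours) for node, neighbours in adjacency.items()}
    print(degrees)  # {'A': 3, 'B': 2, 'C': 2, 'D': 2, 'E': 1}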
Inspired by these perspectives, we propose an alternative vision that seeks to move beyond the Blind Spot. Our experience and what we call ‘reality’ are inextricable. Scientific knowledge is a self-correcting narrative made from the world and our experience of it evolving together. Science and its most challenging problems can be reframed once we appreciate this entanglement.
Let’s return to the problem we started with, the question of time and the existence of a First Cause. Many religions have addressed the notion of a First Cause in their mythic creation narratives. To explain where everything comes from and how it originates, they assume the existence of an absolute power or deity that transcends the confines of space and time. With few exceptions, God or gods create from without to give rise to what is within.
Unlike myth, however, science is constrained by its conceptual framework to function along a causal chain of events. The First Cause is a clear rupture of such causation – as Buddhist philosophers pointed out long ago in their arguments against the Hindu theistic position that there must be a first divine cause. How could there be a cause that was not itself an effect of some other cause? The idea of a First Cause, like the idea of a perfectly objective reality, is fundamentally theological.
These examples suggest that ‘time’ will always have a human dimension. The best we can aim for is to construct a scientific cosmological account that is consistent with what we can measure and know of the Universe from inside. The account can’t ever be a final or complete description of cosmic history. Rather, it must be an ongoing, self-correcting narrative. ‘Time’ is the backbone of this narrative; our lived experience of time is necessary to make the narrative meaningful. On this view, it is the physicist’s time that is secondary: merely a tool to describe the changes we’re able to observe and measure in the natural world. The time of the physicist, then, depends for its meaning on our lived experience of time.
We can now appreciate the deeper significance of our three scientific conundrums – the nature of matter, consciousness and time. They all point back to the Blind Spot and the need to reframe how we think about science. When we try to understand reality by focusing only on physical things outside of us, we lose sight of the experiences they point back to. The deepest puzzles can’t be solved in purely physical terms, because they all involve the unavoidable presence of experience in the equation. There’s no way to render ‘reality’ apart from experience, because the two are always intertwined.
To finally ‘see’ the Blind Spot is to wake up from a delusion of absolute knowledge. It’s also to embrace the hope that we can create a new scientific culture, in which we see ourselves both as an expression of nature and as a source of nature’s self-understanding. We need nothing less than a science nourished by this sensibility for humanity to flourish in the new millennium.
---------------
Adam Frank is professor of astrophysics at the University of Rochester in New York. He is the author of several books, the latest being Light of the Stars: Alien Worlds and the Fate of the Earth (2018).
Marcelo Gleiser is a theoretical physicist at Dartmouth College in New Hampshire, where he is the Appleton professor of natural philosophy and professor of physics and astronomy, and the director of the Institute for Cross-Disciplinary Engagement (ICE). He is the author of The Island of Knowledge (2014).
Evan Thompson is professor of philosophy and a scholar at the Peter Wall Institute for Advanced Studies at the University of British Columbia in Vancouver. He is a Fellow of the Royal Society of Canada. His latest book is Waking, Dreaming, Being (2015).