Poisoning the well

Certainty is everywhere, fundamentalism is in full bloom. Legions of authorities cloaked in total conviction tell us why we should invade country X, ban The Adventures of Huckleberry Finn in schools, or eat stewed tomatoes; how much brain damage is necessary to justify a plea of diminished capacity; the precise moment when a sperm and an egg must be treated as a human being; and why the stock market will eventually revert to historical returns. A public change of mind is national news.

This is the first paragraph from On Being Certain by Robert A. Burton. I am just a couple of pages in, but this has stopped me short.

The book is ostensibly about the biological origins of the feeling of knowing, and how it is separate from “reason” and logic. But this paragraph perfectly encapsulates how the way society frames issues ignores, and even rewards, unwarranted certainty.

Modern media has an endemic sense of certainty. Journalistic convention is based on an underlying assumption of causation, of the world in front of us as the direct result of something that can be tracked down and explained. Something happened, so there must be someone to talk to, or a bang that preceded it.

There’s no way it’s unknowable, or the result of complex interactions we can only tease out with time and after making many assumptions, dogged by problems of measurement and perception. As a result you get a lot of declarative statements, black and white.

When a professional athlete is doing well, for instance, we are furnished with stories of their extensive workouts. When they do poorly we hear about their troubled childhood and off-court issues. Or maybe they just suck now. There’s little room for underlying randomness, problematic measurements, statistical noise and mean reversion. A cause must be found and responsibility taken.
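To make the mean-reversion point concrete, here is a toy simulation of my own (not drawn from any of the books quoted here): an athlete with a fixed underlying skill level and nothing but random game-to-game noise. The names and numbers are invented; the only point is that the games following a standout stretch look worse on average, with no story about work ethic or off-court troubles required.

```python
import random

# A toy illustration (my own assumption-laden sketch, not from any quoted book):
# an athlete with constant "true" skill plus random game-to-game variation.
random.seed(1)

TRUE_SKILL = 20.0   # hypothetical points per game
NOISE_SD = 6.0      # game-to-game noise
N_GAMES = 10_000

games = [random.gauss(TRUE_SKILL, NOISE_SD) for _ in range(N_GAMES)]

# Pick out "standout" games (roughly the top 10%) and look at the game that followed each.
threshold = sorted(games)[int(0.9 * N_GAMES)]
pairs = [(games[i], games[i + 1]) for i in range(N_GAMES - 1) if games[i] >= threshold]

avg_standout = sum(g for g, _ in pairs) / len(pairs)
avg_next = sum(n for _, n in pairs) / len(pairs)

print(f"average standout game:      {avg_standout:.1f}")
print(f"average game that followed: {avg_next:.1f}")
print(f"overall average:            {sum(games) / len(games):.1f}")
# The follow-up average falls back towards 20 purely through regression to the
# mean; nothing about the player changed between the two games.
```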

The issue here is the need for a narrative. For a journalist, narrative is an important tool for grabbing someone’s attention, keeping it, and guiding them through a larger point, or for highlighting something specific and making it memorable.

But what does a narrative need? In this context it almost always entails simplistic cause and effect.

At the end of many news bulletins we get a financial update. We hear how this currency rose, a stock market over there fell, after-hours trading is stagnant, and so on. All fair enough, except that these movements are often immediately tied to a news hook.

The Yen went down because a Yeti was spotted in Turkey. The Nasdaq rose because Dutch tulips were especially vibrant this year. Sunspots.

Narrative is useful for helping audiences connect with this kind of abstraction. But there’s no way causation for activities this complex was nailed down in the time between the signal and the news piece, if it ever can be.

These triggers are often big enough to have some association, but how much? How that was figured out would honestly be an even better story.

That last line from Burton, about a change of mind being national news, also deserves unpicking.

It is fair enough that leaders changing their minds about something is news. But the problem is in how it is approached. How often is the story about the change itself rather than what underlay the previous “belief” and how that changed? How good is the information, or, if that hasn’t changed, the mental model that reinterpreted it?

Beliefs often aren’t binary propositions, especially when it comes to policy. Rather, they are about juggling trade-offs, dogged by information asymmetries and stretched resources (mental and otherwise).

But here I am also falling into the trap of treating beliefs as a function of logic and reason. As I’ve documented here, beliefs have many potential fathers. Perhaps biology is one.

Where does your belief come from?

I’ve posted a lot of stuff on this blog questioning the foundations of belief.

How our beliefs are a function of the people we surround ourselves with, or are built during our formative years and then ossify. How language and culture both inform and limit what we can take in. And that much of it appears to be stuck in formative states.

But twice this week I’ve come across arguments that our “beliefs” are actually so transitory and shallow that they are all but meaningless.

That we are so riven with contradictions and so lacking a coherent world view that our “beliefs” are little more than fleeting notions backed up by post hoc rationalisation.

First in an old New Yorker article that makes me seriously question ever trying to change someone’s mind again.

And now in my continued reading of The Hidden Half:

We asked our volunteers to choose their political priorities on a scale of 1 to 10. For example, what would you do if it came to a choice whether the country should spend more on state-provided healthcare, or spend less and cut taxes (where 1 was definitely spend more and 10 was definitely cut tax)?…A short while later, we went back to talk over with our volunteers what they’d written and why. But we cheated. We left their original answer sheet as it was–written in their own hand with their names at the top to help convince them nothing fishy was going on. But where their answers were anywhere from 3 to 7–so not a definite 1 or a definite, uncompromising 10–we flipped the question around.

These are quite long quotes, but bear with me.

The partially handwritten page in front of them was evidence of what they believed–or so they thought. And it was this (doctored) opinion that they now defended. I sat down with a man who originally said that tax cuts were more important than more spending on state healthcare–and listened as he now explained why the opposite was true. His explanation was earnest, intelligent, clear, without hesitation. He wasn’t confused. He accepted this new position as a legitimate summary of his beliefs and didn’t miss a beat in justifying them.

I’ve read of studies where people surrender their opinion in the face of a majority or authority figure.

But that we are so intellectually pliant that an unrecorded belief is essentially meaningless has quite thrown me. And that we could be dictated to by a recorded belief – even a false one – even more so.

To a certain extent this merely lines up with previous arguments in the book about complexity and simplification. But the lack of stability in the “lens” we use to understand the world – that I can’t feed you similar information over and over and expect a somewhat predictable response – has huge implications for discourse and institutions.

Let me end with a concluding remark from this section of the book:

…the ideal of holding a complete picture in our heads damns our capabilities with an impossible aspiration. The world, quite simply, is too complicated, too big, too messy, to frame in one go. The fact that we observe it in often contradictory fragments is also a measure of the enormity of the perceptual ask.

As always, my emphasis.

It’s about who you know and trust

There is a pervasive idea in Western culture that humans are essentially rational, deftly sorting fact from fiction, and, ultimately, arriving at timeless truths about the world. This line of thinking holds that humans follow the rules of logic, calculate probabilities accurately, and make decisions about the world that are perfectly informed by all available information. Conversely, failures to make effective and well-informed decisions are often chalked up to failures of human reasoning—resulting, say, from psychological tics or cognitive biases… Models of social learning help us see that this picture of human learning and rationality is dangerously distorted. What we see in these models is that even perfectly rational—albeit simple—agents who learn from others in their social network can fail to form true beliefs about the world, even when more than adequate evidence is available. In other words, individually rational agents can form groups that are not rational at all.

This is from The Misinformation Age by Cailin O’Connor and James Owen Weatherall, which I first referenced a couple of days ago.

At first glance, the idea that your beliefs are a product of the people you surround yourself with is quite banal. Of course most of us haven’t personally verified even a fraction of our “knowledge” – from the mathematical heuristics we learn in school to the actual size of Greenland.

In fact the ability to share information – both bad and good – is a major factor in our success as a species.

But unpack this a little more – as the authors of this book do masterfully – and it starts to dawn on you how devastating poor information hygiene really is. Your personal store of knowledge or model of the world isn’t so much the product of your own “filter” as of the filter of those around you. And around them. And around them….

And, as the models that O’Connor and Weatherall construct show, it isn’t just deliberate misinformation that can affect those in a social network. In fact, misinformation is only one way companies and other interested parties can subtly put their finger on the scale.

Rather, both deliberate and unconscious misunderstanding of uncertainty and randomness can filter through to the unwitting, not least by curtailing the possible or sending us down wrong tracks. And this is before adding the complexities of conformity bias, clustering and selection, distrust etc.
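The models O’Connor and Weatherall work with come from network epistemology: idealised agents run experiments, share the results with their neighbours, and update their beliefs rationally on everything they see. The sketch below is my own crude illustration of that style of model, not the authors’ actual code or parameters; the sixty/forty split, the group size and the fully connected network are all assumptions I have made for brevity. It reproduces the point in the quote above: every agent updates perfectly rationally, yet some runs still lock in the wrong conclusion, because a few unlucky early results convince everyone to stop experimenting.

```python
import random

def run_network(n_agents=6, n_trials=10, n_rounds=50, seed=None):
    """One run of a rough Bala-Goyal-style learning network (my own sketch).

    Two hypotheses about a new option: it succeeds 60% of the time (true)
    or only 40% of the time (false). Agents gather evidence about the new
    option only if they currently think it is probably better, and everyone
    sees everyone else's results (a fully connected network).
    """
    rng = random.Random(seed)
    P_TRUE, P_FALSE = 0.6, 0.4
    credences = [rng.random() for _ in range(n_agents)]  # P(new option is better)

    for _ in range(n_rounds):
        # Agents who currently favour the new option try it out this round.
        results = []
        for c in credences:
            if c > 0.5:
                results.append(sum(rng.random() < P_TRUE for _ in range(n_trials)))
        if not results:
            break  # nobody experiments any more; beliefs are frozen

        # Every agent updates on all shared evidence (Bayes, two point hypotheses).
        for i in range(n_agents):
            c = min(max(credences[i], 1e-12), 1 - 1e-12)
            odds = c / (1 - c)
            for successes in results:
                failures = n_trials - successes
                odds *= (P_TRUE ** successes * (1 - P_TRUE) ** failures) / \
                        (P_FALSE ** successes * (1 - P_FALSE) ** failures)
            credences[i] = odds / (1 + odds)

    return credences

# How often does a network of perfectly rational agents end up rejecting the better option?
runs, wrong = 1000, 0
for seed in range(runs):
    if all(c < 0.5 for c in run_network(seed=seed)):
        wrong += 1
print(f"runs that settled on the false belief: {wrong}/{runs}")
```

In the runs that go wrong nobody is lying and nobody is biased; the network simply stops gathering the evidence that would have corrected it.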

So, what to do? I don’t know. But consider this:

…we need to understand the social character of belief—and recognize that widespread falsehood is a necessary, but harmful, corollary to our most powerful tools for learning truths… When we open channels for social communication, we immediately face a trade-off. If we want to have as many true beliefs as possible, we should trust everything we hear. This way, every true belief passing through our social network also becomes part of our belief system. And if we want to minimize the number of false beliefs we have, we should not believe anything.

As always, my emphasis.

Everyone wants to be the hero

Whenever there’s an economic incentive to get people to believe something, you’re going to find organizations doing their best to get out the evidence that supports their case. But they may not think of themselves as propagandists. They may simply be engaging in the kind of motivated reasoning that all of us engage in. They’re finding the evidence that happens to support the beliefs they already have. They want whatever it is that they believe to be true. They don’t want to feel like they’re bad people. They’re trying to get the best information out there.

This from a fantastic interview with philosophers Cailin O’Connor and James Owen Weatherall. They have just written a book about how misinformation spreads.

I’ve just downloaded the book and plan to dig into it, but this passage strikes at a tendency many have to want a villain.

I often hear people talk about oil companies (etc.) suppressing climate change research. It now seems like they did know about climate change long ago, but were those executives really sitting in front of a fireplace stroking a white cat?

It seems like it would be more useful, maybe even more accurate, to view them as exactly like the rest of us. We all want to be the heroes of our own stories. None of us want to be wrong. We all dig in, especially given perverse incentives.

We all engage in motivated reasoning, among other scary mental shortcuts and fallibilities.

Rather than treating them as deviant or Machiavellian, surely it’s healthier to realise many of us would react the same way in a similar position? At the very least it won’t shut down the conversation.

Once someone in the conversation is cast as evil there is very little room to move – look at contemporary political discourse. Everyone wants to be the hero. Recognising that is the only way we get anywhere.

Time to update our democratic models

Throughout childhood and until late adolescence, our brains are building their internal models of what is out there and how it all works – physical, social, emotional and so on. After that, our core beliefs harden and we find change, according to Professor of Psychiatry Bruce Wexler, ‘difficult and painful’. The power of our many cognitive biases skews our view. We attack unwelcome information. The gravity of our personal worlds attracts us to other, similar worlds – people who ‘see it like we do’, whose opinions give us the warm, reassuring pleasure of comfort, familiarity, safety. It all thickens the illusion that our way is the true way.

I’ve just finished reading The Heretics by Will Storr. It’s part investigation, part memoir, as Storr embeds with homeopaths, faith healers, neo-Nazis and others with “weird beliefs”.

I’m slowly going through my notes and may pull out some more, but the thing that consistently struck me throughout is what this means for institutional design.

Our democracies absolutely were not built, and have not evolved, with our more sophisticated understanding of how people build beliefs and make decisions in mind: how fallible our memories are, how we capitulate to groupthink, how we react and then build post hoc justifications, and so on.

Meanwhile those who wish to take advantage of us certainly have.

In that strange, chemical and alchemical moment when an unconscious decision is made about what to believe, how much is genetic, how much is rational, how much is concerned solely with reinforcing our dearly held models of the world? And how does personality collide with all of this? How does the character of the decider – all that complex emotionality, the calculation of possible outcomes, the current state of mind, the kaleidoscope of motives, the autobiographical hero mission – pollute the process? With these questions, we have struck rock. There is no answer.

(My emphasis)