It’s about who you know and trust

There is a pervasive idea in Western culture that humans are essentially rational, deftly sorting fact from fiction, and, ultimately, arriving at timeless truths about the world. This line of thinking holds that humans follow the rules of logic, calculate probabilities accurately, and make decisions about the world that are perfectly informed by all available information. Conversely, failures to make effective and well-informed decisions are often chalked up to failures of human reasoning—resulting, say, from psychological tics or cognitive biases… Models of social learning help us see that this picture of human learning and rationality is dangerously distorted. What we see in these models is that even perfectly rational—albeit simple—agents who learn from others in their social network can fail to form true beliefs about the world, even when more than adequate evidence is available. In other words, individually rational agents can form groups that are not rational at all.

This is from The Misinformation Age by Cailin O’Connor and James Owen Weatherall, which I first referenced a couple of days ago.

At first glance, the idea that your beliefs are a product of the people you surround yourself with seems quite banal. Of course, most of us haven’t personally verified even a fraction of our “knowledge” – from the mathematical heuristics we learn in school to the actual size of Greenland.

In fact the ability to share information – both bad and good – is a major factor in our success as a species.

But unpack this a little more – as the authors of this book do masterfully – and it starts to dawn on you just how devastating poor information hygiene really is. Your personal store of knowledge, or model of the world, isn’t so much the product of your own “filter” as of the filters of those around you. And around them. And around them…

And, as the models that O’Connor and Weatherall construct show, it isn’t just deliberate misinformation that can affect those in a social network. Misinformation is only one of the ways that companies and other interested parties can subtly put a finger on the scale.

Rather, both deliberate and unconscious misunderstanding of uncertainty and randomness can filter through to the unwitting, not least by curtailing the possible or sending us down wrong tracks. And this is before adding the complexities of conformity bias, clustering and selection, distrust etc.

So, what to do? I don’t know. But consider this:

…we need to understand the social character of belief—and recognize that widespread falsehood is a necessary, but harmful, corollary to our most powerful tools for learning truths… When we open channels for social communication, we immediately face a trade-off. If we want to have as many true beliefs as possible, we should trust everything we hear. This way, every true belief passing through our social network also becomes part of our belief system. And if we want to minimize the number of false beliefs we have, we should not believe anything.

As always, my emphasis.
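The models in the book are richer than this (agents run experiments and share evidence with their neighbours), but the core phenomenon – sensible-seeming individual updating producing a collectively wrong outcome – can be sketched with a toy DeGroot-style averaging model. Everything below (the hub agent, the trust weights, the credences) is invented for illustration, not taken from the book:

```python
# Each agent repeatedly replaces its belief with a trust-weighted average
# of everyone's beliefs (a toy DeGroot model -- not the authors' actual
# models, just a sketch of the same phenomenon).

N = 5
HUB = 0

# beliefs[i] = agent i's credence that some claim is true.
# Four agents hold a well-evidenced credence of 0.9; the hub holds 0.1.
beliefs = [0.1, 0.9, 0.9, 0.9, 0.9]

# trust[i][j] = how much agent i weights agent j each round (rows sum to 1).
trust = [[0.0] * N for _ in range(N)]
trust[HUB][HUB] = 0.9
for j in range(1, N):
    trust[HUB][j] = 0.025        # the hub barely listens to anyone else
    trust[j][HUB] = 0.7          # everyone else listens mostly to the hub
    trust[j][j] = 0.3

for _ in range(200):             # iterate updates until beliefs settle
    beliefs = [sum(trust[i][j] * beliefs[j] for j in range(N))
               for i in range(N)]

print([round(b, 3) for b in beliefs])   # every agent ends up near 0.2
```

Four of the five agents start with a well-supported credence of 0.9, yet the whole group settles near 0.2 – purely because of who listens to whom.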

Let’s stop poisoning ourselves

A horrifying working paper looks at the impact of traffic pollution on students in schools downwind of a major US highway:

We find that attending school where prevailing winds place it downwind of a nearby highway more than 60% of the time is associated with 0.040 of a standard deviation lower test scores, a 4.1 percentage point increase in behavioral incidents, and a 0.5 percentage point increase in the rate of absences over the school year, compared to attending a school upwind of a highway the same distance away

6.4 million American children attend a school within 250m of a highway, according to the researchers.

Children may be particularly susceptible to the carbon monoxide and other pollutants that come out of exhaust pipes, but really any of us who work or live near traffic should be worried.

If climate change is too abstract, far away or big for us to tackle, this shouldn’t be. The cost-benefit calculus of road transport (for starters) is clear and present. At the very least we all need to drive less.

And we’re not just talking about immediate impacts – think of how some of these issues (lower test scores, poor behaviour etc.) compound, as people get labelled troublesome or miss opportunities.

Air pollution, in other words, feeds inequality.

Other papers cited in this study illustrate the breadth of the issue:

…Currie and colleagues found that high levels of carbon monoxide were associated with reduced school attendance… Ransom and Pope (1992) similarly found a relationship between pollution and school attendance, with more small particulate matter in the air associated with more absences… Chang et al. (2016a, 2016b) use hourly variation to show that increased exposure to fine particulate matter decreases productivity per hour of pear packers and call center workers, while Archsmith, Heyes, and Saberian (2018) showed that baseball umpires make more mistakes on days with higher pollution…Herrnstadt and Muehlegger (2015) argue that traffic pollution influences impulse control. They showed that short-term hourly variation in wind direction in Chicago lead to higher crime in areas downwind of highways than on the opposite upwind side…

Also, as an addendum to my last post about randomness and test scores:

Marcotte (2017) used the variation in air quality on different testing days and found that children who took tests on worse days for pollen and fine airborne particulate matter had worse outcomes.

As usual my emphasis.

What are school tests trying to measure?

I’ve just started reading The Lady Tasting Tea, the story of statistics in/and modern science. But one of the early examples has gotten me thinking – how would a scientist go about testing the general intelligence/retained knowledge of a group of students?

Given:

Whatever we measure is really part of a random scatter, whose probabilities are described by a mathematical function, the distribution function.

It seems unlikely a contemporary scientist dropped onto planet B would propose the kind of one-and-done tests that students generally encounter at the end of subjects, semesters, years and school itself.

From the book:

Consider a simple example from the experience of a teacher with a particular student. The teacher is interested in finding some measure of how much the child has learned. To this end, the teacher “experiments” by giving the child a group of tests. Each test is marked on a scale from 0 to 100. Any one test provides a poor estimate of how much the child knows. It may be that the child did not study the few things that were on that test but knows a great deal about things that were not on the test. The child may have had a headache the day she took a particular test. The child may have had an argument with parents the morning of a particular test. For many reasons, one test does not provide a good estimate of knowledge. So, the teacher gives a set of tests. The average score from all those tests is taken as a better estimate of how much the child knows. How much the child knows is the outcome. The scores on individual tests are the data.
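The point about random scatter can be made concrete with a few lines of simulation. The numbers below (a “true” knowledge level of 72, day-to-day noise of 12 points) are made up for illustration:

```python
import random

random.seed(0)

TRUE_KNOWLEDGE = 72   # hypothetical "true" score out of 100
NOISE = 12            # assumed spread of day-to-day randomness

def one_test():
    """A single test: the true level plus random scatter, clipped to 0-100."""
    score = random.gauss(TRUE_KNOWLEDGE, NOISE)
    return max(0.0, min(100.0, score))

single = one_test()
average = sum(one_test() for _ in range(20)) / 20

print(f"one-and-done test:   {single:.1f}")
print(f"average of 20 tests: {average:.1f}")  # typically much closer to 72
```

A single draw can easily land ten or more points from the true level; the average of twenty draws almost never does. That is all the teacher in the example is exploiting.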

I’m quite biased here, as I’m absolutely horrid at standardised testing – for a variety of reasons, including medical ones. But it does seem to be yet another aspect of schooling that should be updated given our increasingly sophisticated understanding of the world. Randomness is not to be messed with.

(As usual my emphasis)

Everyone wants to be the hero

Whenever there’s an economic incentive to get people to believe something, you’re going to find organizations doing their best to get out the evidence that supports their case. But they may not think of themselves as propagandists. They may simply be engaging in the kind of motivated reasoning that all of us engage in. They’re finding the evidence that happens to support the beliefs they already have. They want whatever it is that they believe to be true. They don’t want to feel like they’re bad people. They’re trying to get the best information out there.

This is from a fantastic interview with philosophers Cailin O’Connor and James Owen Weatherall, who have just written a book about how misinformation spreads.

I’ve just downloaded the book and plan to dig into it, but this passage strikes at a tendency many have to want a villain.

I often hear people talk about oil companies (etc.) suppressing climate change research. It now seems like they did know about climate change long ago, but were those executives really sitting in front of a fireplace stroking a white cat?

It seems like it would be more useful, maybe even more accurate, to view them as exactly like the rest of us. We all want to be the heroes of our own stories. None of us want to be wrong. We all dig in, especially given perverse incentives.

We all engage in motivated reasoning, among other scary mental shortcuts and fallibilities.

Rather than treating them as deviant or Machiavellian, surely it’s healthier to realise that many of us would react the same way in a similar position? At the very least it won’t shut down the conversation.

Once someone in the conversation is cast as evil there is very little room to move – look at contemporary political discourse. Everyone wants to be the hero, and accepting that is the only way we get anywhere.

How to ask a good question

My coding odyssey continues, and as a result I stumbled across the “how to ask a good question” page on Stack Overflow. It’s a site for people to ask questions of a large community of coders (I shan’t share what led me to browse the help centre of a help centre 😇).

While much of the page is understandably specific to coding questions, reading it suggested some universal guidance for asking good questions.

Pretend you’re talking to a busy colleague and have to sum up your entire question in one sentence: what details can you include that will help someone identify and solve your problem? Include any error messages, key APIs, or unusual circumstances that make your question different from similar questions already on the site…

• If you’re having trouble summarizing the problem, write the title last – sometimes writing the rest of the question first can make it easier to describe the problem.

So often people arrive with a simple question or problem that they have buried in so much story and minutiae as to make it boring or unintelligible. But if you really want to find an answer, and quickly, it’s probably best to approach it like clickbait.

What are the details that will hook me into your question? Can you summarise it so I can quickly confirm whether I even know the answer?

The busy colleague is a good device. When I first started as a journalist my producer told me something similar – I should pitch ideas to him as if he was a stranger in a pub who would leave or find me boring if given a long preamble.

Then:

In the body of your question, start by expanding on the summary you put in the title. Explain how you encountered the problem you’re trying to solve, and any difficulties that have prevented you from solving it yourself. The first paragraph in your question is the second thing most readers will see, so make it as engaging and informative as possible.

Help others reproduce the problem

Not all questions benefit from including code. But if your problem is with code you’ve written, you should include some. But don’t just copy in your entire program! Not only is this likely to get you in trouble if you’re posting your employer’s code, it likely includes a lot of irrelevant details that readers will need to ignore when trying to reproduce the problem. Here are some guidelines:

Include just enough code to allow others to reproduce the problem. For help with this, read How to create a Minimal, Complete, and Verifiable example.

Good questions contain context and are rarely just one question (one more reason press conferences and panels are bad formats).

When I’m really trying to pick someone’s brain or find an answer, I often break questions down into their component parts. If it’s code, we first need to establish that we’re all using the same version – and some equivalent applies to basically everything.

Questions can go awry when we’re each making assumptions about intentions, definitions, steps etc. So if you’re trying to find something out it’s often best to start small, at first principles.

You can then walk through the problem with the person, just as respondents on Stack Overflow will try to replicate problems. Sometimes you may discover you’re asking the wrong question – sometimes the assumptions baked into the question turn out to be the real answer.

Obviously this only works in a medium where you can go back and forth.

Lastly there needs to be some amount of good faith. This is why I like anonymous questions delivered by a moderator at events, and why short interviews make little sense. Is the question a genuine attempt at knowledge or is it trying to convey something else?

As usual my emphasis.
