There probably isn’t one reason

I finally finished The Hidden Half. It’s one of the best books I’ve read in a while, and ties together much of my reading and thinking over the past year or so.

As a recovering determinist, I relish the celebration of uncertainty and the unknown. I’ve written quite a bit as I’ve read along. But here’s one more thought – the implications of uncertainty for silver bullets.

As much as we try to make the world bend to our will, there likely isn’t just one reason for anything. And so there probably isn’t one solution for it either.

…The biggest things are unusual by definition. Unusual things often result from an alignment or interaction of many circumstances – that’s why they turn out big. By their nature, these will be harder to understand. However, this does not mean we have failed to research them as well as reasonably possible: in a world of enigmatic influences, research rigour does not equal nailing down. The best answer might be that there is no answer.

The bigger the thing you’re trying to tackle or explain, the more influences it will likely have – including ones you can’t see or measure. If you remove any of these Jenga blocks, will your notion still stand up?

This makes transplanting explanations or “solutions” from one context to another incredibly problematic. Your idea may have “fixed” the problem over there – and that’s a big if. But do you really know why? What about all the factors underlying that?

History is littered with simple solutions to complex problems, and we’re all prone to creating panaceas. Modern democracies, especially, incentivise simple explanations rather than waiting, seeing and experimenting.

But the world defies being put in a box.

This is why public policies so often miss or fail entirely. Complex problems have complex causes and likely require nuanced and adaptable solutions. That it’s worked before or fits a particular world view isn’t enough.

…A favourite big thing, a silver bullet, has so many advantages: it’s easier to sell, to describe, to understand, to put into practice. But whether the thing we pick would travel, on its own, to another context is another question. Silver bullets seldom work once, never mind twice.

As I have written previously, what this requires is a little more humility, as well as institutions and a culture that can accept uncertainty and not knowing. Working with best approximations and striving to improve them.

As always my emphasis.

A plea for more humility about what we “know”

…we can’t help turning up our pattern-making instinct to 11–when life offers only a 5. Too often, we make bold claims about big forces with law-like effects, but with culpable overconfidence that leads us to waste time, money, talent and energy, and detract from real progress… I’d like our claims to be more proportionate to the awkwardness of the task. Every new generation needs reminding of the overconfidence of every previous generation, of how much there is still to know and do, and, above all, how resistant the raw materials of life can be.

Reading books like Thinking In Bets, The Lady Tasting Tea and The Drunkard’s Walk, it’s hard not to be thoroughly disillusioned with the deterministic model of the universe most of us carry in our heads.

Green tea causes weight loss, your aunt tells you. You should try to get into that school ’cause it’s the best, they say.

In fact, it’s tempting to draw this back to school, where we’re taught to find the right answer, not the best approximation of one. Confounding, selection, randomness and the dozens of other thorns in simple causation aren’t even really hinted at.

It’s like a civilisation-wide Dunning-Kruger effect. We engage in pattern matching, fuelled by ascertainment and confirmation bias.

And, most importantly for The Hidden Half, where these excerpts are from, we try to boil all of this down into iron laws. The “noise” that inevitably screws up these simple heuristics is willed away or ignored, to be settled later.

But it’s here where author Michael Blastland really shines – in a plea to embrace the beauty of that which confounds our attempts at simplification.

I’m only a couple of chapters in but it’s already a rollicking ride.

I’ve no desire to dismiss or discourage genuine, careful and humble efforts to understand, and no desire either to knock down robust houses of brick alongside the mansions of straw. It would be easy, but deluded, to see this book as part of an anti-science cynicism that says everything is uncertain, and therefore nothing can be done. I reject that view entirely. On the contrary, I want more robust evidence precisely so that our decisions and actions can be more reliable. I sympathize entirely with how difficult it is to do that well. I applaud those who devote themselves to the problem conscientiously and carefully. This is why we must recognize our limitations, try to understand how they arise, tread more carefully and test what we know vigorously. It was once said that at certain times the world is over-run by false scepticism, but of the true kind there can never be enough. This book aspires to the true kind. The goal is not cynicism; it is to do better.

As always my emphasis.

What are school tests trying to measure?

I’ve just started reading The Lady Tasting Tea, the story of statistics in/and modern science. But one of the early examples has gotten me thinking – how would a scientist go about testing the general intelligence/retained knowledge of a group of students?

Given:

Whatever we measure is really part of a random scatter, whose probabilities are described by a mathematical function, the distribution function.

It seems unlikely a contemporary scientist dropped onto planet B would propose the kind of one-and-done tests that students generally encounter at the end of subjects, semesters, years and school itself.

From the book:

Consider a simple example from the experience of a teacher with a particular student. The teacher is interested in finding some measure of how much the child has learned. To this end, the teacher “experiments” by giving the child a group of tests. Each test is marked on a scale from 0 to 100. Any one test provides a poor estimate of how much the child knows. It may be that the child did not study the few things that were on that test but knows a great deal about things that were not on the test. The child may have had a headache the day she took a particular test. The child may have had an argument with parents the morning of a particular test. For many reasons, one test does not provide a good estimate of knowledge. So, the teacher gives a set of tests. The average score from all those tests is taken as a better estimate of how much the child knows. How much the child knows is the outcome. The scores on individual tests are the data.
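The teacher’s “experiment” is easy to sketch numerically. Here’s a minimal simulation of the idea (the numbers and variable names are my own illustration, not from the book): if each score is the child’s true level plus random scatter – headaches, arguments, unlucky topics – then the average of many tests is a much steadier estimate than any single one.

```python
import random

random.seed(42)

TRUE_KNOWLEDGE = 70   # the child's "real" level on the 0-100 scale (assumed)
NOISE = 12            # spread of the random scatter around that level (assumed)

def one_test():
    # A single score: truth plus random noise, clamped to the 0-100 scale.
    return max(0.0, min(100.0, random.gauss(TRUE_KNOWLEDGE, NOISE)))

def average_of(n):
    # The teacher's better estimate: the mean of n test scores.
    return sum(one_test() for _ in range(n)) / n

# Single tests swing wildly; the average of a set settles near the truth.
singles = [round(one_test()) for _ in range(5)]
print("five single tests:", singles)
print("average of ten tests:", round(average_of(10), 1))
```

Nothing deep is going on – it’s just the law of large numbers – but it makes the book’s point concrete: the individual scores are the data, and the average is the (better) estimate of the outcome.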

I’m quite biased here as I’m absolutely horrid at standardised testing – for a variety of reasons, including medical ones. But it does seem to be yet another aspect of schooling that should be updated given our increasingly sophisticated understanding of the world. Randomness is not to be messed with.

(As usual my emphasis)