A plea for more humility about what we “know”

…we can’t help turning up our pattern-making instinct to 11–when life offers only a 5. Too often, we make bold claims about big forces with law-like effects, but with culpable overconfidence that leads us to waste time, money, talent and energy, and detract from real progress… I’d like our claims to be more proportionate to the awkwardness of the task. Every new generation needs reminding of the overconfidence of every previous generation, of how much there is still to know and do, and, above all, how resistant the raw materials of life can be.

Reading books like Thinking in Bets, The Lady Tasting Tea and The Drunkard’s Walk, it’s hard not to become thoroughly disenchanted with the deterministic model of the universe most of us carry in our heads.

Green tea causes weight loss, your aunt tells you. You should try to get into that school because it’s the best, they say.

In fact, it’s tempting to trace this back to school, where we’re taught to find the right answer, not the best approximation of one. Confounding, selection bias, randomness and the dozens of other thorns in the side of simple causation aren’t even really hinted at.
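To make “confounding” concrete, here’s a throwaway simulation of my own, with made-up numbers and nothing from any of the books above: suppose health-conscious people both drink more green tea and weigh less. Tea and weight will then correlate nicely even though the tea does nothing.

```python
# A throwaway simulation (mine, not from any of the books): a hidden
# confounder, health-consciousness, drives both green tea drinking and
# lower weight, so the two correlate without any causal link.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    health_conscious = random.random() < 0.5                   # hidden confounder
    drinks_tea = random.random() < (0.7 if health_conscious else 0.2)
    weight = random.gauss(72 if health_conscious else 82, 8)   # weight in kg
    rows.append((drinks_tea, weight))

tea = [w for drinks, w in rows if drinks]
no_tea = [w for drinks, w in rows if not drinks]
print(f"Tea drinkers average {sum(tea) / len(tea):.1f} kg, "
      f"non-drinkers {sum(no_tea) / len(no_tea):.1f} kg, "
      "and the tea never touched the scales.")
```

The gap between the two groups is real; the causal story your aunt tells about it isn’t.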

It’s like a civilisation-wide Dunning-Kruger effect. We engage in pattern matching, fuelled by ascertainment and confirmation bias.

And, most importantly for The Hidden Half, the book these excerpts are from, we try to boil all of this down into iron laws. The “noise” that inevitably screws up these simple heuristics is willed away or ignored, to be settled later.

But it’s here that author Michael Blastland really shines – in a plea to embrace the beauty of that which confounds our attempts at simplification.

I’m only a couple of chapters in but it’s already a rollicking ride.

I’ve no desire to dismiss or discourage genuine, careful and humble efforts to understand, and no desire either to knock down robust houses of brick alongside the mansions of straw. It would be easy, but deluded, to see this book as part of an anti-science cynicism that says everything is uncertain, and therefore nothing can be done. I reject that view entirely. On the contrary, I want more robust evidence precisely so that our decisions and actions can be more reliable. I sympathize entirely with how difficult it is to do that well. I applaud those who devote themselves to the problem conscientiously and carefully. This is why we must recognize our limitations, try to understand how they arise, tread more carefully and test what we know vigorously. It was once said that at certain times the world is over-run by false scepticism, but of the true kind there can never be enough. This book aspires to the true kind. The goal is not cynicism; it is to do better.

As always my emphasis.

If you’re going to change the world, you must reflect it first

I find taking public transport or hopping a plane immensely stressful. Not because of the shoddy infrastructure, waiting around, or poor service. Because I’m 6′4″ with disproportionately long legs in a world built by people who aren’t.

As I continue to read Coders, I’m increasingly worried about how this same phenomenon will play out in a world full of algorithmic black boxes. Code so complex and systems so arcane that even their creators struggle to understand them.

Techies love to talk about scale and putting their creations in front of millions. But for this to work they themselves need to be drawn from a representative pool.

Otherwise you get self-driving cars that are more likely to hit black people. Or image recognition that thinks black people are gorillas.

…then Alciné scrolled over to a picture of himself and a friend, in a selfie they’d taken at an outdoor concert: She looms close in the view, while he’s peering, smiling, over her right shoulder. Alciné is African American, and so is his friend. And the label that Google Photos had generated? “Gorillas.” It wasn’t just that single photo, either. Over fifty snapshots of the two from that day had been identified as “gorillas.”

This isn’t only a Google problem. Or even a Silicon Valley problem. There are also stories of algorithms trained in China and South Korea that have trouble recognising Caucasian faces.

As a journalist with a diverse ethnic and cultural background, I had trouble understanding why my editors took so much convincing to run foreign stories. With a family spread around the globe, I could see myself in the Rohingya as much as in an Australian farmer.

These issues are linked – what we value, notice and think of as “normal” is all informed by our personal stories. If you grow up or work in a monoculture, that will influence the issues you see, the solutions you propose and the contingencies you plan for.

But the world isn’t a monoculture. There are 6′4″ people who would like to ride the bus. There will be people who aren’t like you but who need to cross the street safely, or be judged fairly.

Who will be deeply offended by racial epithets, epithets that are themselves linked to the very reason they aren’t represented in a database.

If you’re going to try to change the world for the better, you need to be of the world. There will always be edge cases, but without diversity they will be systemic. They will be disastrous.

…why couldn’t Google’s AI recognize an African American face? Very likely because it hadn’t been trained on enough of them. Most data sets of photos that coders in the West use for training face-recognition are heavily white, so the neural nets easily learn to make nuanced recognitions of white people—but they only develop a hazy sense of what black people look like.

As always my emphasis.
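The first step out of that hole is almost embarrassingly mundane, and I’d hope (though can’t confirm) it’s now routine: look at who is actually in the training set before you train on it. Here’s a minimal sketch of my own, assuming a hypothetical metadata CSV with a self-reported demographic column:

```python
# Minimal sketch of my own (not from Coders): before training a face-recognition
# model, tally how the training photos are spread across a demographic column in
# a metadata file. The file name and column name here are assumptions.
import csv
from collections import Counter

def group_counts(metadata_path: str, column: str = "demographic") -> Counter:
    """Count how many training photos fall into each demographic group."""
    counts = Counter()
    with open(metadata_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get(column, "unknown")] += 1
    return counts

if __name__ == "__main__":
    counts = group_counts("training_metadata.csv")
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group}: {n} photos ({n / total:.1%} of the training set)")
```

If one group makes up a sliver of the photos, you already know where the model’s “hazy sense” is going to be.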

Tab dump

Interesting research, articles and videos in no particular order.


Why techies think they can change the world

People who excel at programming, notes the coder and tech-culture critic Maciej Cegłowski, often “become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.”

This is from Coders, a book I only just downloaded but am absolutely tearing through.

The subtitle is “how software programmers think, and how their thinking is changing our world”, which is a clue to what Cegłowski is referring to.

When you’re writing code you’re trying to break a process down: to first principles, and then into easy steps as you go along.

You build it back up in an environment over which you have a huge amount of control, one that thrives on trial, error and iteration.

Where something usually either works or breaks obviously. Everything is very structured and built upon logic.

But by this point you’ve also abstracted so much you can trick yourself into thinking you’ve mastered all the nuances, not just how to get from A to B.
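For the non-coders, here’s a trivial sketch of my own, with an invented task and invented names, of what that breaking-down-and-building-back-up looks like: small steps that each either work or fail loudly.

```python
# A trivial sketch of my own (not from Coders): the coder's habit of breaking
# a fuzzy task into small, explicit steps that each either work or fail loudly.
# The task, file name and field names are invented purely for illustration.
import json

def load_orders(path: str) -> list[dict]:
    """Step 1: read the raw input. Fails loudly if the file is missing."""
    with open(path) as f:
        return json.load(f)

def filter_overdue(orders: list[dict]) -> list[dict]:
    """Step 2: keep only the orders flagged as overdue."""
    return [order for order in orders if order.get("overdue")]

def format_reminder(order: dict) -> str:
    """Step 3: turn one order into a one-line reminder."""
    return f"Reminder: order {order['id']} for {order['customer']} is overdue."

def run(path: str) -> list[str]:
    """Compose the small steps back into the original fuzzy task."""
    return [format_reminder(order) for order in filter_overdue(load_orders(path))]
```

Each step is testable on its own and the whole thing is just the steps glued back together. But nothing in it tells you whether nagging customers was a good idea in the first place, which is exactly the nuance it’s easy to convince yourself you’ve mastered.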

It’s also an alluring way of thinking, one you begin applying to other problems in your life, in a similar way to how you can start thinking in another language if you’re sufficiently steeped in it.

This is a fantastic book so far. Hope to post some more.

As always my emphasis.
