Code is fragile

Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. The barest deviation from what a script was expecting and the whole thing shuts down.

If you’re lucky (and have prepared ahead of time) it might throw an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.

Which is partly what makes me pessimistic about things like autonomous cars. Here’s another grab from You Look Like a Thing and I Love You by Janelle Shane:

Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.

I now have several scripts running every day, peppered with code asking them to pretty-please keep going if something goes wrong. It’s a tangled web of conditional logic, mostly dreamed up after something has actually gone wrong. Most days it makes it. But often it doesn’t.
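The “pretty-please” pattern, for the curious, is mostly just try/except. Here’s a minimal sketch, with a hypothetical fetch_data step standing in for whatever happens to break on a given day:

```python
import logging

logging.basicConfig(level=logging.INFO)

def fetch_data(source: str) -> str:
    """Hypothetical daily step: a flaky API, a renamed column,
    a file that didn't land overnight..."""
    if source == "feed_b":  # simulate today's surprise failure
        raise ValueError("unexpected format")
    return f"{source}: ok"

for source in ["feed_a", "feed_b", "feed_c"]:
    try:
        print(fetch_data(source))
    except Exception as err:
        # Pretty-please keep going: note the failure and move on,
        # rather than letting one bad feed kill the whole run.
        logging.warning("skipping %s after error: %s", source, err)
```

The catch, of course, is that you usually only know to write the except clause after the exception has already bitten you once.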

Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex and multifaceted problem.

But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.

The real world is so much more wild and malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?

All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.

I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem. Making the world safer. Because code is fragile.

As always my emphasis.

If you’re going to change the world, you must reflect it first

I find taking public transport or hopping a plane immensely stressful. Not because of the shoddy infrastructure, waiting around, or poor service. Because I’m 6′4″ with disproportionately long legs in a world built by people who aren’t.

As I continue to read Coders, I’m increasingly worried about how this same phenomenon will play out in a world full of algorithmic black boxes. Code so complex and systems so arcane that even their creators struggle to understand them.

Techies love to talk about scale and putting their creations in front of millions. But for this to work they themselves need to be drawn from a representative pool.

Otherwise you get self-driving cars that are more likely to hit black people. Or image recognition that thinks black people are gorillas.

…then Alciné scrolled over to a picture of himself and a friend, in a selfie they’d taken at an outdoor concert: She looms close in the view, while he’s peering, smiling, over her right shoulder. Alciné is African American, and so is his friend. And the label that Google Photos had generated? “Gorillas.” It wasn’t just that single photo, either. Over fifty snapshots of the two from that day had been identified as “gorillas.”

This isn’t only a Google problem. Or even a Silicon Valley problem. There are also stories of algorithms trained in China and South Korea that have trouble recognising Caucasian faces.

As a journalist with a diverse ethnic and cultural background, I had trouble understanding why my editors took so much convincing to run foreign stories. With a family spread around the globe, I could see myself in the Rohingya as much as an Australian farmer.

These issues are linked – what we value, notice and think of as “normal” are all informed by our personal stories. If you grow up or work in a monoculture, that will influence the issues you see, the solutions you propose and contingencies you plan for.

But the world isn’t a monoculture. There are 6′4″ people who would like to ride the bus. There will be people who aren’t like you but need to cross the street safely, or be judged fairly.

Who will be deeply offended by racial epithets, which trace back to the same reasons they aren’t represented in a database.

If you’re going to try and change the world for the better, you need to be of the world. There will always be edge cases, but without diversity they will be systemic. They will be disastrous.

…why couldn’t Google’s AI recognize an African American face? Very likely because it hadn’t been trained on enough of them. Most data sets of photos that coders in the West use for training face-recognition are heavily white, so the neural nets easily learn to make nuanced recognitions of white people—but they only develop a hazy sense of what black people look like.

As always my emphasis.

Why techies think they can change the world

People who excel at programming, notes the coder and tech-culture critic Maciej Cegłowski, often “become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.”

This is from Coders, a book I only just downloaded but am absolutely tearing through.

The subtitle is “how software programmers think, and how their thinking is changing our world”, which is a clue to what Cegłowski is referring to.

When you’re writing code you’re trying to break a process down: first to first principles, then into easy steps as you go along.

You build it back up in an environment over which you have a huge amount of control, one that thrives on trial, error and iteration.

Where something usually either works or breaks obviously. Everything is very structured and built upon logic.

But by this point you’ve also abstracted so much you can trick yourself into thinking you’ve mastered all the nuances, not just how to get from A to B.

It’s also an alluring way of thinking, which you begin applying to other problems in your life, in much the same way you can start thinking in another language once you’re sufficiently steeped in it.

This is a fantastic book so far. Hope to post some more.

As always my emphasis.

The bots are already upon us

I finally reached a personal milestone this week and launched my own Twitterbot. It’s quite a simple bot, using a couple of Python libraries and guidance from Hannah Shaw to construct random sentences from a copy of A Tale of Two Cities.
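For the curious, a bot like this amounts to surprisingly little code. Here’s a minimal sketch of the general shape, assuming the markovify and tweepy libraries (not necessarily the ones I used) and placeholder credentials:

```python
import markovify  # assumed library: builds Markov-chain sentence models
import tweepy     # assumed library: a Python client for the Twitter API

# Build a sentence model from a plain-text copy of the novel
# (hypothetical local file, e.g. saved from Project Gutenberg).
with open("tale_of_two_cities.txt") as f:
    model = markovify.Text(f.read())

# Placeholder credentials: substitute your own app's keys.
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

# Generate a random sentence short enough to tweet, then post it.
sentence = model.make_short_sentence(280)
if sentence:
    client.create_tweet(text=sentence)
```

Run it on a schedule (cron, say) and that’s an entire bot in twenty-odd lines.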

But as I looked around at bots, trying to figure out what I might do as a coding challenge, I was stunned by the incredible creativity and use to which they have been put. They really show how powerful even small bits of logic can be.

There are so many examples of inane bots tweeting as the hours strike or every line of Shakespeare (on its fifth go round, apparently). And, of course, cats. You’ve also got the more nefarious kinds spreading disinformation or spam.

But then you’ve got bots digging into the wonderful archives of the National Library of Australia, surfacing newspapers from decades ago. And weird performance art (is it performance art when your code does the performing?) that people interact with.

Looking at the source code, I’ve yet to find one that is more than a couple of hundred lines long, and most seem a lot shorter than that. Bots are often spoken about in cataclysmic ways, but also as an abstract idea that hasn’t really arrived.

But here we have bots inserting themselves into, and augmenting, many people’s daily lives. Though simple, they provide joy, distraction, interaction and even community.

Check out this Mary Queen of Scots bot. From an old article:

Besides fellow Catholic history nerds and scholars of the period, Queen Mary has attracted a fairly staggering audience among Scottish separatists, especially given the coming Independence Referendum in September. “Thanks to the astronomical rise of the Scottish National Party, anything against England or English policies usually garners massive support,” she says. “My Scottish Nationalist followers absolutely eat anything anti-English with a spoon. It’s a strange mixture of wonderful and frightening to see history take shape in that way.”

But easily my favourite bot is Every3Minutes. It tweets every three minutes to remind us that a person was sold every three minutes in the American South between 1820 and 1860.

Both a profound and devastating thing to be reminded of in a way that only machines can – regularly and persistently.