Code is fragile

Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given the barest deviation from what a script was expecting, the whole thing shuts down.

If you’re lucky (and have prepared ahead of time) it might throw an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.
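A minimal sketch of what that fragility looks like (the input values here are made up): one unexpected field and the whole run dies.

```python
# A script that assumes every field it receives is a number.
rows = ["12.5", "7.0", "N/A", "3.2"]  # made-up input; one stray "N/A"
total = sum(float(r) for r in rows)   # raises ValueError on "N/A" and halts
print(total)                          # never reached
```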

Which is partly what makes me pessimistic about things like autonomous cars. Here’s another grab from You Look Like a Thing and I Love You by Janelle Shane:

Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.

I now have several scripts running every day, peppered with code asking them to pretty-please keep going if something goes wrong. It’s a tangled web of conditional logic, mostly dreamed up after something actually has gone wrong. Most days they make it. But often they don’t.
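Concretely, the pretty-please pattern looks something like this sketch, where fetch_data and process are stand-ins for whatever the real scripts do:

```python
import logging

logging.basicConfig(level=logging.INFO)

def fetch_data(source):
    # Stand-in for whatever I/O the real script does.
    if source == "flaky":
        raise ConnectionError("upstream changed again")
    return f"data from {source}"

def process(data):
    # Stand-in for the actual work.
    logging.info("processed %s", data)

def run_daily_job(sources):
    """Try each source; log failures and keep going rather than dying."""
    for source in sources:
        try:
            process(fetch_data(source))
        except Exception:
            # Pretty-please keep going: record what broke and move on.
            logging.exception("failed on %r, skipping", source)

run_daily_job(["stable", "flaky", "also-stable"])
```

Every one of those catch-alls is a bet that nothing worse is hiding behind the failure, which is exactly the tangle described above.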

Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex and multifaceted problem.

But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.

The real world is so much wilder and more malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?

All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.

I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem. Making the world safer. Because code is fragile.

As always, my emphasis.