Everything is complicated

The impossible part of reading the news and pontificating about public policy is putting yourself in others’ shoes. This isn’t to say we don’t do it. We all do it, all the time. We’re just terrible at it.

I touched on this when reading and writing about The Hidden Half. But it’s coming out again as I read a most intriguing book on social science called Everything is Obvious by Duncan Watts.

What I’m talking about is the tendency to try to understand why people behave a certain way. Or to dream up interventions to push them to behave a certain way. It almost always involves reasoning by analogy, which is something I’m very guilty of (just read basically anything on this blog).

This kind of reasoning plays into a false notion that behaviour is the predictable outcome of certain inputs. And, as Watts points out, even if this is so, we’re terrible at recognising and weighting all the inputs in our own decisions, let alone in everyone else’s.

Rationalizing human behavior, however, is precisely an exercise in simulating, in our mind’s eye, what it would be like to be the person whose behavior we are trying to understand. Only when we can imagine this simulated version of ourselves responding in the manner of the individual in question do we really feel that we have understood the behavior in question. So effortlessly can we perform this exercise of “understanding by simulation” that it rarely occurs to us to wonder how reliable it is…

…our mental simulations have a tendency to ignore certain types of factors that turn out to be important. The reason is that when we think about how we think, we instinctively emphasize consciously accessible costs and benefits such as those associated with motivations, preferences, and beliefs—the kinds of factors that predominate in social scientists’ models of rationality.

Watts goes on to cite a laundry list of implicit factors that shape our decision making, from defaults to various kinds of priming to faulty memory and reasoning. These biases and gaps are now well known, but it’s interesting to reflect first on how little we understand about ourselves.

…although it may be true that I like ice cream as a general rule, how much I like it at a particular point in time might vary considerably, depending on the time of day, the weather, how hungry I am, and how good the ice cream is that I expect to get. My decision, moreover, doesn’t depend just on how much I like ice cream, or even just the relation between how much I like it versus how much it costs. It also depends on whether or not I know the location of the nearest ice cream shop, whether or not I have been there before, how much of a rush I’m in, who I’m with and what they want, whether or not I have to go to the bank to get money, where the nearest bank is, whether or not I just saw someone else eating an ice cream, or just heard a song that reminded me of a pleasurable time when I happened to be eating an ice cream, and so on. Even in the simplest situations, the list of factors that might turn out to be relevant can get very long very quickly. And with so many factors to worry about, even very similar situations may differ in subtle ways that turn out to be important. When trying to understand—or better yet predict—individual decisions, how are we to know which of these many factors are the ones to pay attention to, and which can be safely ignored?

Now imagine doing this over a population.

As always my emphasis

Code is fragile

Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given a slight change, the barest deviation from what a script was expecting, the whole thing shuts down.

If you’re lucky (and have prepared ahead of time) it might throw out an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.
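Something like this contrived little sketch is all it takes (the report contents and column name are made up for illustration):

```python
import csv
import io
from datetime import datetime

# A well-behaved report, followed by the "barest deviation": the same kind of
# date, written in a different format.
report = io.StringIO("report_date,total\n2020-03-01,42\n01/03/2020,51\n")

for row in csv.DictReader(report):
    # Fine for the first row; raises ValueError on the second because the
    # date no longer matches the expected format. Without a try/except,
    # that's the end of the run.
    print(datetime.strptime(row["report_date"], "%Y-%m-%d"))
```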

Which is partly what makes me pessimistic about things like autonomous cars. Here’s another grab from You look like a thing and I love you by Janelle Shane:

Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.

I now have several scripts running every day, peppered with code asking it to pretty-please keep going if something goes wrong. It’s a tangled web of counterfactual logic, mostly dreamed up after something actually has gone wrong. Most days it makes it. But often it doesn’t.
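The pattern looks something like this – a minimal sketch, with load_source and process standing in for whatever a real script actually does:

```python
import logging

def load_source(name):
    # Stand-in for the real fetch/parse step.
    if name == "bad_feed":
        raise ValueError("unexpected format")
    return {"name": name}

def process(data):
    print("processed", data["name"])

for source in ["feed_a", "bad_feed", "feed_b"]:
    try:
        process(load_source(source))
    except Exception:
        # Log it, shrug, and move on: one bad input shouldn't kill the whole
        # daily run. Dreamed up, of course, only after bad_feed actually broke
        # something.
        logging.exception("Skipping %s after an error", source)
```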

Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex and multifaceted problem.

But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.

The real world is so much more wild and malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?

All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.

I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem – making the world safer and more predictable for the code. Because code is fragile.

As always my emphasis

It’s not malice but incompetence that’ll kill us

I’ve been reading You look like a thing and I love you by Janelle Shane. And, honestly, it’s some of the best skewering of Artificial Intelligence I’ve come across. But amid the funny stories of AI incompetence – only recognising sheep when they’re in fields, thinking a goat in a tree is a giraffe etc. – there’s a serious point about the impact of these limitations.

As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences far beyond the merely inconvenient. Recommendation algorithms embedded in YouTube point people toward ever more polarizing content, traveling in a few short clicks from mainstream news to videos by hate groups and conspiracy theorists…

…The algorithms that make decisions about parole, loans, and resume screening are not impartial but can be just as prejudiced as the humans they’re supposed to replace—sometimes even more so. AI-powered surveillance can’t be bribed, but it also can’t raise moral objections to anything it’s asked to do. It can also make mistakes when it’s misused—or even when it’s hacked. Researchers have discovered that something as seemingly insignificant as a small sticker can make an image recognition AI think a gun is a toaster, and a low-security fingerprint reader can be fooled more than 77 percent of the time with a single master fingerprint.

People are generally quick to chalk up to malice what is better explained by incompetence. The righteous outrage in the papers is full of pinstriped fat cats rather than honest mistakes and fallible processes (sometimes rightly so, but probably not as often as portrayed).

It seems our fears for the future fall into this same trap. What truly scares me about AI is generally the outcome of a perfectly running system. Skynet murderbots conquering the world. Or battalions of worker drones tilting the balance further in favour of capital and the technologically competent.

But my recent studies of machine learning, and this book specifically, make me wonder whether the true worry shouldn’t be bias and incompetence. Not a murderous bot. Rather one that just isn’t ready for prime time. In that sense the problem isn’t the technology itself, but the underlying systems that deployed it (I didn’t explicitly write “capitalism” here, but I wouldn’t fault you for reading it in 😇).

If there’s malice in the system, that’s where to find it.

When people think of AI disaster, they think of AIs refusing orders, deciding that their best interests lie in killing all humans, or creating terminator bots. But all those disaster scenarios assume a level of critical thinking and a humanlike understanding of the world that AIs won’t be capable of for the foreseeable future. As leading machine learning researcher Andrew Ng put it, worrying about an AI takeover is like worrying about overcrowding on Mars.

One of Shane’s principles of “AI weirdness” is that it does not really understand the problem you want it to solve. Another is that it will take the path of least resistance to achieve what you tell it to.

When you’re using AI to play a game this can lead to some obnoxious cheating. When you’re deploying it in the real world this can result in further entrenching bias, hierarchy and revealed preference. Often in ways we don’t understand or expect. Particularly if the implementers aren’t aware of the underlying bias in the system they are a part of.
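Here’s a toy sketch of how that plays out – entirely my own construction, not Shane’s – where a spurious shortcut (sheep only ever photographed in grassy fields) lets a model skip the harder, real signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
sheep = rng.integers(0, 2, n)              # ground truth label
woolly = sheep + rng.normal(0, 0.5, n)     # the "real" signal, but noisy
grass = sheep.astype(float)                # spurious shortcut: perfect in training
X = np.column_stack([woolly, grass])

model = LogisticRegression(max_iter=1000).fit(X, sheep)
print(model.coef_)                         # the grass weight dwarfs the woolly one

# A sheep indoors: clearly woolly, no grass in sight.
print(model.predict([[1.0, 0.0]]))         # likely "not sheep" -- the shortcut won
```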

The problem with designing an AI to screen candidates for us: we aren’t really asking the AI to identify the best candidates. We’re asking it to identify the candidates that most resemble the ones our human hiring managers liked in the past. That might be okay if the human hiring managers made great decisions. But most US companies have a diversity problem, particularly among managers and particularly in the way that hiring managers evaluate resumes and interview candidates. All else being equal, resumes with white-male-sounding names are more likely to get interviews than those with female- and/or minority-sounding names. Even hiring managers who are female and/or members of a minority themselves tend to unconsciously favor white male candidates. Plenty of bad and/or outright harmful AI programs are designed by people who thought they were designing an AI to solve a problem but were unknowingly training it to do something entirely different.

As always my emphasis

What are the truly difficult questions?

A couple of years ago I went on a three-day trek that all but shattered my love of hiking. It was a hilly circuit, slippery and hot. But on the third day, as we slowly ran out of lollies, food, patience and even water, we were repeatedly greeted by false summits.

The end of the trail felt like a bone dangling from the end of a stick, meant to trick a hungry dog. This is the image that came to mind as I read this next grab from The Book of Why by Judea Pearl.

The successes of deep learning have been truly remarkable and have caught many of us by surprise. Nevertheless, deep learning has succeeded primarily by showing that certain questions or tasks we thought were difficult are in fact not. It has not addressed the truly difficult questions that continue to prevent us from achieving humanlike AI.

Artificial Intelligence has famously had a few “winters”, as what seemed like fundamental breakthroughs petered out. Similarly, the list of “transformational” technologies that failed to make a real dent is incredibly long.

For the non-technical among us it can be very easy to mistake these kinds of false summits for fundamental transformation. Especially as they often do represent some progress, finding their way into products we actually use or glimpse on breakfast television.

Part of the problem is framing. Especially as the incentive for so many is to hype. But it’s also a focus on outcomes rather than process.

The torture of that trek came from us focusing on the end rather than the journey. We lost track of the scenery, fresh air and each other.

We get tricked by technological false summits in the same way, by focusing so intently on what the technology can do. But the real power comes from looking at the process – both the roadblocks and the potential revealed by asking what the truly difficult questions are.

As always my emphasis

Counterfactuals and human progress

The history of human progress is often viewed through significant events, movements and achievements. But what if we look at it as a story of how we think about the world?

Counterfactuals are the building blocks of moral behavior as well as scientific thought. The ability to reflect on one’s past actions and envision alternative scenarios is the basis of free will and social responsibility. The algorithmization of counterfactuals invites thinking machines to benefit from this ability and participate in this (until now) uniquely human way of thinking about the world.

I came across these sections in the early parts of The Book of Why by Judea Pearl. It’s a book on the science of causality. Here he’s exploring some of the differences between humans and the current crop of “thinking machines” – artificial intelligence and other algorithmic “learning” from data.

I’ve always loved counterfactuals as a rhetorical device, a sort of stripped-down model. We can play with what-ifs and construct something together. Try to tease out causality and significance, however elementary.

But it’s interesting to consider this as a driver of human evolution. As a methodology for iteration that doesn’t appear to be possessed by other animals or modern computer models.

Within 10,000 years after the Lion Man’s creation, all other hominids (except for the very geographically isolated Flores hominids) had become extinct. And humans have continued to change the natural world with incredible speed, using our imagination to survive, adapt, and ultimately take over. The advantage we gained from imagining counterfactuals was the same then as it is today: flexibility, the ability to reflect and improve on past actions, and, perhaps even more significant, our willingness to take responsibility for past and current actions

Many of us have a tendency to favour the concrete and eschew the hypothetical. But risk aversion, imagination and learning from experience – among other drivers of progress – by necessity recognise the possibility of other outcomes. They are built on counterfactual thinking.

And, maybe more interestingly when we compare human intelligence to the kind we try to create, it may be somewhat innate?

…Counterfactual reasoning, which deals with what-ifs, might strike some readers as unscientific. Indeed, empirical observation can never confirm or refute the answers to such questions. Yet our minds make very reliable and reproducible judgments all the time about what might be or might have been. We all understand, for instance, that had the rooster been silent this morning, the sun would have risen just as well. This consensus stems from the fact that counterfactuals are not products of whimsy but reflect the very structure of our world model.
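For a flavour of what “algorithmizing” a counterfactual might look like, here’s a minimal sketch of that rooster example as a toy structural causal model – my own construction and variable names, not Pearl’s:

```python
def world(dawn, force_rooster_silent=False):
    # The structure of the model: the rooster responds to dawn (unless we
    # intervene), and the sun responds to dawn -- not to the rooster.
    rooster_crows = dawn and not force_rooster_silent
    sun_rises = dawn
    return {"rooster_crows": rooster_crows, "sun_rises": sun_rises}

# The morning we actually observed: dawn came, the rooster crowed, the sun rose.
print(world(dawn=True))

# The counterfactual: the same morning, but we force the rooster to be silent.
# The sun rises "just as well", because it never depended on the rooster.
print(world(dawn=True, force_rooster_silent=True))
```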

As always my emphasis