The impossible part of reading the news and pontificating about public policy is putting yourself in others’ shoes. This isn’t to say we don’t do it. We all do it, all the time. We’re just terrible at it.
What I’m talking about is the tendency to try to understand why people behave a certain way. Or to dream up interventions to force them to do so. It almost always involves reasoning by analogy, which is something I’m very guilty of (just read basically anything on this blog).
This kind of reasoning plays into a false notion that behaviour is the predictable outcome of certain inputs. And, as Watts points out, even if this is so, we’re terrible at recognising and weighting all the inputs in our own decisions, let alone everyone else’s.
Rationalizing human behavior, however, is precisely an exercise in simulating, in our mind’s eye, what it would be like to be the person whose behavior we are trying to understand. Only when we can imagine this simulated version of ourselves responding in the manner of the individual in question do we really feel that we have understood the behavior in question. So effortlessly can we perform this exercise of “understanding by simulation” that it rarely occurs to us to wonder how reliable it is…
…our mental simulations have a tendency to ignore certain types of factors that turn out to be important. The reason is that when we think about how we think, we instinctively emphasize consciously accessible costs and benefits such as those associated with motivations, preferences, and beliefs—the kinds of factors that predominate in social scientists’ models of rationality.
Watts goes on to cite a laundry list of implicit factors that shape our decision making, from defaults to various kinds of priming, faulty memory and faulty reasoning. These biases and gaps are now well known, but it’s interesting to first consider how little we understand about ourselves.
…although it may be true that I like ice cream as a general rule, how much I like it at a particular point in time might vary considerably, depending on the time of day, the weather, how hungry I am, and how good the ice cream is that I expect to get. My decision, moreover, doesn’t depend just on how much I like ice cream, or even just the relation between how much I like it versus how much it costs. It also depends on whether or not I know the location of the nearest ice cream shop, whether or not I have been there before, how much of a rush I’m in, who I’m with and what they want, whether or not I have to go to the bank to get money, where the nearest bank is, whether or not I just saw someone else eating an ice cream, or just heard a song that reminded me of a pleasurable time when I happened to be eating an ice cream, and so on. Even in the simplest situations, the list of factors that might turn out to be relevant can get very long very quickly. And with so many factors to worry about, even very similar situations may differ in subtle ways that turn out to be important. When trying to understand—or better yet predict—individual decisions, how are we to know which of these many factors are the ones to pay attention to, and which can be safely ignored?
Now imagine doing this over a population.
As always, my emphasis
Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given the barest deviation from what a script was expecting, the whole thing shuts down.
If you’re lucky (and have prepared ahead of time) it might throw out an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.
Which is partly what makes me pessimistic about things like autonomous cars. Here’s another grab from You Look Like a Thing and I Love You by Janelle Shane:
Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.
I now have several scripts running every day, peppered with code asking it to pretty-please keep going if something goes wrong. It’s a tangled web of counterfactual logic, mostly dreamed up after something actually has gone wrong. Most days it makes it. But often it doesn’t.
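The pattern I lean on is nothing fancier than a retry loop wrapped in try/except. A minimal sketch of that kind of ass-covering in Python (the function, the URL and the failure are made up for illustration):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("daily_script")


def fetch_data(url: str) -> str:
    """Stand-in for the fragile step: a network call, a file read, a scrape."""
    raise ConnectionError("the real world intervened")


def run_with_retries(url: str, attempts: int = 3, delay: float = 0.2):
    """Pretty-please keep going: retry a few times, log, then move on."""
    for attempt in range(1, attempts + 1):
        try:
            return fetch_data(url)
        except Exception as exc:  # broad on purpose: anything can go wrong
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            time.sleep(delay)
    log.error("giving up on %s; carrying on with the rest of the run", url)
    return None  # the caller has to cope with missing data


result = run_with_retries("https://example.com/data")
print(result)  # → None: the script survives, the data didn't arrive
```

The broad `except Exception` is deliberate: the whole point is surviving failure modes you haven’t imagined yet, at the cost of sometimes papering over real bugs.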
Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex and multifaceted problem.
But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.
The real world is so much more wild and malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?
All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.
I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem. Making the world safer. Because code is fragile.
As always, my emphasis
I’ve been reading You Look Like a Thing and I Love You by Janelle Shane. And, honestly, it’s some of the best skewering of Artificial Intelligence I’ve come across. But amid the funny stories of AI incompetence – only recognising sheep when they’re in fields, thinking a goat in a tree is a giraffe, etc. – there’s a serious point about the impact of these limitations.
As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences far beyond the merely inconvenient. Recommendation algorithms embedded in YouTube point people toward ever more polarizing content, traveling in a few short clicks from mainstream news to videos by hate groups and conspiracy theorists…
…The algorithms that make decisions about parole, loans, and resume screening are not impartial but can be just as prejudiced as the humans they’re supposed to replace—sometimes even more so. AI-powered surveillance can’t be bribed, but it also can’t raise moral objections to anything it’s asked to do. It can also make mistakes when it’s misused—or even when it’s hacked. Researchers have discovered that something as seemingly insignificant as a small sticker can make an image recognition AI think a gun is a toaster, and a low-security fingerprint reader can be fooled more than 77 percent of the time with a single master fingerprint.
People are generally quick to chalk up to malice what is better explained by incompetence. The righteous outrage in the papers is full of pinstriped fat cats rather than honest mistakes and fallible processes (sometimes rightly so, but probably not as often as portrayed).
It seems our fears for the future fall into this same trap. What truly scares me about AI is generally the outcome of a perfectly running system. Skynet murderbots conquering the world. Or battalions of worker drones tilting the balance further in favour of capital and the technologically competent.
But my recent studies of machine learning, and this book specifically, make me wonder whether the true worry shouldn’t be bias and incompetence. Not a murderous bot. Rather one that just isn’t ready for prime time. In that sense the problem isn’t the technology itself, but the underlying systems that deployed it (I didn’t explicitly write “capitalism” here, but I wouldn’t fault you for reading it in 😇).
If there’s malice in the system, that’s where to find it.
When people think of AI disaster, they think of AIs refusing orders, deciding that their best interests lie in killing all humans, or creating terminator bots. But all those disaster scenarios assume a level of critical thinking and a humanlike understanding of the world that AIs won’t be capable of for the foreseeable future. As leading machine learning researcher Andrew Ng put it, worrying about an AI takeover is like worrying about overcrowding on Mars.
One of Shane’s principles of “AI weirdness” is that it does not really understand the problem you want it to solve. Another is that it will take the path of least resistance to achieve what you tell it to.
When you’re using AI to play a game this can lead to some obnoxious cheating. When you’re deploying it in the real world this can result in further entrenching bias, hierarchy and revealed preference. Often in ways we don’t understand or expect. Particularly if the implementers aren’t aware of the underlying bias in the system they are a part of.
The problem with designing an AI to screen candidates for us: we aren’t really asking the AI to identify the best candidates. We’re asking it to identify the candidates that most resemble the ones our human hiring managers liked in the past. That might be okay if the human hiring managers made great decisions. But most US companies have a diversity problem, particularly among managers and particularly in the way that hiring managers evaluate resumes and interview candidates. All else being equal, resumes with white-male-sounding names are more likely to get interviews than those with female- and/or minority-sounding names. Even hiring managers who are female and/or members of a minority themselves tend to unconsciously favor white male candidates. Plenty of bad and/or outright harmful AI programs are designed by people who thought they were designing an AI to solve a problem but were unknowingly training it to do something entirely different.
As always, my emphasis
- Why our declining biblical literacy matters
- How a social network could save democracy from deadlock
- The bizarre social history of bed
- Machine Learning: An Applied Econometric Approach
- The Trouble with Journalism
- Crisis of Policy Reporting: Evidence From Australian Election Campaigns
- “Good” isn’t good enough
- What mormon family trees tell us about cancer
- Of a Disciplined Society
- Mathematicians Decode the Surprising Complexity of Cow Herds
- How the village feast paved the way to empires and economics
I’m not really someone whose daily activities fit neatly within the guidelines. You’re probably the same. I was standing in line at the bank yesterday with a woman whose do.
She needed to withdraw money. But just too much money to do it from an ATM. The bank, as is its wont, has invested in a battery of smart ATMs at the front of the branch. It now only staffs one of the six counters inside. The ATMs can do basically anything you would need of a teller – withdraw, move and deposit money etc.
But the ATMs have hard limits and absolutely no room for discretion.
The long line of disgruntled customers waiting to see that lonely bank teller attests to how many of us needed the flexibility of a human. Something, someone, who isn’t absolutely constrained by rules. Except, does this describe any of us anymore?
…since Economic Man is incapable of being morally load-bearing, he cannot be trusted. He will only work if incentivized by material benefit, so his behaviour must be watched like a hawk, and his rewards linked to the observed performance of contract-specified actions… Britain’s employers have been taught this in business schools and the consequence is manifest in the annual Jobs and Skills Survey. Twenty-five years ago, most people said they had enough autonomy to do their job properly; that has since dropped by 40 per cent. The reduction of workers to automata has resulted in a massive loss of job satisfaction, and with it of intrinsic motivation: it is hard to be loyal to an organization that manifestly distrusts you. It has also forfeited the good judgement that comes from using tacit knowledge – the expertise that can only be acquired through experience. By definition, this cannot be codified, specified, monitored and incentivized.
The metrification of daily life is well under way, already visible in the reduction of work to box ticking. There is some discretion left in the world, but I have little doubt that the necessary surveillance and analytical technology will grow in leaps and bounds. The MBAs demand it.
So what does that mean for those of us who live a deviant life? Who need to withdraw just too much money, or otherwise travel outside the norm? Probably a lot more time on the phone, talking to similarly constrained call centre workers.
As always, my emphasis