The impossible part of reading the news and pontificating about public policy is putting yourself in others’ shoes. This isn’t to say we don’t do it. We all do it, all the time. We’re just terrible at it.
What I’m talking about is the tendency to try to understand why people behave a certain way, or to dream up interventions to force them to behave differently. It almost always involves reasoning by analogy, something I’m very guilty of (just read basically anything on this blog).
This kind of reasoning plays into a false notion that behaviour is the predictable outcome of certain inputs. And, as Watts points out, even if that were so, we’re terrible at recognising and weighting all the inputs in our own decisions, let alone everyone else’s.
Rationalizing human behavior, however, is precisely an exercise in simulating, in our mind’s eye, what it would be like to be the person whose behavior we are trying to understand. Only when we can imagine this simulated version of ourselves responding in the manner of the individual in question do we really feel that we have understood the behavior in question. So effortlessly can we perform this exercise of “understanding by simulation” that it rarely occurs to us to wonder how reliable it is…
…our mental simulations have a tendency to ignore certain types of factors that turn out to be important. The reason is that when we think about how we think, we instinctively emphasize consciously accessible costs and benefits such as those associated with motivations, preferences, and beliefs—the kinds of factors that predominate in social scientists’ models of rationality.
Watts goes on to cite a laundry list of implicit factors that shape our decision making, from defaults to various kinds of priming, faulty memory, and flawed reasoning. These biases and gaps are now well known, but it’s worth pausing on how little we understand about ourselves in the first place.
…although it may be true that I like ice cream as a general rule, how much I like it at a particular point in time might vary considerably, depending on the time of day, the weather, how hungry I am, and how good the ice cream is that I expect to get. My decision, moreover, doesn’t depend just on how much I like ice cream, or even just the relation between how much I like it versus how much it costs. It also depends on whether or not I know the location of the nearest ice cream shop, whether or not I have been there before, how much of a rush I’m in, who I’m with and what they want, whether or not I have to go to the bank to get money, where the nearest bank is, whether or not I just saw someone else eating an ice cream, or just heard a song that reminded me of a pleasurable time when I happened to be eating an ice cream, and so on. Even in the simplest situations, the list of factors that might turn out to be relevant can get very long very quickly. And with so many factors to worry about, even very similar situations may differ in subtle ways that turn out to be important. When trying to understand—or better yet predict—individual decisions, how are we to know which of these many factors are the ones to pay attention to, and which can be safely ignored?
Now imagine doing this over a population.
As always, emphasis mine.