As I slowly wrap up The Creativity Code by Marcus du Sautoy, this paragraph makes me consider how our training and experience shape, and in some sense even limit, our world:
Various attempts at learning jazz have taught me that there is a puzzle element to a good improvisation. Generally a jazz standard has a set of chords that change over the course of a piece. The task of the trumpeter is to trace a line that fits the chords as they change. But your choice also has to make sense from note to note, so playing jazz is really like tracing a line through a two-dimensional maze. The chords determine the permissible moves vertically, and what you’ve just played determines the moves horizontally. As jazz gets freer, the actual chord progressions become more fluid and you have to be sensitive to your pianist’s possible next move, which will again be determined by the chords played to date. A good improviser listens and knows where the pianist is likely to head next.
Of course, this is written by a mathematician, and mathematics is a subject of the book. And this grab comes amid an exploration of music and algorithms. But the explicit mathematical digression in the midst of this musical romp still sticks out.
I think it’s because I do this all the time (I’m pretty sure we all do). Just as du Sautoy pulls music through his mathematical lens (and vice versa), we are constantly filtering and analogising. It shapes our world.
As a journalist I have a hard time ignoring the decisions made in stories. Wondering at other angles, or how the medium itself (text, audio, video, etc.) inherently limits choices.
In that sense my experience has me constantly stuck in the role of participant. Viewing through a lens of construction rather than strict consumption. I consume stories as one of a number of options, as a version rather than a totality.
Perhaps we all do this when consuming news. But I’ve made these exact decisions thousands of times. It’s hard not to envision the dirty carpet of the newsroom, the white walls above my desk, the mild panic as deadline approaches. To wonder how the availability of talent, the domain knowledge of the reporter, and any number of other factors pushed and pulled on what’s before me.
Similarly with economics. After university I have comparative advantage and opportunity cost tattooed on my brain. I’m constantly searching for impacts at the margins. I reason under a cloud of ceteris paribus.
Your training, what you do every day, equips you with easy heuristics. But it can slowly carve grooves in your thinking. You mustn’t let it control where you end up.
The trick is to be aware of it, and, hopefully, leverage it as du Sautoy has. For greater understanding. Not to be sucked into thinking this is all there is.
As always, my emphasis.
One of the most interesting observations in Radical Markets is that artificial intelligence is (at least currently) beholden to the labour of humans. AI requires us to produce, and even process and mark up, the data used to train them:
…AIs are not actually the free-standing replacement for human labor they appear to be. They are trained with and learn from human data. Thus AI, just as much as fields or factories, offers a critical role for ordinary human labor—as suppliers of data, or what we will call data as labor. Failing to recognize data as labor could thus create what Lanier calls “fake unemployment,” where jobs dry up not because humans are not useful but because the valuable inputs they supply are treated as by-products of entertainment rather than as socially valued work.
This challenges the fear that humans are bound to be obsoleted, or even completely displaced, by computers. But a couple of chapters in The Creativity Code by Marcus du Sautoy make me wonder how else humans and computers will work together.
Chess players have been teaming up with computers for a while. But the stories du Sautoy tells of AlphaGo are something else. AlphaGo was famously the first computer program to beat a professional player at the board game Go.
Go is thousands of years old and was thought too complex for computers to master. But not only did AlphaGo win, many of its moves shocked its opponent, commentators and spectators. Some were downright bizarre, mocked, and even described as “alien”.
AlphaGo had taught itself to play Go. And, as its moves were analysed, it also taught the rest of us:
AlphaGo had taught the world a new way to play an ancient game. Analysis since the match has resulted in new tactics. The fifth line is now played early on, as we have come to understand that it can have big implications for the endgame. AlphaGo has gone on to discover still more innovative strategies. DeepMind revealed at the beginning of 2017 that its latest iteration had played online anonymously against a range of top-ranking professionals…Those games are now regarded as a treasure trove of new ideas.
I really like the “alien” description. Because while AlphaGo learned from databases of tens of thousands of Go games, it also played millions of games against itself. There, through trial and error, it came up with moves we hadn’t seen before.
The trouble with modern Go is that conventions had built up about ways to play… But by breaking those conventions AlphaGo had cleared the fog and revealed an even higher peak…
The humans, too, had been trained on a store of past games. By playing against itself, AlphaGo broke out of those ruts. And a version of the algorithm that wasn’t trained on past games at all became even more bizarre and unbeatable.
Just think of the possibilities when taken out of the narrow problem of games.
DeepMind now has an even better algorithm that can thrash the original version of AlphaGo. This algorithm circumvented the need to be shown how humans play the game… It was no longer constrained by the way humans think and play. Within three days of training, in which time it played 4.9 million games against itself, it was able to beat by 100 games to nil the version of AlphaGo that had defeated Lee Sedol. What took humans 3000 years to achieve, it did in three days. By day forty it was unbeatable.
As always, my emphasis.
I’m re-reading Radical Markets by Eric Posner and Glen Weyl for a project I’m working on. And I have been struck by this passage in a new foreword by Vitalik Buterin and Jaron Lanier:
It is important to understand the proposed radical markets in the spirit in which they are offered. A mechanism cannot become the center of human civilization but must serve as a tool in the context of civilization. That point should be implicit, since many methods are proposed here; obviously, the intent is not to make any one of them singularly dominant.
If you didn’t catch the Radical Markets hype a year ago, it’s essentially an argument to inject market mechanisms, and our greater knowledge of how markets work, into more aspects of society. Such as introducing quadratic voting in elections to better measure the intensity of preferences.
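The quadratic part is simple enough to sketch: casting n votes on an issue costs n² “voice credits”, so expressing intensity gets expensive fast. A toy illustration (the budget, ballot format and function names are my own, not from the book):

```python
# Toy quadratic voting: each voter has a budget of "voice credits",
# and casting n votes on an issue costs n**2 credits. The quadratic
# cost makes intensity expensive, so voters spread credits roughly
# in proportion to how much they care.

def vote_cost(votes: int) -> int:
    """Credits required to cast `votes` votes on a single issue."""
    return votes ** 2

def tally(ballots: list[dict[str, int]], budget: int = 100) -> dict[str, int]:
    """Sum votes per issue, rejecting any ballot over its credit budget."""
    totals: dict[str, int] = {}
    for ballot in ballots:
        spent = sum(vote_cost(v) for v in ballot.values())
        if spent > budget:
            raise ValueError(f"ballot spends {spent} credits, budget is {budget}")
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# One voter who cares intensely about transit, one who spreads out:
ballots = [
    {"transit": 9, "parks": 2},   # 81 + 4 = 85 credits
    {"transit": -3, "parks": 5},  # 9 + 25 = 34 credits
]
print(tally(ballots))  # {'transit': 6, 'parks': 7}
```

Note how the first voter’s nine votes for transit cost 81 of their 100 credits; intensity is revealed because it has a price.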
More spectacularly, they propose perpetually auctioning the use rights of private property: essentially, allowing anyone to bid on any property at any time. The idea (among other things) is to better capture the value of these assets in the tax system.
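The perpetual-auction idea is also easy to caricature in code: owners publicly declare a price for their asset, pay a recurring tax on that declared price, and must sell to anyone who meets it. A toy sketch, with an illustrative 7% rate of my own choosing:

```python
# Toy self-assessed property tax with forced sale: declare a price,
# pay tax on it, and anyone who offers that price gets the asset.

from dataclasses import dataclass

@dataclass
class Asset:
    owner: str
    declared_price: float

    def annual_tax(self, rate: float = 0.07) -> float:
        """Tax owed on the self-assessed valuation (rate is illustrative)."""
        return self.declared_price * rate

    def buy(self, buyer: str, offer: float) -> bool:
        """Any offer at or above the declared price forces a sale."""
        if offer >= self.declared_price:
            self.owner = buyer
            self.declared_price = offer  # new owner re-declares at purchase price
            return True
        return False

house = Asset(owner="Alice", declared_price=500_000)
print(house.annual_tax())         # 35000.0
print(house.buy("Bob", 400_000))  # False: below the declared price
print(house.buy("Bob", 500_000))  # True: the sale is forced
print(house.owner)                # Bob
```

Even the toy makes the squeeze visible: declare low and you pay less tax but invite a forced sale; declare high and the tax captures the difference.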
There are problems with some of these ideas, initially at least. Without an initial reallocation of capital, for instance, the removal of certain property rights could lead to more inequality. And many people desperately need the stability afforded them by property ownership.
But I really like the argument that we need to consider these ideas within context rather than turning them into straw men. We need to think of them as mechanisms that could be strategically employed within an established political economy.
Especially as our discourse has sped up, and more diverse voices have been added, it is tempting to summarily dismiss new ideas because they aren’t perfect. Because there are negative consequences that hadn’t been considered.
I like Ryan Avent’s reading of Radical Markets as a work of political philosophy. On the need to consider radical solutions to big problems. As an appeal to consider market mechanisms in areas that we currently don’t.
Again from Buterin and Lanier:
Abstractions can play out differently according to context. Any mechanism can be turned into an instrument of violence if it is overly amplified and drowns out every other process in a society. And yet without better mechanisms, we are doomed to stumble around, failing to address the complexities of our times.
None of the rules in society are natural. They’ve all been contrived. The questions are why and by whom. Injecting markets into more areas of society may make sense. Or maybe the inverse.
New ideas may not work as panaceas, but may do well in specific situations or after some adaptation. Perhaps it’s time for something more targeted.
As always, my emphasis.
Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given a slight change, the barest deviation from what a script was expecting, the whole thing shuts down.
If you’re lucky (and have prepared ahead of time) it might throw out an error message. But mostly it sits and sulks until whatever exception to the rules you’ve given it has been fixed.
Which is partly what makes me pessimistic about things like autonomous cars. Here’s another grab from You Look Like a Thing and I Love You by Janelle Shane:
Our world is too complicated, too unexpected, too bizarre for an AI to have seen it all during training. The emus will get loose, the kids will start wearing cockroach costumes, and people will ask about giraffes even when there aren’t any present. AI will misunderstand us because it lacks the context to know what we really want it to do.
I now have several scripts running every day, peppered with code asking it to pretty-please keep going if something goes wrong. It’s a tangled web of counterfactual logic, mostly dreamed up after something actually has gone wrong. Most days it makes it. But often it doesn’t.
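Stripped of specifics, most of that web reduces to a pattern like this (the function names and retry counts are mine, not from any actual script of mine):

```python
# The "pretty-please keep going" wrapper: catch the failure, log it,
# retry a couple of times, and only then give up without crashing
# the rest of the run.

import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("daily_script")

def keep_going(task, retries: int = 3, delay: float = 1.0):
    """Run `task`, retrying on any exception rather than crashing the run."""
    for attempt in range(1, retries + 1):
        try:
            return task()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            time.sleep(delay)
    log.error("giving up after %d attempts", retries)
    return None
```

The catch, of course, is that the `except` only anticipates failures I’ve already seen; the next surprise sails straight through.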
Of course autonomous cars aren’t as bad as my hard-coded logic. Part of the point of machine learning is precisely to avoid having to come up with all the steps and ass-covering required to make code tackle a complex and multifaceted problem.
But we’ve now seen so many cases where it just doesn’t work. Because the same problems apply when it comes to training the algorithms.
The real world is so much more wild and malleable than the relatively safe cyberspace my code calls home. The people tackling these problems are obviously far smarter and more experienced than me, but is that enough?
All sorts of things could change and mess with an AI. As I mentioned in an earlier chapter, road closures or even hazards like wildfires might not deter an AI that sees only traffic from recommending what it thinks is an attractive route. Or a new kind of scooter could become popular, throwing off the hazard-detection algorithm of a self-driving car. A changing world adds to the challenge of designing an algorithm to understand it.
I suspect this post will be outdated incredibly fast. But it’s also likely that our wildest technological dreams will be achieved less by computers being “smarter” and more through narrowing the problem. Making the world safer. Because code is fragile.
As always, my emphasis.