One of the most interesting observations in Radical Markets is that artificial intelligence is (at least currently) beholden to the labour of humans. AI requires us to produce, and even process and mark up, the data used to train it:

…AIs are not actually the free-standing replacement for human labor they appear to be. They are trained with and learn from human data. Thus AI, just as much as fields or factories, offers a critical role for ordinary human labor—as suppliers of data, or what we will call data as labor. Failing to recognize data as labor could thus create what Lanier calls “fake unemployment,” where jobs dry up not because humans are not useful but because the valuable inputs they supply are treated as by-products of entertainment rather than as socially valued work.

This challenges the fear that humans are bound to be made obsolete, or even completely displaced, by computers. But a couple of chapters in The Creativity Code by Marcus Du Sautoy make me wonder how else humans and computers will work together.

Chess players have been teaming up with computers for a while. But the stories Du Sautoy tells of AlphaGo are something else. AlphaGo was famously the first computer program to beat a professional at the board game Go.

Go is thousands of years old and was thought too complex for computers to master. But not only did AlphaGo win; many of its moves shocked its opponent, commentators and spectators. Some were downright bizarre, mocked, and even described as “alien”.

AlphaGo had taught itself to play Go. And, as its moves have been analysed, it has also taught the rest of us:

AlphaGo had taught the world a new way to play an ancient game. Analysis since the match has resulted in new tactics. The fifth line is now played early on, as we have come to understand that it can have big implications for the endgame. AlphaGo has gone on to discover still more innovative strategies. DeepMind revealed at the beginning of 2017 that its latest iteration had played online anonymously against a range of top-ranking professionals…Those games are now regarded as a treasure trove of new ideas.

I really like the “alien” description. Because while AlphaGo learned from databases of tens of thousands of Go games, it also played millions of games against itself, where, through trial and error, it came up with moves we hadn’t seen before.

The trouble with modern Go is that conventions had built up about ways to play… But by breaking those conventions AlphaGo had cleared the fog and revealed an even higher peak…

Those conventions had built up because the humans, too, had been trained on a store of past games. By playing against itself, AlphaGo broke out of those ruts. And a version of the algorithm that wasn’t trained on past games at all became even more bizarre, and even more unbeatable.
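Du Sautoy doesn’t show any code, but the trial-and-error self-play he describes can be sketched in miniature. Here is a toy version, assuming the simple game of Nim in place of Go (none of this is DeepMind’s actual method, which uses deep neural networks and tree search): two copies of the same agent play each other, the winner’s moves are reinforced, and no human games are ever consulted.

```python
import random

# Toy self-play learning, in the spirit of the trial-and-error process
# described above. Nim stands in for Go: 21 sticks, players alternate
# taking 1-3 sticks, and whoever takes the last stick wins.
ACTIONS = (1, 2, 3)

def legal_moves(sticks):
    return [a for a in ACTIONS if a <= sticks]

def play_game(Q, eps):
    """Play one self-play game; both sides share the value table Q.
    Returns the move history and the winning player (0 or 1)."""
    sticks, player, history = 21, 0, []
    while sticks > 0:
        moves = legal_moves(sticks)
        if random.random() < eps:    # explore: try something new
            action = random.choice(moves)
        else:                        # exploit: best move found so far
            action = max(moves, key=lambda a: Q.get((sticks, a), 0.0))
        history.append((player, sticks, action))
        sticks -= action
        winner = player              # whoever just took the last stick wins
        player = 1 - player
    return history, winner

def train(episodes=30000, eps=0.2, lr=0.1):
    """Monte Carlo self-play: the winner's moves are reinforced and the
    loser's punished. No human games are used; strategy emerges from
    play alone."""
    Q = {}
    for _ in range(episodes):
        history, winner = play_game(Q, eps)
        for player, sticks, action in history:
            reward = 1.0 if player == winner else -1.0
            key = (sticks, action)
            Q[key] = Q.get(key, 0.0) + lr * (reward - Q.get(key, 0.0))
    return Q
```

With enough episodes, the greedy policy read off from Q tends to converge on the classic “leave your opponent a multiple of four” strategy for this game, without ever being shown it. That is the same shape of story as AlphaGo’s, just in a game small enough to fit in a dictionary.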

Just think of the possibilities when taken out of the narrow problem of games.

DeepMind now has an even better algorithm that can thrash the original version of AlphaGo. This algorithm circumvented the need to be shown how humans play the game… It was no longer constrained by the way humans think and play. Within three days of training, in which time it played 4.9 million games against itself, it was able to beat by 100 games to nil the version of AlphaGo that had defeated Lee Sedol. What took humans 3000 years to achieve, it did in three days. By day forty it was unbeatable.

As always, my emphasis.