Dealing with climate change has always felt like a slog. Like we need to take our medicine in order to fight off calamity. In some respects this is correct, especially for countries without access to a lot of low-emissions power.

As I slowly wrap up The creativity code by Marcus du Sautoy, this paragraph makes me consider how our training and experience shape, and in some sense even limit, our world:

If you have thought seriously about inequality or capitalism then the thesis of The code of capital by Katharina Pistor is not going to be too shocking. Still, it’s one of the best-articulated explanations of wealth and inequality I’ve seen.

One of the most interesting observations in Radical Markets is that artificial intelligence is (at least currently) beholden to the labour of humans. AI systems require us to produce, and even process and mark up, the data used to train them:

I’m re-reading Radical Markets by Eric Posner and Glen Weyl for a project I’m working on. And I have been struck by this in a new foreword by Vitalik Buterin and Jaron Lanier:

One of the laziest conventions in journalism is the daily stock market report. Something went up or down. And, even though it only just happened, the reporter confidently tells us exactly why.

One of the most galling things about politics and policy is how ancillary “facts” and logic can be. The internet has lowered the barrier to information and expertise to almost nothing, but it has devalued them as well.

The impossible part of reading the news and pontificating about public policy is putting yourself in others’ shoes. This isn’t to say we don’t do it. We all do it all the time. We’re just terrible at it.

Something I hadn’t expected to learn this year was that computer code spits the dummy over the slightest thing. Given a slight change, the barest deviation from what a script was expecting, the whole thing shuts down.
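A toy sketch of what this brittleness looks like in practice (my own illustrative example, not from any book mentioned here): a script that expects dates in one exact format falls over the moment it meets anything else, while a slightly more tolerant version degrades gracefully.

```python
from datetime import datetime

def parse_date_strict(text):
    # Raises ValueError on the barest deviation from the expected format.
    return datetime.strptime(text, "%Y-%m-%d")

def parse_date_tolerant(text):
    # Try a few likely formats before giving up; return None on failure
    # instead of shutting the whole thing down.
    for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%d %B %Y"):
        try:
            return datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
    return None

print(parse_date_tolerant("2021-03-01"))   # parsed
print(parse_date_tolerant("1 March 2021")) # also parsed
print(parse_date_tolerant("not a date"))   # None, no crash
```

The strict version is how most quickly-written scripts behave by default; the tolerant one takes deliberate extra work, which is exactly why so much code spits the dummy.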

I’ve been reading You look like a thing and I love you by Janelle Shane. And, honestly, it’s some of the best skewering of Artificial Intelligence I’ve come across. But amid the funny stories of AI incompetence – only recognising sheep when they’re in fields, thinking a goat in a tree is a giraffe etc. – there’s a serious point about the impact of these limitations.