Finding joy in mathematics

I have always regretted losing touch with maths during high school. Part of this is undoubtedly my fault. I wasn’t a great student, or really even that interested in being one, until midway through university. But there’s also something a bit broken in how we approach maths, both in school and in life generally.

I’m talking about maths as a purely abstract phenomenon. A series of formulas and steps, divorced from how they relate to the real world. Where multiplication is a table to be memorised and trigonometry takes place purely within a textbook.

This kind of mathematics not only strips away a lot of the beauty and joy, but relegates the subject to one only grasped by those who excel in a particular system. It turns maths into something like an ecclesiastical language, almost scary to the unindoctrinated.

This is a shame, really, as Lara Alcock writes in Mathematics Rebooted:

“…mathematical thinking is not magical. It is often thought of that way in our culture, where it is common to have a demanding career or to run a happy and successful household, yet to say, ‘Oh, I am terrible at maths.’ I hear this a lot, and every time it is clear to me that it cannot really be true: this person is obviously a capable thinker.”

To some extent abstraction is necessary in schools as currently constituted. Students must be judged against something objective, and must be taught at scale. But does this really require so much concentration on the doctrine, to the detriment of the art?

Why is maths largely rote, rather than logic? Abstract rather than practical?

“Your mathematical knowledge might be rusty and full of holes, but people who can function well in our complicated world must be good general thinkers, and mathematics is just general thinking about abstract concepts.”

When you read about the likes of Newton and Galileo, maths jumps out as a tool for problem solving and creativity. In The Triumph of Numbers, I. Bernard Cohen explores how numbers and maths have evolved over time, and describes a plethora of interesting applications.

Including the algebra of morality:

“[Francis Hutcheson] used this algebraic relationship to translate several commonsense notions about morality into mathematical language. The first is that if two people have the same natural ability to do good (A), the one who produces more public good (M) is more benevolent (B). Conversely, if two people produce the same amount of public good, the one with more ability is less benevolent (since it was in that person’s ability to do more). The plus/minus sign in the equation allowed Hutcheson to factor in self-interest.”

“…he concluded from his algebra that “in equal Numbers, the Virtue is as the Quantity of the Happiness, or natural Good.” That is, he taught that “Virtue is in a compound Ratio of the Quantity of Good, and Number of Enjoyers.” This led him to the important conclusion that “that Action is best, which accomplishes the greatest Happiness for the greatest Numbers.” Here is a precursor, by more than 50 years, to Jeremy Bentham’s (1748–1832) utilitarian philosophy of “the greatest happiness for the greatest number.”

This is a maths of reasoning and personal application. He wasn’t trying to calculate the change from a $20 or the tensile strength of a beam.
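The relation implied by that quote is simple enough to sketch in code. Here is my own reading of it (an illustration only, not Hutcheson’s notation; the plus/minus sign Cohen mentions would further adjust the public good for self-interest):

def benevolence(public_good, ability):
    # Benevolence as the public good produced per unit of natural ability:
    # more good with the same ability means more benevolence, while the
    # same good with greater ability means less.
    return public_good / ability

# Equal ability, more public good => more benevolent.
print(benevolence(10, 5) > benevolence(6, 5))   # True
# Equal public good, more ability => less benevolent.
print(benevolence(10, 8) < benevolence(10, 5))  # True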

Similarly, in The Calculus Story, David Acheson produces probably the best explanation of how to calculate the area of a circle, by imagining a polygon with more and more sides (I think I finally understand pi r squared).
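To give a flavour of the polygon argument (my own numerical sketch, not Acheson’s presentation): a regular n-sided polygon inscribed in a circle of radius r has area ½·n·r²·sin(2π/n), and as n grows that area closes in on πr².

import math

def inscribed_polygon_area(n, r):
    # Area of a regular n-sided polygon inscribed in a circle of radius r,
    # built from n identical triangles with apex angle 2*pi/n at the centre.
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

r = 1.0
for n in (6, 12, 96, 10_000):
    print(n, inscribed_polygon_area(n, r))
print("pi * r^2 =", math.pi * r**2)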

As you might be able to tell, I’ve been reading a fair number of maths books recently, and the thing that strikes me is how differently mathematicians approach the subject compared with how I was taught it in school.

Mathematicians work through subjects largely by reasoning and logic, not necessarily ever more complicated formulae. They also emphasise the problem-solving nature of maths, often asking you to think about a problem and come up with your own generally applicable rule.

Putting this into practice, I have also been studying maths using Brilliant.Org, which has a similar philosophy:

“In school, people are often trained to apply formulas to rote problems. But this traditional approach prevents deeper understanding of concepts, reduces independent critical thinking, and cultivates few useful skills…The capacity to think critically separates the great from the good. We can grow this capacity by trying — and often failing — to solve diverse, concrete problems.”

I have only finished one module on Brilliant and I’m not sure how it will work as a method for the masses. But I will report back in a couple of months.

In the meantime, I’ll leave you with Lara Alcock again, whose book I really recommend:

“School mathematics tends to come in horizontal slices: children learn basic ideas about several topics, then, the next year, they learn slightly more advanced ideas about those topics, and so on. This is entirely sensible. But it means that the vertical links are not very salient, which is important because mathematics can be seen as a highly interconnected network in which more sophisticated ideas build upon more basic ones. So this book’s approach is to focus explicitly on the vertical links. Each chapter starts with an idea that is bang in the middle of school mathematics—primary school mathematics in many cases—then takes a tour upward through related concepts, arriving eventually at ideas that people encounter in more advanced study.”

Knowledge isn’t linear

Human progress isn’t a straight line. This is as true of knowledge as anything else. Only in a computer game does knowledge accumulate through discrete ideas, with defined benefits and pathways.

In real life, knowledge is messy, unpredictable and often the result of jamming together ideas and experiences in surprising ways. While experts can identify pertinent questions and fields for investment, history is littered with examples of unexpected intellectual explosions.

All of this came to mind as I read that the Australian government wants recipients of research grants to prove that their work “advance[s] the national interest”.

Let’s overlook that the “national interest” and common sense are subjective, ever changing, and at least partially driven by intellectual progress.

One such intellectual explosion is chronicled in The Unfinished Game. Keith Devlin tells the story of a series of letters between Blaise Pascal and Pierre de Fermat as they try to solve what is essentially a problem for gamblers.

Called the problem of points, it posits a game in which two players have equal chances of winning each round. For some reason the game is disrupted before anyone has won, and the question is how to divide the pot fairly.
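The spirit of the solution the letters arrive at is to enumerate every way the remaining rounds could play out and split the pot in proportion to each player’s chances of winning. Here is a rough sketch in Python (my own illustration of that enumeration, not a transcription of the letters):

from fractions import Fraction
from itertools import product

def share_for_a(a_needs, b_needs):
    # At most a_needs + b_needs - 1 further rounds can decide the game.
    # Enumerate every equally likely sequence of those rounds and count
    # the ones in which player A reaches the required number of wins.
    rounds = a_needs + b_needs - 1
    a_wins = sum(1 for seq in product("AB", repeat=rounds)
                 if seq.count("A") >= a_needs)
    return Fraction(a_wins, 2 ** rounds)

# If A needs one more win and B needs two, A should take 3/4 of the pot.
print(share_for_a(1, 2))  # 3/4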

Pascal and Fermat exchanged a series of letters on this problem. Setting aside that the process wouldn’t pass muster as research by modern standards, would the question have passed the politicians’ test? I doubt it. But the impact has been profound.

“Within a few years of Pascal’s sending his letter, people no longer saw the future as completely unpredictable and beyond their control. They could compute the likelihoods of various things’ happening and plan their activities—and their lives—accordingly. In short, Pascal showed us how to manage risk. His letter created our modern view of the future.”

From what else I’ve read, the author oversells the letters a little. But they were undoubtedly a precursor of modern probability theory, part of a movement that profoundly changed the world.

“Even those who are not schooled in the mathematics of calculating odds know that the future is not a matter of blind fate. We can often judge what is likely to happen and plan accordingly. Yet before Pascal wrote his letter to Fermat, many learned people (including some leading mathematicians) believed that predicting the likelihood of future events was simply not possible.”

“Without the ability to quantify risk, there would be no liquid capital markets, and global companies like Google, Yahoo!, Microsoft, DuPont, Alcoa, Merck, Boeing, and McDonald’s might never have come into being.”

“Within a hundred years of Pascal’s letter, life-expectancy tables formed the basis for the sale of life annuities in England, and London was the [centre] of a flourishing marine insurance business, without which sea transportation would have remained a domain only for those who could afford to assume the enormous risks it entailed.”

All of this isn’t to say that governments can’t or shouldn’t roughly guide their research dollars. Some questions are more pressing or have more potential than others, which is why the current Australian system includes peer review.

But the idea that knowledge is a simple widget, or a knob to be turned up or down for national benefit, is an ahistorical view of progress.

Tesla is finding the limits of “naive innovation”

“If that car was made anywhere else, and Elon wasn’t part of the manufacturing process, they would make a lot of money,” Munro said in an interview. “They’re just learning all the old mistakes everyone else made years ago.”

From an interesting Bloomberg article. A group of analysts tore apart a Tesla Model 3 and were impressed by the technology, but found the execution lacking.

This is a really common narrative out of Silicon Valley. It’s a place that lauds “naive innovation” – that an outsider can come up with solutions the experts just overlook.

“Munro found that Tesla reduced the amount of wiring snaking through the car by concentrating a lot of the electronics in small circuit boards. That’s knowledge from Silicon Valley that the carmakers don’t have. The trick now is turning this established technological advantage into consistent profits—and to do that Musk needs to hire executives with experience in the nuts and bolts of carmaking…”

In some respects naive innovation has served Silicon Valley well. SpaceX is a perfect example. But it can discount valuable experience.

My favourite is this interview with Spotify co-founder Daniel Ek:

When Google turned him down for a job, he thought, “I’ll just make my own search engine, it can’t be that hard,” he said. “It turns out it’s really, really hard.”

It also reminds me of the many anecdotes in Bad Blood, the story of biotech startup Theranos, whose founder has since been indicted for fraud.

Theranos set out to invent, and claimed to have invented, a device that could run myriad blood tests on just a drop of blood. But there’s a reason the many well-capitalised and well-incentivised biotech companies before it had failed.

From Bad Blood:

“The ability to perform so many tests on just a drop or two of blood was something of a Holy Grail in the field of microfluidics. Thousands of researchers around the world in universities and industry had been pursuing this goal for more than two decades…”

“But it had remained beyond reach for a few basic reasons. The main one was that different classes of blood tests required vastly different methods. Once you’d used your micro blood sample to perform an immunoassay, there usually wasn’t enough blood left…”

“Had Steve Burd been allowed inside the East Meadow Circle lab, a network of rooms located in the center of the low-slung building, he would have noticed that it didn’t contain a single Theranos proprietary device. That’s because the miniLab was still under development and nowhere near ready for patient testing. What the lab did contain was more than a dozen commercial blood and body-fluid analyzers made by companies such as Chicago-based Abbott Laboratories, Germany’s Siemens, and Italy’s DiaSorin…”


The experts quoted in the Bloomberg piece gush about Tesla’s technology. But there’s a reason many of us are bearish about Tesla – 150 years of institutional knowledge isn’t something to be scoffed at.

What makes a good shot?

“When we observe a success or a failure, we are observing one data point, a sample from under the bell curve that represents the potentialities that previously existed. We cannot know whether our single observation represents the mean or an outlier, an event to bet on or a rare happening that is not likely to be reproduced.”

This is another passage from The Drunkard’s Walk by Leonard Mlodinow. It highlights a common bias, especially in public life: that we judge actions purely by their results.

We readily assign praise for success and blame for failure, despite not knowing the probabilities and tradeoffs, or how the decision was made.

Take basketball, for instance. The NBA regular season is about to start, so we can expect plenty of oohing and aahing. But a shot going in is not what makes it good, just as a missed shot is not necessarily bad.

A good basketball shot is one that maximises the expected value – taking into account both the probability of scoring (the player’s skill, whether they are guarded etc.) and the value of the shot (one, two or three points). A good shot is one that you can take again and again, regardless of whether you miss one or even a sequence, leaving you ahead in the long run.
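As a toy illustration of that trade-off (the shooting percentages here are made-up numbers, not real data):

def expected_points(make_probability, shot_value):
    # Expected value of a shot: chance of making it times the points it is worth.
    return make_probability * shot_value

# Illustrative, assumed numbers only.
open_three = expected_points(0.38, 3)      # about 1.14 expected points
contested_two = expected_points(0.45, 2)   # about 0.90 expected points
print(open_three > contested_two)  # True: the higher-percentage shot isn't always the better one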

A good shot is unlikely to be the one that makes you gasp, or that you remember later. A high degree of difficulty isn’t what we are looking for. Hitting an off-balance shot with time running out should be the exception.

It’s time for other arguments about climate change

The “debate” about climate change is so poisoned it has brought down at least two Australian prime ministers, and the very term is redacted from US government websites.

So maybe it’s time to retire, or at least rein in, this line of argument. The externalities produced by burning coal and oil, by factory farming and the like, have many facets that can be tackled. Notably, health.

Take this recent study on air pollution from researchers at Arizona State:

“We find that a 1 microgram-per-cubic-meter increase in average decadal exposure (9.1% of the mean) increases the probability of receiving a dementia diagnosis by 1.3 percentage points (6.7% of the mean). This finding is consistent with hypotheses from the medical literature.”

“Burgeoning medical literature provides reason to suspect that long-term exposure to elevated pollution levels may permanently impair older adults’ cognition, especially in the case of particulates smaller than 2.5 microns in diameter, commonly known as “fine particulate matter” or “PM2.5”. The small size of PM2.5 allows it to remain airborne for long periods, to penetrate buildings, to be inhaled easily, and to reach and accumulate within brain tissue. The accumulation of particulates in the brain can cause neuroinflammation, which is associated with symptoms of dementia…”

So, emissions are not just harmful to the environment, but human health as well. The suffering isn’t only in the long term, evident only in a computer model, but in the health of real people living right now.

It’s also worthwhile thinking about who bears the brunt of this. The workers in industries like mining, obviously. But as a recent hurricane in North Carolina showed, polluting industries are also often situated in poorer areas:

“Even after adjusting for socioeconomic factors — and even without a hurricane — life expectancy in southeastern North Carolina communities near industrial meat growers is lower than in places without these hog operations. A recent study published in North Carolina Medical Journal found that residents near the industrial animal operations had higher rates of all-cause mortality, infant mortality, mortality from anemia, kidney disease, tuberculosis, and septicemia, and higher rates of emergency room visits than the residents in the control group.”

As Ketan Joshi has noted, denying climate science is now akin to being an anti-vaxxer, both in the scientific illiteracy required and in the harm being wrought. But we can’t expect to win this fight, especially in the short time we have to take action. Instead, we should change the subject. There are plenty of other arguments to make.

Subscribe!

Following your *ahem* favourite blog can be a pain. Subscribe and get new posts right in your inbox once a week.
