If you’re going to change the world, you must reflect it first

I find taking public transport or hopping a plane immensely stressful. Not because of the shoddy infrastructure, waiting around, or poor service. Because I’m 6′4″ with disproportionately long legs in a world built by people who aren’t.

As I continue to read Coders, I’m increasingly worried about how this same phenomenon will play out in a world full of algorithmic black boxes: code so complex and systems so arcane that even their creators struggle to understand them.

Techies love to talk about scale and putting their creations in front of millions. But for this to work, they themselves need to be drawn from a representative pool.

Otherwise you get self-driving cars that are more likely to hit black people. Or image recognition that thinks black people are gorillas.

…then Alciné scrolled over to a picture of himself and a friend, in a selfie they’d taken at an outdoor concert: She looms close in the view, while he’s peering, smiling, over her right shoulder. Alciné is African American, and so is his friend. And the label that Google Photos had generated? “Gorillas.” It wasn’t just that single photo, either. Over fifty snapshots of the two from that day had been identified as “gorillas.”

This isn’t only a Google problem. Or even a Silicon Valley problem. There are also stories of algorithms trained in China and South Korea that have trouble recognising Caucasian faces.

As a journalist with a diverse ethnic and cultural background, I had trouble understanding why my editors took so much convincing to run foreign stories. With a family spread around the globe, I could see myself in the Rohingya as much as in an Australian farmer.

These issues are linked – what we value, notice and think of as “normal” are all informed by our personal stories. If you grow up or work in a monoculture, that will influence the issues you see, the solutions you propose and the contingencies you plan for.

But the world isn’t a monoculture. There are 6′4″ people who would like to ride the bus. There will be people who aren’t like you but who need to cross the street safely, or be judged fairly.

Who will be deeply offended by racial epithets – epithets tied to the very reasons they aren’t represented in a database in the first place.

If you’re going to try to change the world for the better, you need to be of the world. There will always be edge cases, but without diversity they will be systemic. They will be disastrous.

…why couldn’t Google’s AI recognize an African American face? Very likely because it hadn’t been trained on enough of them. Most data sets of photos that coders in the West use for training face-recognition are heavily white, so the neural nets easily learn to make nuanced recognitions of white people—but they only develop a hazy sense of what black people look like.

As always my emphasis.
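
To make the mechanism concrete, here’s a toy sketch of that training-data problem. It’s mine, not Thompson’s or Google’s: a made-up nearest-centroid “recogniser” over synthetic vectors, with every name and number invented purely for illustration. The point is just that identities the model rarely sees get recognised less reliably.

```python
# Toy illustration: under-representation in training data degrades accuracy.
# Nothing here resembles a real face-recognition system.
import numpy as np

rng = np.random.default_rng(0)
DIM = 32            # dimensionality of our fake "face embedding"
IDS_PER_GROUP = 10  # distinct identities in each demographic group
NOISE = 1.0         # per-photo variation around each identity's true vector

# Each identity has one true embedding; photos are noisy samples of it.
true_faces = {g: rng.normal(size=(IDS_PER_GROUP, DIM)) for g in ("A", "B")}

def photos(group, identity, n):
    return true_faces[group][identity] + rng.normal(scale=NOISE, size=(n, DIM))

# Imbalanced training set: 100 photos per identity for group A, only 2 for B.
train_counts = {"A": 100, "B": 2}
centroids, labels = [], []
for g, n in train_counts.items():
    for i in range(IDS_PER_GROUP):
        centroids.append(photos(g, i, n).mean(axis=0))  # the learned "model"
        labels.append((g, i))
centroids = np.array(centroids)

def recognise(photo):
    # Nearest learned centroid: a crude stand-in for a trained classifier.
    return labels[np.argmin(np.linalg.norm(centroids - photo, axis=1))]

# Evaluate on fresh photos of every identity.
for g in ("A", "B"):
    hits = sum(
        recognise(p) == (g, i)
        for i in range(IDS_PER_GROUP)
        for p in photos(g, i, 50)
    )
    print(f"group {g}: {hits / (IDS_PER_GROUP * 50):.0%} recognised correctly")
```

Run it and group B, the under-represented one, should come out with noticeably lower accuracy than group A, purely because its “model” was estimated from fewer examples. The classifier isn’t malicious; it just has a hazier picture of the people it saw less of.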

Why techies think they can change the world

People who excel at programming, notes the coder and tech-culture critic Maciej Cegłowski, often “become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence.”

This is from Coders, a book I only just downloaded but am absolutely tearing through.

The subtitle is “how software programmers think, and how their thinking is changing our world”, which is a clue to what Cegłowski is referring to.

When you’re writing code you’re trying to break a process down: first to first principles, then into easy steps as you go along.

You build it back up in an environment over which you have a huge amount of control, one that thrives on trial, error and iteration.

One where something usually either works or breaks obviously. Everything is very structured and built upon logic.

But by this point you’ve also abstracted away so much that you can trick yourself into thinking you’ve mastered all the nuances, not just how to get from A to B.
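
As a trivial sketch of that shape of thinking, here’s what “breaking a process down into easy steps” tends to look like in practice. Every function and name below is invented:

```python
# Every name here is made up; the point is the shape of the thinking,
# not the domain.
def load_draft(path):
    with open(path) as f:          # step 1: get the raw material
        return f.read()

def tidy(text):
    # step 2: normalise it into a known-good form
    return "\n".join(line.rstrip() for line in text.splitlines()).strip()

def publish(path):
    draft = load_draft(path)
    clean = tidy(draft)
    assert clean, "empty post"     # works, or breaks loudly and obviously
    return clean
```

Each step is small, testable and composable, and when one is wrong it usually fails loudly. It’s easy to see how mastering this loop could start to feel like mastering the underlying problem itself.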

It’s also an alluring way of thinking, one you begin applying to other problems in your life, in a similar way to how you can start thinking in another language once you’re sufficiently steeped in it.

This is a fantastic book so far. Hope to post some more.

As always my emphasis.

We need more technological ‘bilingualism’

Something that bugs me is the widespread, continued segregation of technology. Technology might be something you use, but rarely is it something you make or even understand.

I’m currently thinking of all the people I WhatsApp whose eyes glaze over at the mention of encryption.

This is often reflected in organisations, with technology teams kept separate from the rest of us. There are obviously some exceptions, notably companies where the product is technology. But more often there is little overlap between those who make the product (etc.) and the nerds who make it possible.

The overlap often takes the shape of a particular individual or two. But we all have to become the nerds who make it all possible.

Technology is no longer something separate from the rest of our lives. If it ever was.

In newsrooms, the tech nerds are often off somewhere else, leaving the overlap to take the form of data or multimedia journalists. This isn’t enough. There are too many important stories, too much to miss, misunderstand or underestimate, and too many productivity enhancements to forgo, for computers to be just a blunt instrument.

Anyway, all of this came to mind as I was reading an MIT Tech Review article about the launch of a new multi-disciplinary college at MIT:

“The world needs bilinguals,” said MIT president Rafael Reif. In other words, the world needs engineers with a better grounding in the liberal arts, who can build more ethical products and platforms, as well as policymakers and civic leaders with a better understanding of technology to help guide responsible innovation…

…Faculty at the new college will work with other MIT departments to cross-pollinate ideas. Classes will also be designed so that technical skills, social sciences, and the humanities are bound up together within each course rather than learned separately.

I agree. But I’m not sure the solution lies in more cross-disciplinary study (although that is definitely necessary) as much as it does in employers and others valuing people who aren’t cookie-cutter candidates, who have more diverse or even tangential experience.

I’m not sure how to do that.

As always my emphasis.

Inconsistent scapegoating

The people who have always been able to do something about this — the ones building the software — have always known when their software was doing something wrong. It’s their job to find bugs, and if they’re worth their salt, they’re always looking for flaws in the overall design, as well as the functional components of what they’re building. They know that violating user privacy without consent is a bug. Operating in a way inconsistent with the user’s expectations is a bug. Coercing people into using your product with psychological tricks is a bug.

Though many of these poor designs are instigated by higher-ups, they are ultimately implemented by professionals with a deep knowledge of their field. Designers know when they’re mocking up screens that prey on people’s most basic desires; developers know when they’re implementing designs that would feel incredibly wrong as the end user.

This is from an old article by Matt Baer about problematic business models on the internet in general.

It isn’t about Facebook or any specific scandal.

Still, it’s an interesting thought. It isn’t just Zuckerberg or Kalanick or [pick your leader of a scandal-generating tech company].

The people who write the code and design the interfaces know what they’re doing. Perhaps we should hold them as morally responsible as we do financiers?

As always my emphasis.