Seven secrets of the online world (aka a beginner’s guide to algorithms)

Hannah Fry, presenter of this year’s Royal Institution Christmas Lectures, reveals the weird and wonderful world of data crunching – and just what it can do.

The word “algorithm” isn’t exactly one that inspires excitement wherever it goes. But like it or not, it’s a word that’s here to stay.

If you ask me, algorithms are a strange bundle of contradictions. They’re used so much online – for example every time you search for something – that they’re almost mundane, but they remain strangely alien at the same time. They’re ubiquitous and invisible. And they’re having a profound impact on virtually every aspect of our society, yet somehow manage to be surprisingly easy to ignore…

But ignore no more – here’s a handy guide to help you navigate this bewildering new world.

1. What is an algorithm?

You can think of an algorithm as a series of instructions: like a recipe. It takes a bunch of ingredients, jumbles them up and spits out some kind of output at the end. The ingredients are almost always some kind of data: maybe your internet search term; a scan of your face; or a long list of the music you’ve downloaded. And the outputs are things we all recognise: the front page of Google; a facial recognition match unlocking your phone; or a new band suggestion on Spotify.
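
For the curious, here’s what that recipe idea looks like as a few lines of code. This is a minimal sketch in Python – not how Spotify actually does it, and the listening history and “catalogue” are invented purely for illustration – but the shape is the same: data goes in, a fixed series of steps runs, and a suggestion comes out.

```python
# A toy recommendation algorithm: ingredients in, suggestion out.
# The data and the rules are made up for illustration only.

from collections import Counter

def suggest_band(listening_history):
    """Take a list of (band, genre) plays and suggest a new band."""
    # Step 1: count how often each genre appears in the history.
    genre_counts = Counter(genre for _, genre in listening_history)

    # Step 2: find the listener's favourite genre.
    favourite_genre = genre_counts.most_common(1)[0][0]

    # Step 3: look up a new band in that genre (a made-up catalogue).
    catalogue = {"pop": "New Pop Band", "jazz": "New Jazz Band"}
    return catalogue.get(favourite_genre, "Something completely different")

history = [("Band A", "pop"), ("Band B", "pop"), ("Band C", "jazz")]
print(suggest_band(history))  # -> New Pop Band
```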

2. Who uses them?

You’ll find them in all the obvious places online where there are searches and personalisation – from online shopping to streaming services. But algorithms are being deployed far more widely. In the past few years, people have started outsourcing and automating all kinds of human decisions.

Algorithms are in courtrooms, helping judges decide what to do. In football clubs, helping managers decide how best to coach their players. In hospitals, helping doctors decide who to treat. In company recruitment offices, deciding who should be given a job. Basically: they’re everywhere.

3. They make mistakes.

The amount that can be achieved by these monumental new tools is staggering. But be in no doubt – these things are far from perfect.

That might sound obvious – after all, anyone who has ever been recommended a product they didn’t want to buy online, or a film they didn’t want to watch, already knows that algorithms don’t get it right all of the time. But somehow, it’s quite hard to remember that fact when we’re distracted by the shiny promises of new technology.

There are reports that some hotels are considering switching from key cards to facial recognition to let their guests into their rooms. Even if such a system worked 90 per cent of the time, a hotel with a hundred rooms would still have at least ten guests a day traipsing up and down the corridors asking for help after being locked out. Not the best setup for a relaxing stay.
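
If you want to check that arithmetic, here’s the back-of-the-envelope version – a sketch only, and the one-attempt-per-guest-per-day assumption is mine:

```python
# Back-of-the-envelope calculation for the hotel example above.
# Assumes one guest per room, each unlocking their door once a day.

rooms = 100          # guests trying to get in each day
success_rate = 0.90  # the system recognises the right guest 90 per cent of the time

expected_lockouts = rooms * (1 - success_rate)
print(f"Locked-out guests per day: {expected_lockouts:.0f}")  # -> 10
```

And that’s the floor: every extra trip to the room is another chance to be locked out.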

But mistakes can have serious consequences. If an algorithm is used to filter out CVs for a job, or to assign benefits, or to assess how well a teacher is performing, the inevitable errors can end up having a major impact on people’s lives. Especially when we have a habit of over-trusting the say-so of a computer.


4. We find them hard to overrule.

We’ve all heard stories about people blindly following their sat navs, only to end up at the edge of a cliff. And while it’s easy to laugh at their mistakes, it’s also important to be honest with ourselves – believing the say-so of our technology is really difficult to avoid.

And this can be a serious problem when you start involving algorithms that can make mistakes in situations where people’s lives are at stake.

Take, for instance, one particularly controversial use of algorithms. When a defendant is accused of a crime, an algorithm can predict their chances of going on to reoffend. This prediction is used by judges in some courtrooms around the world to decide whether or not to grant the defendant bail and, in some cases, how long their sentence should be.

Last year, some academics showed that these algorithms did no better at predicting what lay ahead for a defendant than if you just gave a room full of people the same information and asked them to guess whether the defendant would commit another crime or not.

In the Christmas Lectures, we’ll argue that there is a place for algorithms in the courtroom. But you’ve got to be careful, because here’s the trouble: which is harder for a judge to ignore? A prediction from a super-swanky piece of artificial intelligence, or a stab in the dark from a bunch of complete strangers?

5. They don’t make the same mistakes for everyone.

The algorithms might be based on maths, but that doesn’t mean they’re not biased.

For example: an early recruitment algorithm worked out that people were likely to stay longer at a firm if they had a shorter commute to work. As a result, it filtered out anyone who lived more than an hour away from the office.

It seems fair and sensible on the surface. But only those from affluent backgrounds could afford to live close to the city centre where the office was located. Without meaning to, the algorithm had just screened out anyone from a lower socio-economic group.
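
To see how innocently this happens, here’s a hypothetical sketch of such a filter. Nothing in it comes from a real recruitment system – the candidates, neighbourhoods and one-hour cut-off are all invented – but notice that the rule itself never mentions income or background:

```python
# Hypothetical commute-time filter, invented for illustration.

candidates = [
    {"name": "A", "commute_minutes": 25, "area": "city centre (expensive)"},
    {"name": "B", "commute_minutes": 45, "area": "inner suburb (expensive)"},
    {"name": "C", "commute_minutes": 80, "area": "outer suburb (affordable)"},
    {"name": "D", "commute_minutes": 95, "area": "commuter town (affordable)"},
]

# Keep only candidates within an hour of the office...
shortlist = [c for c in candidates if c["commute_minutes"] <= 60]

# ...and everyone from the affordable areas quietly disappears.
for c in shortlist:
    print(c["name"], "lives in the", c["area"])
```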

These kinds of unintentional biases are everywhere: if you build an algorithm to detect cancer cells in breast tissue, and only use the scans from hospitals with predominantly Caucasian patients, you’ll end up with something that isn’t as good at spotting cancer in women from other ethnicities.

Likewise, facial recognition, which was largely designed and tested on groups that lacked diversity, has been notoriously bad at recognising faces with darker skin tones.

The consequences are serious: this isn’t just about the annoyance of not being able to unlock your phone. If driverless cars can’t recognise (and avoid) certain pedestrians, or passport control booths only work for certain types of people, then our algorithms are effectively automating an unequal society.

Thankfully, this is something the people who create algorithms have become focused on in the past couple of years, and are actively trying to improve.

6. How does an algorithm know when it’s right?

But here’s my biggest warning of all. Watch out for the snake oil. There’s a worrying trend of algorithms based on junk science. Thankfully, there are ways you can spot them. Start by asking yourself this simple question: how does the algorithm know when it’s right?

For a decent algorithm, this is easy to answer. Take facial recognition: you’re either the person it’s looking for or you’re not, so there’s a clear way of checking whether the algorithm got it right.
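
In code terms, “knowing when it’s right” just means there’s a ground truth to score against. Here’s a tiny sketch with made-up data:

```python
# When there's a clear ground truth, an algorithm's answers can be scored.
# Both lists below are invented for illustration.

predictions  = [True, False, True, True]   # the algorithm's answers: "is this the right person?"
ground_truth = [True, False, False, True]  # who it actually was

accuracy = sum(p == t for p, t in zip(predictions, ground_truth)) / len(ground_truth)
print(f"Accuracy: {accuracy:.0%}")  # -> Accuracy: 75%
```

The trouble with the algorithms in the next paragraph is that no trustworthy ground-truth list exists to score them against.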

By contrast, there are some algorithms that claim to be able to tell someone’s emotions and personality from video footage. They’re used in interviews to assess whether someone will be a good fit for a company. Here’s the question: how can an algorithm like that possibly know whether it has worked out your emotions correctly?

Let’s be honest: can you even describe your own emotions? Especially in something like an interview, when you’re probably a mixture of excited and apprehensive, all while pretending to be confident.

7. Ask yourself who they’re working for.

A famous survey from 2016 asked people who a driverless car should save if it were forced to choose, in an unavoidable collision, between its passengers and innocent bystanders.

A whopping 76 per cent of respondents felt it would be more moral for driverless vehicles to save as many lives as possible, even if that meant killing the people inside the car.

And yet, when the same study asked participants if they would actually buy a car which would murder them if the circumstances arose, they suddenly seemed reluctant to sacrifice themselves for the greater good... How very surprising.

This is something worth remembering outside of the world of driverless cars. Algorithms are built to achieve an objective, but the desired outcome can change depending on who you ask and how you ask them.

Often, what’s best for you will align with what the algorithm is trying to do: insurance algorithms might help drive down your costs, facial recognition algorithms might help make queues move faster (at least for some people). But it’s important to recognise who the algorithm is working for: YouTube, Spotify, Audible, Facebook and the like – they’re all motivated to serve you things that you like, but not because they have a deep-rooted desire to see you entertained. They’re merely trying to keep you on their sites as long as possible, to sell you things and serve you adverts.


Find out more about the awesome power of this brave new world, for both good and bad, in the Christmas Lectures.

The Royal Institution Christmas Lectures 2019, Secrets & Lies – The Hidden Power of Maths, air on BBC Four at 8pm on the 26th, 27th and 28th of December.


 
