Wednesday 22 October 2014

How algorithms control your life


Algorithms are choosing the movies we watch, the people we date and the news we read. Stuart Turton investigates how they’re taking everyday decisions out of our hands.

Imagine a shotgun blast scattering birds from the trees, their indignant squawks fading into the clear blue sky. That’s pretty much what happens when you toss the word “algorithm” at major tech companies. In happier times, they paraded their algorithms before us like proud parents, explaining how their little bundles of decision-making joy improved Google searches, filtered the boring from Facebook and made memories on OkCupid. Unfortunately, it turns out that the kids have a wild streak. In the past few months, they’ve been accused of censoring newsfeeds and tampering with our emotions - oh, and rigging the stock market.

Here, we reveal how far the algorithmic tendrils have spread through society, investigating whether companies are neglecting their ethical responsibilities, or whether we’ve simply misunderstood the technology underpinning this brave new world.


ATTACK OF THE ALGORITHMS

Remember that scene in The Wizard of Oz, where Dorothy pulls back the curtain to find the Great and Powerful Oz is actually a bumbling old man yanking levers? Well, something similar happens when you begin looking at algorithms, except the levers are yanking themselves. Rather than the Machiavellian machine code we’ve come to expect, algorithms are simply complicated lists of instructions with a goal at the end, intended to mimic the human decision-making process. This is handy, because being able to make decisions is awesome. In fact, the only thing better than making decisions is having somebody else do it for you, which is probably why algorithms are so ubiquitous. They’re running lifts, making trades on the stock market, and setting book prices on Amazon. And for the most part, they’re doing it rather well.
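To see quite how unmagical that is, here’s a toy of our own devising: a lift controller in a few lines of Python. Everything about it is invented for illustration - no real lift runs on this - but it is, in the strict sense, an algorithm: instructions in sequence, goal at the end.

```python
# A toy algorithm: a list of instructions with a goal at the end.
# This hypothetical lift controller's goal is to serve the nearest
# requested floor; the logic is ours, not any real lift's.

def next_floor(current, requests):
    """Return the closest requested floor, or None once the goal is met."""
    if not requests:
        return None  # nothing left to decide: goal achieved
    return min(requests, key=lambda floor: abs(floor - current))

print(next_floor(3, [7, 4, 1]))  # the nearest request wins: floor 4
```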

“Algorithms are useful everywhere we can put together a sequence of tasks to achieve a goal,” says Paul Firth, a former programmer for Sony Computer Entertainment. He now runs wildbunny.co.uk, which teaches people how to build their very own trading algorithms. “For example, a colleague was just discussing with me today his plan to automate the thermostatic control in his house to be more ‘intelligent’, and understand that, for example, in winter it should turn on early in the mornings so the house has more chance to warm up, and for it to be controllable over Wi-Fi from his phone. At the heart of his plan to improve the heating in his house lies an algorithm for controlling the temperature and responding to outside inputs.”
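Sketched out, that plan might look something like the Python below. Every threshold is a placeholder we’ve made up to illustrate the shape of the thing; a real controller would need sensors, safety limits and rather more state.

```python
# A minimal sketch of the 'intelligent' thermostat Firth describes,
# assuming invented thresholds: fire the boiler earlier on winter
# mornings, and sooner when it's cold outside.
from datetime import datetime

def heating_on(now, inside_c, outside_c, target_c=20.0):
    winter = now.month in (11, 12, 1, 2)
    start_hour = 5 if winter else 7          # winter: start early to warm up
    if not (start_hour <= now.hour < 23):
        return False                         # outside the heating window
    margin = 0.5 if outside_c <= 5 else 1.5  # colder outside: switch on sooner
    return inside_c < target_c - margin

# A cold January morning, 17C inside, 2C outside: the boiler fires.
print(heating_on(datetime(2014, 1, 15, 6, 30), inside_c=17.0, outside_c=2.0))
```

The Wi-Fi part is just plumbing; the decision itself is those few comparisons.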

Netflix’s algorithm does something similar, but instead of controlling how warm your front room is, it’s designed to work out what you’ll fancy watching when you get home from work. It used to do this by analysing the metatags of the programmes you viewed. This meant that if you watched a five-star documentary, Netflix would recommend another highly rated documentary. Toss a few foreign-language films into the mix and Netflix would recommend enough highbrow entertainment to get you through a round of University Challenge.

Naturally, people hated it. The algorithm was doing its job; it just hadn’t learned the one thing necessary to do it well: people lie. We want to like documentaries and foreign-language films, and we’ll be thinking exactly that as we turn them off to watch Zoolander for the 15th time.

Nowadays, Netflix’s recommendation algorithm monitors every decision made on the site, from what’s played, searched for and rated, to when a programme was watched and how long somebody persisted before turning it off. Instead of looking at an individual’s viewing habits, the algorithm groups us by interest areas and viewing patterns, predicting our desires by working out what people just like us enjoyed. To the algorithm, we’re blobs of consumption, rolling across its library slurping up everything in our path. Its job is to sate us, giving us what we need rather than what we claim to want, and it’s evolving all the time.
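Here’s the “people just like us” trick in miniature, assuming a made-up catalogue and the simplest similarity measure going (Netflix’s real system is vastly larger, and secret): score titles by what similar viewers actually finished, not by what anyone claims to like.

```python
# A hedged sketch of behaviour-based recommendation: group viewers by
# what they actually watched, then borrow suggestions from the most
# similar viewers. All data and weights below are invented.
from math import sqrt

# fraction of each title actually watched - behaviour, not stated taste
watched = {
    "alice": {"Zoolander": 1.0, "Documentary A": 0.1},
    "bob":   {"Zoolander": 0.9, "Sitcom B": 1.0},
    "carol": {"Documentary A": 1.0, "Foreign Film C": 0.8},
}

def similarity(a, b):
    """Cosine similarity over the titles two viewers share."""
    shared = set(watched[a]) & set(watched[b])
    if not shared:
        return 0.0
    dot = sum(watched[a][t] * watched[b][t] for t in shared)
    na = sqrt(sum(v * v for v in watched[a].values()))
    nb = sqrt(sum(v * v for v in watched[b].values()))
    return dot / (na * nb)

def recommend(user):
    scores = {}
    for other in watched:
        if other == user:
            continue
        w = similarity(user, other)
        for title, frac in watched[other].items():
            if title not in watched[user]:
                scores[title] = scores.get(title, 0.0) + w * frac
    return max(scores, key=scores.get) if scores else None

# Alice barely finished her documentary but devoured Zoolander, so she
# resembles Bob far more than Carol: the algorithm offers "Sitcom B".
print(recommend("alice"))
```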

“We’ve been working on introducing context into recommendations,” Xavier Amatriain, Netflix’s engineering director, told Wired. “We have data that suggests there’s different viewing behaviour depending on the day of the week, the time of day, the device, and sometimes even the location. But implementing contextual recommendations has practical challenges that we’re currently working on. We hope to be using it in the near future.”
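What might “context” mean in practice? Something as humble as this, perhaps; every slot, device and weight below is our invention, not Netflix’s.

```python
# A guess at contextual re-ranking: the same base score gets nudged by
# time slot and device. All keys and weights here are invented purely
# to illustrate Amatriain's quote.
CONTEXT_BOOST = {
    ("weekday_evening", "tv"):    {"drama": 1.3, "comedy": 1.1},
    ("weekday_morning", "phone"): {"news": 1.4},
    ("weekend", "tablet"):        {"film": 1.3, "kids": 1.2},
}

def contextual_score(base_score, genre, slot, device):
    boost = CONTEXT_BOOST.get((slot, device), {})
    return base_score * boost.get(genre, 1.0)

# Comedy scores a little higher on the sofa on a weekday evening.
print(round(contextual_score(0.8, "comedy", "weekday_evening", "tv"), 2))  # 0.88
```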

YESTERDAY'S INTERESTS

Sounds positively utopian, doesn’t it? Sadly, the problem with algorithms isn’t that they’re trying to climb into your mind, it’s that they’ve had to climb out of somebody else’s first. Take Facebook’s newsfeed algorithm, dedicated to choosing which stories should be shovelled in front of your eager eyes. Facebook refuses to explain the inner workings of the algorithm, but we don’t need it to: the outline is there in the quotes Facebook chief executive Mark Zuckerberg has given over the past few years.

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa,” he told his staff in a conversation reported in The Facebook Effect. An August 2013 blog post elaborated on this point, noting that “when a user likes something, that tells News Feed that they want to see more of it; when they hide something, that tells News Feed to display less of that content in the future. This allows us to prioritise an average of 300 stories out of 1,500 stories to show each day.”
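Put those two statements together and you can rough out the machinery yourself. The scoring terms below are our guesses, not Facebook’s, but the shape - score everything, keep the top slice - follows straight from the blog post.

```python
# A back-of-envelope newsfeed ranker: likes nudge a source's stories up,
# hides push them down, and only the best-scoring slice survives.
# Every number here is an assumption, not Facebook's actual weighting.

def rank_feed(stories, liked_sources, hidden_sources, keep=300):
    def score(story):
        s = story["base_affinity"]            # how close you are to the poster
        if story["source"] in liked_sources:
            s *= 1.5                          # "show me more of this"
        if story["source"] in hidden_sources:
            s *= 0.2                          # "show me less of this"
        return s
    return sorted(stories, key=score, reverse=True)[:keep]

stories = [
    {"source": "squirrel_page", "base_affinity": 0.9},
    {"source": "world_news",    "base_affinity": 0.6},
]
# The squirrel you once liked beats the faraway news - Zuckerberg's point exactly.
print(rank_feed(stories, liked_sources={"squirrel_page"},
                hidden_sources=set(), keep=1))
```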

It’s ideology as methodology, Zuckerberg’s view of the internet made manifest in an algorithm, and it’s one shared by Google and a host of newspapers. Increasingly, the content shown to us is decided by the things in which we’ve previously expressed an interest. The problem with this “filter bubble” approach became evident in August when Michael Brown was shot by a police officer in the US town of Ferguson. Brown was young, black and unarmed. It was news and people cared. Facebook did not. Its backwards-looking algorithm couldn’t see beyond the ice-bucket challenge, or the celebrity gossip “Liked” yesterday, and didn’t show the news as one of its trending stories. Human nature being what it is, people immediately cried bias.

“What if Ferguson had started to bubble but there was no Twitter to catch on nationally?” wonders Zeynep Tufekci, a fellow at the Center for Information Technology Policy at Princeton University. “Would it ever make it through the algorithmic filtering on Facebook? Maybe, but with no transparency to the decisions, I can’t be sure. Would Ferguson be buried in algorithmic censorship?”

Facebook didn’t respond to our request for an interview, pointing us towards its blog instead. Even so, it’s worth noting that, at the time of writing, the company has yet to respond to the Ferguson outcry - probably because it’s embarrassing to admit you’ve spent millions of dollars on cutting-edge stupidity.

“I don’t think people are being realistic,” says Alys Woodward, a technology analyst at IDC. “Algorithms are the results of maths; they’re measuring what you put on Facebook and doing stuff with that data. There’s no agenda there. I think the concerns arise from a lack of transparency. You don’t quite know what’s going on and so it’s easy to believe it’s sinister. I think it’s also the result of a company cultural mindset - to statistics people, to engineers, this is all fine, this is what they do. The problem is they’ve come out in the past and admitted to doing things to improve the algorithm, and they’ve been bitten by it. To them it’s all statistics, but when people are reduced to statistics, we find it worrying.”

Even so, Woodward doesn’t believe this laissez-faire attitude can last. “We’re still feeling our way through this stuff, asking questions, making mistakes. The ethics of what these companies are doing will settle down,” she says. “It’s going to be important because increasingly it isn’t an option not to use these services. Think of jobs: if you don’t use social networks, you don’t exist. I think we’ll start to see companies positioning themselves on their ethics, especially after the recent outcry.”

QUICK AND THE DEAD

Ah, look at the Facebook haters, the Twitter deniers - rubbing their hands with glee because they use Jeeves instead of Google, their filter bubbles well and truly popped. Unfortunately, a tin-foil hat (no matter how shiny) won’t save you from the slithering tentacles of algorithms. Scientists at Stony Brook University in New York claim to have created an algorithm capable of predicting with 84% accuracy whether or not a novel will be a commercial success. As daft as that sounds, they reckon that books heaving with conjunctions, nouns and adjectives sell better than novels riddled with verbs. The algorithm also assesses how interesting the novel is, how original, and the quality of the writing. We ran this story by Penguin Random House, HarperCollins and Hachette, and they all claimed they wouldn’t consider using it, but who knows how they’ll feel in five or ten years if the industry continues to contract?
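Reduced to a cartoon, the Stony Brook idea is just counting. The word lists and arithmetic below are toys of ours - the real study used proper part-of-speech tagging and trained classifiers - but they show the shape of the thing.

```python
# A toy version of the finding: prose leaning on conjunctions (and nouns
# and adjectives) outsells prose leaning on verbs. These word lists are
# invented stand-ins for real part-of-speech tagging.
CONJUNCTIONS = {"and", "but", "or", "yet", "so", "while", "although"}
COMMON_VERBS = {"ran", "went", "said", "cried", "wanted", "took", "promised"}

def bestseller_signal(text):
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    conj = sum(w in CONJUNCTIONS for w in words)
    verbs = sum(w in COMMON_VERBS for w in words)
    return (conj - verbs) / max(len(words), 1)  # positive = 'literary' tilt

print(bestseller_signal("The house was old and quiet, but warm and bright."))  # 0.3
```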

After all, UK firm Epagogix is already using an algorithm to analyse movie scripts, picking out plot tropes and scoring them against their historical performance at the box office. Using this information, Epagogix calculates how much money the film will earn, allowing financiers to decide whether it’s worth the risk of making it. The software’s already being used by producers, which means algorithms are now sitting at board meetings, holding cigars and vetoing art.
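In miniature, that sort of model may amount to little more than a lookup table and a sum. The tropes and dollar values below are entirely invented - Epagogix guards its real weightings - but the principle is this mechanical:

```python
# A hypothetical script-scoring model: price each plot trope against its
# (invented) historical box-office value, then add them up.
TROPE_VALUE_M = {                 # rough 'value' of a trope, $ millions
    "underdog_wins": 40.0,
    "love_triangle": 15.0,
    "ambiguous_ending": -10.0,
}

def forecast_gross(script_tropes, baseline_m=30.0):
    return baseline_m + sum(TROPE_VALUE_M.get(t, 0.0) for t in script_tropes)

print(forecast_gross(["underdog_wins", "ambiguous_ending"]))  # 60.0 ($m)
```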

And it isn’t only culture bending beneath the cold touch of unblinking logic. High-frequency trading accounts for around 56% of traffic on US stock exchanges, with the algorithms responsible going about their business with almost no human oversight. These algorithms are capable of making billions of dollars’ worth of trades in milliseconds, which has led the United States Congress, the FBI, the Department of Justice and the New York State Attorney General to open investigations into whether they rig the market in favour of those with the cleverest toys.

“The stock market has no human interface left, it’s this illegible complexity,” says Kevin Slavin, who runs the Media Lab at MIT. “If anything, we’re further away from understanding it than we’ve ever been. I’m not worried about a future in which computers dominate humanity, I’m worried about a present in which they crash spectacularly.”

Which brings us neatly back to The Wizard of Oz. Pull back the curtain today and you’ll find algorithms yanking the levers, but unlike the old man, they won’t stop when the machine breaks. Worse still, they’re just as biased, belligerent and blinkered as the people who created them. There’s a reason companies don’t want to talk about algorithms anymore. It’s because we’ve started asking questions, and just like the Great and Powerful Oz, they don’t have the answers.