Thursday 22 September 2016

Can The Internet’s Abuse Problem Be Solved?

Sarah Dobbs looks into the ways various platforms are trying to combat trolls

At this point, there's no denying it: the internet has a massive problem with abuse. It's not a new phenomenon, of course. For almost as long as the web has existed, people have been using it to be horrible to one another just as often as they've been using it to share knowledge or make friends. Can you remember the first time you heard the phrase "don't read the comments"? It was probably years ago. More than a decade, even.


Recently, however, the issue seems to have escalated. It's not just that there are people online deliberately trying to start arguments - it's that there are people deliberately trying to ruin other people's lives, just for the fun of it.

The most recent example to hit the headlines is the Twitter harassment of comedian Leslie Jones. Trolls started to target her when it was announced she'd been cast in director Paul Feig's Ghostbusters remake, and their attacks reached critical mass in the week the film was actually released. There isn't space here to discuss their motivations in great detail, but suffice to say, the torrent of vitriolic racist abuse aimed at her wasn't just coming from disappointed Bill Murray fans.

Twitter Terror


Jones fought back at first, but eventually conceded defeat and abandoned her Twitter account for a while, until Twitter CEO Jack Dorsey responded to her pleas for help. The site deleted the accounts of many of the trolls Jones had highlighted and issued a statement confirming that Twitter doesn't condone abusive behaviour and will take action against it. Jones resumed tweeting from her account, but the trolls hadn't finished with her - at the end of last month, hackers accessed her personal files and posted many of her personal, private photographs on her website, along with some racist memes.

The Twitter abuse hasn't stopped, either. A visit to her account - @lesdoggg - on Twitter will amply display the nastiness she receives on a daily basis, and could cause you to lose some faith in humanity in the process. She's just one example, though: other users may not attract the volume of hatred she does, but there's still plenty to go around. Sure-fire ways to render your Twitter Mentions unreadable include participating in political discussions (especially if you use hashtags) or being a woman with an opinion on literally anything, but even avoiding those things is no guarantee you won't one day find yourself a target.

Of course, it's not just Twitter where this happens. Anywhere people can interact with people they don't know is a potential minefield, whether that's Instagram, Tumblr, or just the comments section of your favourite news site. So far, no-one really seems to have much of a plan for dealing with it. Is this just how things are going to be online now? Will internet hatemobs eventually drive more sensible types offline entirely, or are we all going to have to be extra careful to censor ourselves to avoid calling down their rage on our heads? Let's take a look at the strategies that are currently in place for cutting off abuse.

Reporting


Most sites seem to use the same standard procedure for tackling abuse: install some kind of 'report' button, have users report other users for posting inappropriate material, and then employ someone to review those reports as they come in. At first glance, it seems like it should work, right? It probably would if we were just talking about a handful of people occasionally posting something horrible but, in practice, it's failing. Badly.

The Leslie Jones example demonstrates one of the reasons why: relying on users to report abuse means, well, users have to report abuse. While some systems, like the Disqus commenting platform, include buttons to flag comments without needing to load up another screen, others make reporting abuse rather more arduous. Instagram, for example, demands that you click through to a commenter's profile before you can report what they've said. Twitter is better than it used to be in this respect: you can now open a drop-down menu on an offensive tweet and hit Report.

A popup window will ask you why you're reporting it, in what specific way it's offensive, and whether it's targeting you or someone else. Then it'll display a list of the user's other recent tweets, and if other tweets are similarly abusive you can add up to a further five tweets to your report. Finally, it gives you the option to mute or block that account, so you won't see any further nastiness from them. If you're doing it once, it's a fairly simple process that should provide Twitter's support team with all the info they need to decide whether or not to remove the account. If you have to do it 20 times, though? It gets a bit more arduous. And if you need to report literally hundreds or thousands of abusive tweets? That starts to feel like an unmanageable burden.
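
To picture what that reporting flow collects, here's a hypothetical sketch of a report as a simple data structure. It's written in Python, the field names are invented, and it's not Twitter's actual API - just an illustration of the information the dialog described above gathers.

```python
# Hypothetical sketch of the data a Twitter-style abuse report might
# gather. Field names are invented for illustration; this is not
# Twitter's real API.

from dataclasses import dataclass, field

@dataclass
class AbuseReport:
    reported_tweet_id: str
    reason: str                     # why you're reporting it
    targets_me: bool                # aimed at you, or at someone else?
    extra_tweet_ids: list[str] = field(default_factory=list)
    mute_account: bool = False      # optional follow-up actions
    block_account: bool = False

    def __post_init__(self) -> None:
        # Mirrors the limit described above: up to five further tweets.
        if len(self.extra_tweet_ids) > 5:
            raise ValueError("at most five additional tweets per report")
```

Filling that in once is quick; the trouble described above starts when you have to fill it in hundreds of times.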

There's another problem, too: one of the trolls' main goals is to silence people they don't like, and reporting systems like the ones used by Twitter, Facebook and Instagram can themselves be weaponised. Trolls file false reports in bulk, wasting the services' time - and sometimes even getting legitimate accounts temporarily suspended.

Given that some variation on the report-abuse mechanism exists virtually everywhere, and yet there's as much abuse posted online as ever, it's pretty clear the approach isn't working. So what else can be done?

Real Names


The core question here is probably this: what is it about the online space that lets people say horrible things they'd never say to someone in person? Over the years, researchers have attempted to figure out what makes the internet such fertile ground for vileness, and the most common conclusion is anonymity. Online, no-one really knows who you are. You can sign up for an anonymous Twitter account in a matter of seconds, and then, it seems, you can say whatever you want to anyone you want, without any fear of repercussions.

So could making people associate their online identity with their offline one help curb the abuse issue? Well, maybe. Several platforms have already tried it. YouTube comments used to be one of the worst cesspools for trolls but, at the behest of its owner Google, you now need to use your Google account to comment there. For many of us, our Google account is our main online identity, used across many different sites and services, so it feels less disposable than a quick sign-up and more real than some other usernames. Nowadays, YouTube isn't quite as bad as it used to be, so maybe it's working.

The system's not perfect, however. You only have to look at Facebook to see that. Ever had an argument with a friend of a friend on Facebook, and been shocked at how vitriolic they were willing to be, right where all their friends and family could see it? Yeah, me too. Sites that use Facebook logins for comments also don't tend to be markedly more civil than sites that use other logins, so it's clear that some people just aren't bothered about signing their names to outright abuse. Plus, as Facebook has also illustrated, it's tremendously hard to make people use their real identities online, and if you try, many people will resent it - some for very good reasons.

You might not mind using your real name online, but if you had a stalker, you might be less keen. And if you lived in a country where the things you say online could land you in real-life trouble, you might need another online ID even if all you're doing is mildly criticising your government. You can probably think of other reasons someone might not want their real name published online, too.

Twitter seems to be thinking about the connection between anonymity and abuse, though. Recently, it changed its verification process. Previously, there was no way to apply for the coveted 'blue tick' denoting you absolutely were the person you claimed to be; all you could do if you wanted one was wait and hope someone on Twitter's team thought you were worthy of the accolade. Now, you can apply. It's still not for everyone - you have to be notable to some degree to get one - but a lot more people are verified than before. And if you are verified, you can choose to filter the tweets you see so that you're only hearing from other verified people.

There are obviously problems there, too, but it's a feature that could at least reduce the amount of trolling that high-profile types - journalists and writers, for example - have to read.

New Tools


Speaking of high-profile types, another recent super-famous target for abuse was singer-songwriter Taylor Swift. After a very public beef with Kanye West and Kim Kardashian, Instagram users started targeting her photos, leaving comments that consisted of nothing more than dozens of snake emojis. Actual fans saw their comments buried under snakes. Then, mysteriously, those comments disappeared. Instagram didn't explain exactly what had happened, but did admit that it's testing new tools for dealing with abuse, and that some users have been given preview access to try them out before they're rolled out to the masses.

Twitter, too, has announced that it's working on something to help filter out abuse. The filtering tool has apparently been in development for at least a year, and works by detecting specific keywords and filtering out tweets that contain them. We won't print any examples, but you can probably think of a few likely candidates.
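
We don't know how Twitter's tool actually works internally, but at its very simplest, keyword filtering looks something like this minimal Python sketch, with placeholder terms standing in for anything we wouldn't print:

```python
# Minimal sketch of keyword-based filtering. The blocklist terms are
# placeholders; a real list would contain the sort of words we won't
# print here.

BLOCKED_WORDS = {"placeholder_slur", "placeholder_threat"}

def is_abusive(tweet_text: str) -> bool:
    """True if the tweet contains any blocked word (exact match only)."""
    words = (w.strip(".,!?") for w in tweet_text.lower().split())
    return any(w in BLOCKED_WORDS for w in words)

def filter_timeline(tweets: list[str]) -> list[str]:
    """Hide flagged tweets from the timeline; keep everything else."""
    return [t for t in tweets if not is_abusive(t)]
```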

Knowing that there are things in development that might help clean up the internet's grimier corners should be reassuring. After all, it wasn't too long ago that offers from Nigerian princes and Viagra salespeople landed regularly in your inbox, but email providers now have the spam issue more or less under control - spam still gets sent, but for the most part you'll only see it if you go looking in your junk folder. Could a similar system work for comments?
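
Email spam filters largely moved beyond fixed word lists to statistical classification, learning which word patterns mark a message as junk, and a comment system could in principle borrow the same idea. Here's a rough sketch using scikit-learn's Naive Bayes classifier - the training comments are invented for illustration, and a real system would need a large labelled corpus:

```python
# Rough sketch of spam-filter-style statistical classification applied
# to comments, using scikit-learn. Training data is invented purely
# for illustration; real systems train on huge labelled corpora.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_comments = [
    "great video, thanks for sharing",       # 0 = fine
    "you are worthless, just leave",         # 1 = abusive
    "interesting point, though I disagree",  # 0 = fine
    "nobody wants you here, get out",        # 1 = abusive
]
labels = [0, 1, 0, 1]

vectoriser = CountVectorizer()
classifier = MultinomialNB().fit(vectoriser.fit_transform(train_comments), labels)

def looks_abusive(comment: str) -> bool:
    """Classify a new comment from its word frequencies."""
    return bool(classifier.predict(vectoriser.transform([comment]))[0])
```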

Again, it's only part of a solution. A lot of websites - including Facebook Pages - have lists of banned words that will lead to posts being automatically removed. It doesn't take long to work out ways around such filters, though, and it's not always words that are the problem: many of Jones' tormentors used images, which are far harder to filter out. Problematic words are also often used innocently in other contexts, and banning them outright can cause problems of its own - the long-running 'Scunthorpe problem', where innocuous text trips an over-eager filter, is the classic example.
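
To make the evasion point concrete, here's a toy Python illustration - again with a harmless placeholder word - of how a trivial character swap slips past exact matching, and how normalising the text claws some of that back. It's an arms race, and none of it touches abusive images:

```python
# Toy illustration of filter evasion and normalisation. "idiot" stands
# in for genuinely abusive terms; real evasions and counter-measures
# are far more varied.

SWAPS = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})

def normalise(text: str) -> str:
    """Undo common character substitutions and strip separator dots/dashes."""
    return text.lower().translate(SWAPS).replace(".", "").replace("-", "")

BLOCKED = {"idiot"}  # placeholder term

naive_hit = any(w in BLOCKED for w in "you 1d10t".split())                  # False
normalised_hit = any(w in BLOCKED for w in normalise("you 1d10t").split())  # True
print(naive_hit, normalised_hit)
```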

Twitter's solution needs to be a lot cleverer than just a swear word filter. It'll have to be super intelligent to differentiate between internet dialect and the kinds of messages designed to bully the recipient.

What Else?


So how can we stop the internet becoming unusable for anyone who doesn't fancy daily abuse? Well, anti-troll software would help. So, too, would clearer guidelines from websites and social networks about exactly what is and isn't acceptable, and what the consequences are, backed up by the will to actually enforce the rules.

Beyond that, it's clear that the solution to the online world's troll problem is more complicated than magic anti-rudeness software. While the internet makes it easier to harass people - both because of the anonymity, and because platforms like Twitter make us far more accessible to far more people than ever before - what we're dealing with here isn't just an internet problem.

The internet is just, to misquote a classic film, people. Maybe once the offline and online worlds were separate, but it's now clear that they're closely intertwined. As a culture, we can't afford to dismiss the issue as being just 'the internet'. It doesn't have to be like this, but people need to know that there are consequences for their online behaviour, just as there are in the real world.

To some extent, we need to examine our own behaviour, and that of the people around us, and question whether it's really okay. We need to hold one another to account, and not let the bullies win. There's no easy answer, because there's never been an easy answer; if there were, there'd never be another war, or another murder, or another child crying at school because they're being teased. But if we don't try, we might as well just give up.

Twitter, Facebook, Instagram and all the rest need to work harder to protect their users (and not just their celebrity ones), but we also need to do a better job of protecting each other.