Online abuse in gaming and over social media is widespread and severe. Rick Lane investigates why it happens, the damage it inflicts and how it can be stopped
It’s common knowledge that the Internet has a nasty side. From petty and vindictive arguments in comments sections to flaming in online games, the Internet can be a hostile and volatile place. Indeed, a whole lexicon has arisen around the subject of how to avoid the worst excesses of interacting online, such as ‘don’t read the comments’ and ‘don’t feed the trolls’.
The ugly side of the Internet is viewed as part of its essential nature, an unfortunate but necessary evil in its capacity as a free and open platform. Sure, someone swearing at and insulting you in a comment isn’t very nice, but it boils down to words on a screen from someone thousands of miles away; how hurtful can that be?
The truth is – very. What’s often dismissed as a general attribute of the Internet in fact comprises numerous specific problems: abuse, harassment and cyberbullying.
‘I don’t really understand why they do it, to be honest,’ says Holly Brockwell, editor of Gadgette, a technology website designed specifically for women, a fact that alone makes her a frequent target of misogynists on social media. ‘You can see the things that anger people are women having an opinion, women having a platform. Starting my own business seems to have annoyed them.
‘Having opinions on my body and autonomy over my body seems to have annoyed them … they seem to find it really threatening.’
Women are particularly targeted by online abusers and harassers, both in the amount they receive and its severity, and it seems to be worse if they have a significant online presence and are outspoken about the issues with which they have to deal. Brockwell counts at least three instances in the last 12 months when online misogynists have lashed out against her.
The most recent was over the shutdown of the controversial app Stolen, which let users buy and sell people’s Twitter accounts like trading cards. Some individuals attributed the app’s closure to Brockwell, who had recently interviewed its creators and, ironically, highlighted its potential to be used as a tool for harassment.
Brockwell points out that, when the abuse happens, it is akin to a torrent. ‘When it comes in waves like that, it goes everywhere. I get it in my personal email, I get it on my website, I get it in my work email, work Twitter, my personal Twitter, my Facebook, my Instagram, my LinkedIn – it feels like it’s pouring through the windows. You can’t get away from it. The only thing you can do is not go online, and that’s not really very fair.’
Brockwell describes most of this abuse as ‘stupid’; general insults that she personally finds not too hard to brush off, although that doesn’t make it any less unpleasant. The insults include ‘I hope you die’, and sexual or misogynist slurs, such as ‘whore, slut, bitch’. Occasionally, though, something especially insidious slips through the net. In one example that wasn’t triggered by anything specific, a harasser not only instructed Brockwell to kill herself, but referred directly to the suicide of her father.
‘It had a picture of the way my dad died,’ Brockwell says. ‘That guy has gone to the effort of making a new Twitter account specifically. He’d obviously looked into what happened to my dad, and he put the picture behind a bit.ly link so that he could see when I’d seen it.’
Brockwell’s experience may sound extreme, but it seems fairly typical of the experience of many women with even a moderate online presence. Abusers don’t simply hurl insults; they research your background, where you live, your friends, relations or partners, everything they can find about your personal life. It isn’t solely women who are targeted, of course, although misogynistic abuse online appears to be particularly endemic.
Why does online abuse happen?
So why does this happen? What drives the individuals behind online abuse to do it, and what effects does it have on those on the receiving end? The answer is both surprising and important. ‘It’s no different from any other form of abuse or bullying,’ says Phil Reed, professor of psychology at Swansea University.
Reed specialises in several areas, including online addiction and autism. ‘The characteristics of the people who abuse – in all sorts of ways – other people on the Internet are very similar to those whom you’d see in everyday life.’
Reed refers to a psychological model known as the Dark Triad, which comprises narcissism, Machiavellianism and psychopathy. In other words, a powerful sense of self-regard, a need to manipulate other people to service your own ends, and an inability to empathise with others. ‘If you have those together, you’ve got the kind of seeds of someone who can be quite abusive and manipulative.’
Online abuse is a particular problem because of how the Internet facilitates it. ‘If you go there [the Internet] with an impulsivity problem,’ says Reed, ‘that’s great for impulsive people because you get your answers straightaway … you get back what you put in, and if you tend to be impulsive to start with, you’re going to get more impulsive. If you’re aggressive to start with, you’re going to get more aggressive.’
A commonly cited problem with online abuse is how the Internet enables people to act anonymously; an abuser who knows they’re less likely to be caught in the act is only going to find this aspect encouraging. However, the issue is a little more complex – the removal of any kind of face-to-face interaction on both sides compounds the likelihood of abuse. It makes it easier for aggressors to abuse and harass, and it makes people at a higher risk of being targeted more likely to suffer abuse.
As an example, Reed refers to individuals with autism. ‘They use the Internet an awful lot for communication purposes because it cuts out all of that nasty, messy face-to-face stuff they don’t understand. However, because of their social problems and understanding of social responses, it’s quite a worrying area for them because they take things literally. For example, if a troll or abuser gets into a chatroom for people with autism and says something nasty to one of them, they can struggle to understand this event in a broader social context.’
Similarly, individuals who suffer from mental illnesses, such as depression or social anxiety, also tend to have a higher than average Internet use, as again it enables them to communicate with other people and discuss their own struggles in an environment that’s physically more secure for them. But this also raises the likelihood of their being attacked. ‘Because of their underlying depression, they’re very vulnerable to negative comments,’ says Reed. ‘And sadly, we’ve seen instances where people have decided to take their own lives because of things that people have said to them.’
In short, online abuse is often perpetrated by the same types of people as real-life abuse, and its effects should be taken just as seriously. The Internet makes the difference in that it increases the likelihood that abuse will occur, and also makes it harder to stop. The open and dispersed nature of the Internet means responsibility falls in the gaps between government, corporation and individual, while the ease with which abusers can hide makes effective policing extremely difficult.
Tackling the problem
So how can the problem be tackled? Firstly, it’s important to recognise the complexity of the issue. That doesn’t mean the difficulty of finding a solution, or arguments for and against more effective policing, but the fact that there are many types of abusive behaviour that occur online, and that different areas and communities have different problems. In breaking down the issue and addressing diverging areas specifically, it becomes easier to identify and implement effective solutions.
That’s the approach of the Cybersmile Foundation, a non-profit organisation dedicated to tackling cyberbullying. Originally founded by parents whose children had been victims of cyberbullying, Cybersmile tackles all forms of digital abuse. However, last year, Cybersmile opened a new section of its website dedicated specifically to tackling abuse that occurs in gaming, meaning behaviour such as flaming and raging in online games.
"The gaming communities comprise a unique area when you look at harassment and abuse, and howit’s defined,’says Dan Raisbeck, co-founder of Cybersmile. To begin with, abuse in competitive multiplayer games is usually entirely reactive, and rarely transitions into the more malicious, persistent cyberstalking seen on social media.
Game rage
In addition, many of the toxic elements of competitive online gaming have been considered a natural part of that intense environment for many years. ‘This is how they want the game to be played,’ Raisbeck explains. ‘This is how they want to interact with each other and it does become, for many gamers, this sort of constant narrative. What others would call maybe “flaming” or “raging” or whatever, to each other, is part of the adrenaline rush of playing these games to these people.’
Cybersmile’s approach to combating this problem is primarily educational, attempting to make everyone involved aware of the problems and encouraging them to seek solutions. It approaches the gaming community directly, working with spokespeople such as professional Counter-Strike player Stephanie Harvey to raise awareness of these issues. ‘We find that coping strategies emerge through the users themselves who’ve had experience, and you’ll find a lot of interesting videos from streamers and gamers about controlling rage online,’ says Raisbeck.
Cybersmile also helps individuals who have been the targets of in-game abuse by providing support and information about how to better protect themselves while playing online. ‘Engaging with the community does come with risks sometimes. It’s worth doing your homework, researching the game, getting involved in seeing what support they have, seeing what the reporting procedures are like, asking whether your personal details are being stored correctly, and so forth.’
Lastly, Cybersmile liaises with the developers themselves to discuss the problems within their communities. ‘We find that a lot of them are trying to take steps to manage this problem,’ says Raisbeck. A good example is Riot Games, creator of the enormously popular League of Legends. Games such as League of Legends are notorious for their toxic communities and, as a result, Riot now employs a team of designers whose job is to analyse data regarding what goes on in the game’s chat system, and create machine-learning algorithms that reward positive behaviour and punish rage and abuse. As of September 2015, it was reported that 92 per cent of players who had been caught by the system using abusive language hadn’t reoffended.
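Riot hasn’t published the internals of that system, but the basic loop it describes (score each game’s chat, escalate penalties for repeat offenders, reward sustained good conduct) can be illustrated in miniature. The Python sketch below is purely hypothetical: the flagged-phrase list, penalty ladder and thresholds are invented for illustration, and a production system would use trained machine-learning classifiers rather than simple keyword matching.

```python
# Toy sketch of a behaviour-scoring loop for in-game chat.
# Purely illustrative: the phrase list, penalty ladder and thresholds
# are invented, not Riot's; a real system would use trained classifiers.

from collections import defaultdict

FLAGGED_PHRASES = {"uninstall", "kill yourself", "trash"}

PENALTY_LADDER = ["chat restriction", "ranked ban",
                  "14-day suspension", "permanent ban"]

def toxicity_score(message: str) -> int:
    """Count flagged phrases in a message (stand-in for a real classifier)."""
    text = message.lower()
    return sum(phrase in text for phrase in FLAGGED_PHRASES)

class BehaviourTracker:
    def __init__(self):
        self.offences = defaultdict(int)     # player -> confirmed offences
        self.clean_games = defaultdict(int)  # player -> flag-free games in a row

    def review_game(self, player: str, chat_log: list) -> str:
        """Score one game's chat, escalating penalties or rewarding conduct."""
        if any(toxicity_score(msg) > 0 for msg in chat_log):
            step = min(self.offences[player], len(PENALTY_LADDER) - 1)
            self.offences[player] += 1
            self.clean_games[player] = 0
            return PENALTY_LADDER[step]
        self.clean_games[player] += 1
        if self.clean_games[player] % 10 == 0:
            return "honour reward"  # sustained positive behaviour
        return "no action"

tracker = BehaviourTracker()
print(tracker.review_game("player1", ["gg", "uninstall, trash"]))
# -> 'chat restriction'; a second flagged game would return 'ranked ban'
```

The key design choice is the escalating ladder: a first offence costs only chat privileges, giving players the chance to reform that Riot’s 92 per cent figure suggests most of them take.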
With the right motivation and a little ingenuity, it’s possible for companies whose products are community-reliant to cultivate friendlier environments. It requires motivation, though, and when it comes to social media, the desire for change doesn’t seem to be as urgent.
‘So far, nothing I’ve ever reported to Twitter or Facebook has been judged as breaking their rules,’ says Brockwell. ‘Eventually, accounts have been suspended when I’ve kept up about it and I wouldn’t leave it alone,’ she says, but she gets little support when she first ‘sends a report and says “This guy has created an account …”’ Brockwell points out that ‘it’s IN their rules – you can’t specifically harass people and make an account just to have a go at somebody, but when you do exactly that nothing happens.’
Brockwell believes that a significant part of the problem is that it’s simply not possible to understand just how being repeatedly abused online by a large number of people affects your life until you’ve experienced it directly, and that goes for many of the founders and managers at these social media outlets. ‘They keep guessing, and that’s why they keep messing up. They don’t know … if they actually spoke to some people, and got some people in who had experienced this stuff, and knew what it was like – and I’m sure any woman online would be able to help them out with that – they might actually make some difference.’
Another potential solution is simply to remove the ability for individuals to be anonymous – you could still maintain anonymity to the public, to prevent stalking, but the social media platform itself could bind accounts to specific people and restrict them to a limited number of accounts. However, Brockwell sees such a situation as unlikely. ‘They want people to make tonnes of accounts because it looks good for their user numbers – they’re not going to do that, basically.’
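In sketch form, the kind of scheme Brockwell describes is straightforward: public handles stay pseudonymous, but each one is bound internally to a verified identity, and each identity is capped at a small number of handles. The Python below is a hypothetical illustration only; the class names and the cap of three accounts are invented, and no platform is known to implement exactly this.

```python
# Hypothetical sketch: handles stay pseudonymous in public, but each is
# bound internally to one verified identity, with a hard cap per person.
# Class names and the cap are invented for illustration.

MAX_ACCOUNTS_PER_IDENTITY = 3

class AccountRegistry:
    def __init__(self):
        self.handles = {}      # public handle -> verified identity id
        self.identities = {}   # verified identity id -> set of handles

    def register(self, identity_id: str, handle: str) -> None:
        """Create a new handle, enforcing the per-identity cap."""
        owned = self.identities.setdefault(identity_id, set())
        if len(owned) >= MAX_ACCOUNTS_PER_IDENTITY:
            raise ValueError("account limit reached for this identity")
        if handle in self.handles:
            raise ValueError("handle already taken")
        owned.add(handle)
        self.handles[handle] = identity_id

    def suspend_identity(self, identity_id: str) -> set:
        """A sanction lands on the person: every handle they own goes down."""
        owned = self.identities.pop(identity_id, set())
        for handle in owned:
            del self.handles[handle]
        return owned

registry = AccountRegistry()
registry.register("gov-id-42", "@throwaway1")
registry.register("gov-id-42", "@throwaway2")
print(registry.suspend_identity("gov-id-42"))
# -> {'@throwaway1', '@throwaway2'}: both accounts suspended together
```

The point of such a design is that sanctions land on the person rather than the handle: making ‘a new Twitter account specifically’ to harass someone would either hit the cap or go down with the abuser’s other accounts, while the public never sees anything but the pseudonym.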
Of course, it’s important to protect freedom of speech on the Internet, but there’s a difference between stating an opinion and dishing out threats and harassment – activities that would result in restraining orders outside the online world, but appear to be acceptable online.
Social media platforms are also financially reliant on the communities they foster, and they should have a responsibility to protect the individuals within that community. Professor Phil Reed likens it to a feudal society. ‘The lords and barons controlled the world, but there was a phrase that went along with that called “Noblesse Oblige”, which came with some responsibilities.
‘Internet companies, broadly defined, have changed our world. But I see no sign of them taking on the responsibility that comes with that power,’ he concludes.