Trolling in online conversations is a significant problem.
A recent Pew Internet survey suggests that four in ten Americans have experienced online harassment in some form, and many in the public look to platform owners to better police their platforms as the most effective way to address it.
However, trolling is challenging to study because it is difficult to define precisely. Trolling is an overloaded term, with multiple definitions ranging from the more specific (e.g., acting undercover to undermine a position), to the broader (e.g., behavior undesirable to a community).
What is trolling?
Each individual might have a different perspective on and notion of what trolling means. We don’t all have to agree on which definitions are “right”, yet it is worth exposing the faceted nature of this phenomenon.
At the broadest level, online negative behavior encompasses a wide range of activity, including trolling, harassment, flaming, and (cyber)bullying. Such behavior comprises a substantial fraction of user activity on many web sites (including this one), with four in ten Americans having experienced online harassment according to a Pew Internet survey conducted last year.
Negative behavior relates to the psychological concept of aggression, or hostile behavior towards others. Aggression can be verbal (e.g., an insult), physical (e.g., a punch), or relational (e.g., manipulating social standing); it can be reactive (i.e., in response to provocation) or proactive (i.e., used to achieve a goal).
But perhaps the earliest work on trolling started with analyses of Usenet newsgroups and discussion forums, and emphasized intent and deception as essential to the definition of trolling. Trolls were defined as those who “disrupt[ed] a group while remaining undercover”, “lur[ed] others into pointless and time-consuming discussions”, or “[took] pleasure in upsetting others”.
These definitions were motivated by studies of notable individual trolls, including “Ultimatego”, who trolled other users in a wedding-related Usenet newsgroup by being condescending; “Macho Joe”, who attacked the Melrose Place newsgroup by posting homophobic content; and “Kent”, a hostile male participant who attacked members of a feminist forum.
More recently though, definitions of trolling have substantially broadened, perhaps owing to increased interest, awareness, and broader mainstream use. Trolling has been defined as behavior having “harmful and disruptive effects”, “engaging in negatively marked online behavior”, “not following the rules”, or where one “makes trouble” for a discussion forum’s stakeholders.
Trolling can be broadly defined as behavior that “falls outside acceptable bounds defined by [a] community”. I find this a nice default because it means that we, as researchers or observers, are making minimal value judgments as to what trolling is — we simply observe how a community reacts to undesirable individuals! On a practical level, this involves measuring negative signals from a community such as user bans and reports, post hides, or downvotes.
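To make this operational definition concrete, here is a minimal Python sketch of labeling a post as trolling purely from community reactions rather than from its content. The field names (e.g., `hidden_by_moderator`) and thresholds are hypothetical choices for illustration, not from any particular platform:

```python
from dataclasses import dataclass

@dataclass
class Post:
    """A post together with the community's reactions to it."""
    author: str
    downvotes: int
    upvotes: int
    reports: int
    hidden_by_moderator: bool

def is_community_rejected(post: Post,
                          report_threshold: int = 3,
                          downvote_ratio: float = 0.75) -> bool:
    """Label a post as trolling using only community signals:
    moderator hides, user reports, or a strongly negative vote ratio."""
    if post.hidden_by_moderator:
        return True
    if post.reports >= report_threshold:
        return True
    total_votes = post.upvotes + post.downvotes
    return total_votes > 0 and post.downvotes / total_votes >= downvote_ratio
```

Note that nothing here inspects the post’s text: the community’s reaction, not the researcher’s judgment, decides the label.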
Motivations for trolling
Trolls can differ significantly in their motivations, which likely affects what they end up doing:
- They are attention-seeking, e.g., through being offensive or expressing an extreme POV
- They want to be offensive
- They want to play devil’s advocate
- They want to make a certain position appear more popular than it is
- They want to discourage others from participating
Similarly, there are also different potential dimensions along which we may classify trolls:
- “For fun” vs “With an agenda”
- Individual vs Coordinated
- Paid vs Unpaid
- Cynical vs Earnest
- Anonymous vs Identified
- Political vs Non-political
- Civil vs Not
- Proactive vs Reactive
- Overt vs Covert/Deceptive
Considering two of these axes, individual/coordinated and “for fun”/“with an agenda”, we can identify four main types of trolling behavior:
- Individuals trolling for fun: people who enter a conversation to either offend and attack other participants, or post hateful content related to the subject of the post.
- Individuals with an agenda: people with a political agenda who try to steer every discussion toward their position.
- Coordinated actors trolling for fun: groups who coordinate to attack someone.
- Coordinated actors with an agenda: covert coordination to disseminate an opinion and make it appear more popular than it is.
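One way to make this two-axis classification concrete is as a small data structure. The sketch below simply enumerates the cross-product of the two axes; the names and one-line descriptions are my own paraphrases, not terminology from any particular study:

```python
from enum import Enum

class Actor(Enum):
    INDIVIDUAL = "individual"
    COORDINATED = "coordinated"

class Motive(Enum):
    FOR_FUN = "for fun"
    WITH_AGENDA = "with an agenda"

# The four types of trolling behavior are the cross-product of the two axes.
TROLL_TYPES = {
    (Actor.INDIVIDUAL, Motive.FOR_FUN): "offend or attack participants for amusement",
    (Actor.INDIVIDUAL, Motive.WITH_AGENDA): "steer every discussion toward a position",
    (Actor.COORDINATED, Motive.FOR_FUN): "groups coordinating to attack someone",
    (Actor.COORDINATED, Motive.WITH_AGENDA): "covertly inflate an opinion's popularity",
}
```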
Is trolling innate?
It is also useful to understand how trolling arises so that we can begin to think about how to manage such behavior on social networking sites.
For instance, is trolling behavior innate, or is it situational? Short answer: it is both.
Nonetheless, a majority of prior work on trolling tends to argue that trolls are born, not made. This is perhaps implicit in the unusual case studies examined in early work on online trolling.
Some of this argument may stem from studies of how biological differences affect aggressive tendencies: men have different aggressive styles from women and are more likely to express aggression physically; antisocial and aggressive individuals tend to have lower arousal, and purposefully seek excessive stimulation.
One recent line of work has looked at the relation between trolling and personality traits, in particular the dark tetrad traits (narcissism, Machiavellianism, psychopathy, and sadism). In other words, trolls tend to be less empathetic people who find enjoyment in causing others physical and psychological pain.
Is trolling situational?
Is that all there is to trolling? Studies of influence and norms suggest that trolling behavior can spread from person to person. Negative social norms (e.g., littering) can propagate even when they are undesirable to the individuals perpetuating them. Experiments and studies have also shown that negative emotions and behavior can be transferred from person to person.
Environmental factors can also lead people to become more aggressive. Being exposed to pain or discomfort increases aggression: violence increases on days with higher temperatures, as does car honking.
Finally, there’s also an effect of negativity bias — negative events have a greater impact on individuals than positive events of the same intensity. In other words, negative incidents are more likely to be remembered and more likely to influence one’s future behavior.
These findings on influence, environment, and negativity bias together point toward a “Broken Windows” hypothesis, i.e., that untended behavior can lead to the breakdown of a community.
What does all that mean?
Putting these two points of view together, we can say that most trolling may be more situational than innate. In other words, not only can negative mood and the surrounding discussion context prompt ordinary users to engage in trolling behavior, but such behavior can also spread from person to person within discussions and persist across them, spreading further through a community. Recent work also shows how discussions can be positively influenced: changing comment ranking to surface higher-quality comments results in better responses from both high- and low-quality commenters.
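The ranking intervention mentioned above can be sketched in a few lines. This is a toy illustration: the quality score here is a made-up upvote ratio, standing in for whatever quality model a real platform or the cited study would use:

```python
def rank_comments(comments, quality):
    """Order comments so the highest-quality ones appear first,
    setting a more positive tone for subsequent repliers."""
    return sorted(comments, key=quality, reverse=True)

# Toy quality score: fraction of votes that are upvotes.
def upvote_ratio(comment):
    total = comment["up"] + comment["down"]
    return comment["up"] / total if total else 0.0

comments = [
    {"text": "hostile jab", "up": 1, "down": 6},
    {"text": "thoughtful reply", "up": 8, "down": 1},
]
ranked = rank_comments(comments, upvote_ratio)  # thoughtful reply first
```

The design point is that the intervention changes only presentation order, not moderation: no comment is removed, yet the first thing a new participant sees is the best of the thread rather than the worst.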
Still, this isn’t to say that there is no need to specifically target the “worst offenders” (i.e., the psychopathic, sadistic trolls) just because anyone can troll. In fact, finding these trolls matters precisely because they are likely to have an outsized impact on a community by triggering cascades of trolling. Rather, these findings suggest that we should also consider changes to the design of commenting systems, because anyone can become a troll.