Wonder and Woe: The Challenges of the Internet
U-M School of Information professor Cliff Lampe explores bad internet behaviors in a free workshop, How to Become an Internet Troll, offered alongside UMS’s presentation of The Believers Are But Brothers.
The internet is filled with wonders. Really. It is filled with beauty. Tools like social media have helped people find love, reconnect with long-lost relatives, and maintain distant relationships. People have found others like themselves when their physical neighbors had excluded them, have built massive works of collaborative art, and have learned about people and places outside of their immediate experiences.
However, the internet is also filled with horrors. It is filled with monsters. Computer tools can’t tell whether a person is seeking out someone with the same medical problem to share emotional support, or seeking out someone with a shared hatred of a group of people. Interactions on the internet take place within an architecture in which specific design features “afford” a variety of actions. For example, the feature that allows you to share your photo on a social media site affords control over how you express your identity. You can show your face, or share a picture that is intended to deceive others. These features of computer-mediated communication mean that online interactions create new opportunities for benefit as well as for harm.
There is a dizzying array of bad behaviors that happen online, usually with colorful labels that only the internet could generate. Trolling, flaming, brigading, spamming, redpilling, doxxing, and more are all bad behaviors in which people engage. Some of these have been with us since the beginning of online social interaction. For example, “trolling” is saying something (usually deceptively naïve or aggressive) to elicit angry responses from an audience. The term itself relates to the fishing method — not the mythical creature — and the behavior has been around since the early 1980s, when Usenet was a primary mode by which people interacted in online communities.
In my work, I typically break adversarial online interactions into two main categories: those that target individuals, and those that target a group. The bad behaviors aimed at a specific individual can be devastating. Cyberbullying has caused emotional distress, trauma, and even death among adolescents. Women and people of color have been especially vulnerable to threats and intimidation from online harassers, just as they are more likely to be targets of harassment in every other context. Actor Leslie Jones had to leave social media after a coordinated effort was made to harass her on Twitter. This type of coordinated action is known as “brigading,” in which many harassers plan an assault on a person using multiple channels and multiple types of attack. Another common online attack is “doxxing,” in which personal information ranging from home addresses and phone numbers to financial records and intimate photos is obtained, both legally and illegally, and shared with a broad audience. There are hundreds of variations of targeted harassment like this. While it is tempting to blame these attacks on a small group of bad actors or “trolls,” research has shown that almost anyone can become a harasser online. When provoked to anger, people often lash out, and that lashing out often takes the form of harassment.
Adversarial online interactions that target groups are just as harmful as individual attacks, but the goals are often very different. Where an individual may be harassed for revenge, to prove a point, or to signal a virtue, group harassment often has a more specific goal in mind. A familiar example is how ISIS used social media to recruit sympathizers and convert them into active supporters. There, the message was sent to a large audience with the anticipation that most people would be hostile to their goals. But they weren’t trying to win over most people; they were trying to speak to the few who harbored similar resentments and fears, and to catch them in the net. This strategy is also common among hate groups in the US, which use social media to plan, create, and launch sophisticated recruitment campaigns. Whether a group’s goal is misogyny, white nationalism, or religious extremism, the methods remain the same: content that mocks the opposition builds strong group affinity among sympathizers and establishes a trail of media sites that leads to ever more extreme beliefs. This process is known as “redpilling,” named after the scene in The Matrix in which the protagonist takes the red pill to learn the harsh truth about a false reality. It’s really just radicalization that takes advantage of features of social media that hide identity, allow for creativity, and evade suppression.
Another attack on groups comes in the form of the misinformation and disinformation campaigns currently surrounding elections around the world. Different groups that share the goal of disrupting free and fair democratic elections are using online tools to create false identities, news sources, and online groups, with the aim of sowing dissension and getting us to question the nature of a shared truth.
Most of these behaviors are not new. They have been occurring in online spaces for decades, and among humans broadly for thousands of years. What’s new is how important mediated interactions have become for us as a whole, and how unprepared we are for people who break the rules using the features of online environments. Even so, I still think the juice is worth the squeeze when it comes to the internet. If we work on solving these problems of adversarial interaction, we can increase the wonders we experience. We will never entirely get rid of adversarial interactions, but we can support the people who suffer from them and do our best to improve the internet overall.
Cliff Lampe is a professor in the U-M School of Information. His research is on how computing environments interact with social processes. For that work, he’s looked at how social motivations affect participation in online communities like Wikipedia, the psychosocial value people get from social media platforms like Facebook, and how features can be used to regulate social behavior on sites like Reddit. While much of his work has focused on the positive aspects of online interaction, recently he has been studying how the features of online systems propel hate speech, disinformation, partisanship, and harassment. He publishes in the fields of computer science and communication.
Glossary of Internet Slang Terms
4chan / An anonymous imageboard website from which many popular memes emerge
Cuck / A term popular on the alt-right corners of the internet used to describe a man who is weak, effeminate, or submissive
Dabiq / An online magazine used by the Islamic State of Iraq and the Levant for radicalization and recruitment
Doge / A comically misspelled form of “dog,” associated with photos of a Shiba Inu that went viral in 2010
Doxxing / Searching for or publishing private material about another person on the internet with malicious intent
Gamergate / A 2014 harassment campaign that targeted women in video game culture, through which 4chan came to the attention of the mainstream media
KEK / An ancient Egyptian god with a frog’s head, dubbed the god of chaos on 4chan
Pepe / An anthropomorphic cartoon frog popular in memes that has become associated with the alt-right movement
Red Pill / A metaphor emerging from the 1999 movie The Matrix, in which the red pill represents the harsh truths of reality
Troll / A person who instigates quarrels on the internet by posting inflammatory or digressive statements, content, or material