Tackling Hate Speech: In Conversation With Caitlin Ring Carlson

A communications and media law expert discusses her research and suggests ways to limit the spread of hate speech.
Neo-Nazis prepare to march during the Unite the Right Rally at the University of Virginia in 2017. Source: Wikimedia Commons
By: Zoë Kopp-Weber

Hate speech is ubiquitous and can happen anywhere — in Charlottesville, Virginia, where young men in khakis shouted, “Jews will not replace us”; in Myanmar, where the military used Facebook to target the Muslim Rohingya, calling them dogs and worse; in Cape Town, South Africa, where a pastor called on ISIS to rid South Africa of the “homosexual curse.”

Caitlin Ring Carlson is the author of “Hate Speech.”

Yet defining hate speech is an elusive task, and scholars haven’t come to a clear and shared definition of the concept. Broadly, it’s an expression that seeks to malign an individual for their immutable characteristics, says Caitlin Ring Carlson, an expert in communication and mass media and the author of “Hate Speech,” a recent addition to the MIT Press Essential Knowledge series. “Hate speech represents a structural phenomenon,” Carlson explains in the book’s introduction, “in which those in power use verbal assaults and offensive imagery to maintain their preferred position in the existing social order.” Although legal action and social stigmatization seek to minimize the spread and impact of hate speech, the issue persists both in person and online.

Carlson’s book doesn’t parse specific words and phrases. It instead investigates legal approaches and controversies around the world in order to provide suggestions for limiting the spread of hate speech. These solutions range from the institutional level, where governments and media organizations can act, down to the individual and personal level. In short, Carlson writes, “logging off is not the answer.” Widespread change, specifically on social media platforms, requires vigilance from users, regulators, and institutions.


Zoë Kopp-Weber: You open your book by highlighting the ubiquity of hate speech, yet despite the problems it causes, the term remains expansive and often contested. What about the term makes it so hard for scholars to agree upon a clear definition?

Caitlin Ring Carlson: What makes hate speech so difficult to define is its inherently subjective nature. Phrases, images, and terms that I may see as maligning an individual based on their fixed characteristics, such as race, gender identity, or sexual orientation, may not be seen the same way by others. Intent plays a role as well, and intent is difficult to determine. When slurs are used by members of the group they were originally meant to harm, it is rightly considered a reclamation or reappropriation of the term and, thus, not hate speech, since the intent is not to malign someone.

ZKW: How do you demonstrate that hate speech is a driving force for issues like bias-motivated violence and genocide?

CRC: I don’t think it’s possible to empirically demonstrate that hate speech causes bias-motivated violence. However, a historical analysis of genocide and bias-motivated violence clearly illuminates the relationship between hate speech and these atrocities. There has not been an incident of genocide in recorded history that was not accompanied by discourse seeking to dehumanize the targeted groups. Thus, hate speech creates the ideological conditions for people to act out against members of another ethnic or religious group, for example.

ZKW: How has the U.S.’s position on freedom of expression informed legal responses to hate speech, and how does this compare to the way other countries approach the issue?

CRC: In the United States, we tend to place the right to free expression above other rights. We consider the harm caused by hate speech to be less costly to society than the harm associated with restrictions on our right to free expression, particularly as it relates to political dissent. Only when hate speech crosses the line and becomes a true threat or incitement to violence can it be punished. Interestingly, we have other categories of speech that are excluded from First Amendment protection, such as obscene speech or speech that is injurious to another’s reputation. Hate speech is just not one of those categories.

This approach is vastly different from most other Western democracies, which prohibit hate speech and punish it with fines or jail time. From Canada to the European Union, several countries have laws against expression that incites hatred based on a person’s race, gender, ethnicity, religion, etc. Citizens of these countries tend to place the right to human dignity over the right to free expression.

ZKW: Given its history, Germany’s stringent laws restricting hate speech are not surprising; the country has even become an international leader in shaping standards for online communication and content. What, if anything, from its efforts would be the greatest takeaway for countries like the U.S. as they craft their own regulation of hate speech?

CRC: While the German law NetzDG, which requires social media platforms to remove illegal hate speech quickly or risk substantial fines, is not perfect, there are several lessons we can take from this approach. First is transparency. Part of this law requires large social media companies to create and disseminate reports regarding which content and accounts were removed and why. In addition, this approach serves as a reminder that regulation can and perhaps should be used to motivate social media platforms and other online services to act not only in the best interest of their shareholders but also in the interest of the public.

ZKW: In addition to being hurtful and laying the groundwork for physical violence, how else does hate speech create personal and even political barriers?

CRC: Several scholars, including Danielle Citron and Helen Norton, have argued that the proliferation of hate speech, particularly online, makes it difficult for those targeted to engage in the political process. For example, let’s say there’s a discussion happening on a neighborhood Facebook page about a new City Council ordinance to reduce police funding. It’s easy to imagine how, after posting her opinion, a Muslim woman might be met with a barrage of hate speech calling her names and encouraging her to “go back to her country.” To protect herself from this abuse, the woman leaves the discussion. A week later, when a spokesperson from the neighborhood is invited to speak at a City Council meeting, the Muslim woman’s perspective on the issue is not included or represented in the testimony because she was driven from the page by the vitriolic hate speech she encountered. In the future, the Muslim woman may be far less likely to engage in any online civic discourse for fear of similar attacks.

In terms of personal barriers, Mari Matsuda has, for decades, warned us about what she sees as the most significant potential harm caused by hate speech, which is that those targeted will come to believe in their own inferiority. If children are raised in a world where the public discourse tells them they’re subhuman because they are Black, transgender, or Jewish, they may come to believe that they are less worthy of dignity than other people.

Caitlin Carlson recently hosted an “Ask Me Anything” (AMA) session on Reddit, where she and researcher Luc Cousineau took questions on content moderation, men’s rights groups, hateful subreddits, and more. Read the discussion here.

ZKW: Historically, bias-motivated violence and political dissent have both flourished on college campuses. What complicates higher education institutions’ ability to address hate speech?

CRC: It is difficult for colleges and universities to address hate speech because of the tension between their dual goals of being places where new ideas are considered and places where people live and work. For centuries, students at universities have been asked to wrestle with concepts they disagree with in order to form their own opinions and, eventually, their broader worldview. Professors have been given academic freedom and tenure to explore alternative perspectives, test hypotheses, and speak out on critical public issues without interference from administrators. In so many ways, free expression is integral to higher education.

However, problems arise when that expression, whether from faculty or outside speakers, threatens the physical and emotional safety of students, who in many instances are a captive audience that cannot simply “look away” when a professor or speaker uses an offensive slur or claims one race or gender is inferior to another. Therefore, colleges and universities must engage in the hard work of striking a balance between exposing students to new ideas and creating a community where people feel safe and supported enough to engage with those ideas.

ZKW: Greg Lukianoff and Jonathan Haidt, the authors of “The Coddling of the American Mind,” argue that higher education has incorrectly taught students that they are fragile, emotional beings, creating a culture of extreme safety that leads to intimidation and violence. What do such claims miss about the existence of trigger warnings and safe spaces?

CRC: What’s missing from the argument in “The Coddling of the American Mind” is the students’ perspective, particularly students with historically marginalized identities. In the book, I include a great quote from Mary Blair, a Black woman who was a student at the University of Chicago. CBS News interviewed her and several of her fellow students. In response to another student’s comment about the real world not being a safe space, she said, “I can assure you, all people of color who have existed in a white space know that the real world is not a safe space.”

In my experience, students are not “fragile, emotional beings” but rather mindful, empathetic people who want to be able to engage with controversial ideas in a meaningful and productive way. Along those lines, there is a fundamental misunderstanding regarding the term “safe spaces.” These are not intellectual safe spaces but rather environments where everyone feels comfortable expressing themselves and participating fully without fear of attack, ridicule, or denial of experience. No one is suggesting that students in the classroom avoid or ignore ideas they disagree with. Instead, these tools allow students to engage with those concepts in a respectful and constructive way.

Finally, content or trigger warnings are simply tools that some instructors use to let students know that a sensitive topic or issue is about to be discussed so that students are not caught off guard. Rather than avoiding certain topics, such as sexual assault or bias-motivated violence, altogether, content warnings allow professors to communicate with students about the nature of the upcoming material.

ZKW: Facebook has been a significant subject in conversations regarding the proliferation of hate speech through social media. What responsibility do social media platforms legally have to address these issues, and how could we see these responsibilities change in the coming years?

CRC: In the United States, at least, social media platforms have no legal responsibility to address these issues. As private virtual spaces, they are free to create whatever community standards they want. As users, we agree to these rules when we accept the terms of service that allow us to access the site.

From an ethical perspective, social media companies have an essential role to play in decreasing hate speech in public discourse. However, as publicly traded companies, their first responsibility is often to their shareholders. It seems unrealistic to think that they will take any action detrimental to their bottom line. If, as I suspect, hate speech and other offensive content lead to greater engagement on the platform, it is unlikely that these companies will act differently unless users or advertisers demand it or the government steps in to regulate it.

ZKW: As social media organizations seek to more aggressively remove hate speech from their platforms, in what ways can content moderation be improved, both algorithmically and logistically?

CRC: The algorithms and artificial intelligence used by social media companies to remove hate speech from their platforms have improved a great deal in recent years. Natural language processing allows companies to identify and remove all instances of particular words. However, the algorithms still struggle to identify hate speech when the meaning of a comment or post depends on its context. For example, the phrase “go home b*tch” would not be considered hate speech if posted as a comment on a news story about a football team beating the visiting team. However, it would be considered hate speech if posted to a story about Representative Ilhan Omar’s most recent bill in Congress.
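Carlson’s example makes the limitation easy to sketch in code. Below is a minimal, hypothetical illustration of keyword-based filtering in Python; the blocklist, function name, and sample comments are invented for this example, not drawn from any platform’s actual moderation system. Because a keyword match looks only at the words inside the comment, the two posts she describes are indistinguishable to it.

```python
# Minimal, purely illustrative sketch of keyword-based filtering --
# not any platform's real system. The blocklist, function name, and
# example comments are hypothetical stand-ins.

SLUR_KEYWORDS = {"b*tch"}  # stand-in for a real blocklist


def flags_keyword(comment: str) -> bool:
    """Return True if any word in the comment is on the blocklist."""
    return any(word in SLUR_KEYWORDS for word in comment.lower().split())


# The same words appear in both comments, so a pure keyword filter
# treats them identically -- the context (which story is being
# commented on) never enters the decision.
on_sports_story = "go home b*tch"    # comment on a football recap
on_politics_story = "go home b*tch"  # comment on a political news story

print(flags_keyword(on_sports_story))    # True
print(flags_keyword(on_politics_story))  # True: indistinguishable
```

Closing that gap requires systems that take the surrounding discussion into account when classifying a post, which is exactly where Carlson notes current algorithms still struggle.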

In terms of the logistics of content moderation, moving the process in-house, rather than outsourcing it to firms that offer low wages and problematic working conditions, would improve human content moderators’ efficacy. Dedicating more resources to identifying and removing hate speech (along with disinformation, harassing speech, and nonconsensual pornography) should be a top priority for social media organizations.

ZKW: What don’t we understand about the phenomenon of hate speech? What should future research focus on?

CRC: Right now, we don’t fully understand the various impacts, big and small, that hate speech has on individuals and on society as a whole. I would love to see future research that unpacks the psychological, emotional, and physiological impacts hate speech has on individuals. For example, how are people influenced by hate speech that is about a group they are a member of compared to hate speech directed at them personally? From a structural perspective, we should investigate the role hate speech plays in establishing and maintaining racial and other forms of discrimination and inequality. Future research should also examine the relationship between hate speech and extremism, particularly online. We need to know how, specifically, hateful rhetoric translates into offline violence and whether there are interventions that have been or could be successful.


Zoë Kopp-Weber is a publicist at the MIT Press.

Caitlin Ring Carlson is Associate Professor in the Communication Department at Seattle University and the author of “Hate Speech.”
