By: Yanet Mengistie
Published on September 30, 2022.
Social media has been one of the most influential social forces of the past decade, enabling quick and convenient engagement. Websites like Twitter, Reddit, and Instagram have gone from a social networking pastime to an active part of everyday news and entertainment for most users. As a relatively new medium, social media gained a massive amount of influence quickly, leaving creators scrambling to keep up with follower counts and social trends.
One common issue that many social media platforms struggle to control is the toxicity and hate speech surrounding relevant topics. Online trolling and hate campaigns that target both verified and unverified users have become normalized. Extreme negativity is treated as an inevitable part of the online world, and toxic posts are overlooked and dismissed as mere trolling.
The platforms' creators have taken the same approach of overlooking the issue. As these problems go unaddressed, online bullying is given more room to fester, forming a culture of hate that reproduces the social discrimination we see in real life, only on a widespread public scale. This concentration of antagonism has been felt particularly by racialized groups.
One example is the case of Leah Sava Jeffries, a young Black teen actress cast as Annabeth Chase in the new Disney+ television adaptation of the Percy Jackson book series.
In the books, the character is a young White girl with blonde hair and gray eyes, features very different from the actress’s own. When the casting news was released, Jeffries received hate from people who felt a White actor should play the role. The wave of negativity overwhelmed Jeffries’ TikTok account, causing the app to ban it. Earlier forms of social media, such as Myspace, did not spark this kind of racially motivated backlash against a person’s work. The power of today’s social media to foster acceptance of hate can therefore be dangerous.
What is online hate culture?
Australia’s eSafety Commissioner, an organization working on behalf of the government to research the most effective methods of online safety, defined “online hate” as “hateful posts about a person or group based on their race, religion, ethnicity, sexual orientation, disability or gender.”
Online hate is not just a critique of a person’s opinions or actions online. Rather, it is criticism that targets an unchangeable part of someone’s identity instead of engaging with their opinions or actions themselves. Online hate culture creates groups on social media sites that spread hatred towards marginalized users.
Some may think these hateful groups are on the fringe of social media sites, but this is untrue. The eSafety Commissioner found that Indigenous peoples and members of the LGBTQ2S+ community experience online hate speech at more than double the national average.
The problem is compounded by social media algorithms, which connect hateful individuals to one another and steer them toward these groups. Sites like Facebook and Instagram want to keep users hooked, so they construct algorithms that keep displaying content a user already responds to in order to keep that person deeply engaged. This allows bullies to find like-minded people quickly. The same mechanism can be used for good, since people with shared interests also find each other, but the connection of hateful users is a dangerous consequence of these algorithms.
Channels of hate speech online can allow users to organize and orchestrate hate in real life. For instance, seeing someone they agree with make racist comments online can influence others to join in. Online hate is about more than a few racist tweets or comments; it is about the traction these hateful posts gain and their ability to spread to large audiences around the world. Such posts are especially harmful because they usually target marginalized individuals, amplifying the hate and discrimination those people already experience in real life.
Twitter, the microblogging service that debuted in 2006, is one of the world’s most popular social media sites, and that popularity has made it a major home of online hate culture. Twitter has become a breeding ground for unwarranted criticism and hostility because it allows users to create fake accounts without revealing their real identities.
Twitter is an app that allows users to share exciting, disappointing, or random updates with their friends. It also allows users to express their opinions on prevalent topics. However, the option to create an anonymous account gives users the opportunity to vocalize negative thoughts in disguise, creating a toxic environment without repercussions. If a fake account is flagged or reported, the user can easily create another one to continue the cycle.
Twitter has a hateful conduct policy which states that users cannot promote hate speech targeting a person’s identity; however, the platform’s inherent anonymity and lack of monitoring negate the policy’s power. A former CEO of Twitter acknowledged this, stating that the company could not address “simple trolling,” let alone hateful identity-based attacks.
As Twitter’s user base grew into the millions, users began to see less of their friends and family and more of influential public figures like celebrities, news stations, and politicians. Twitter has been stripped of its original purpose of sharing harmless thoughts with friends online. In the words of one user, it has become a “vehicle for sexist and racist harassment,” with Black women being 84 percent more likely to receive abusive tweets.
Online harassment is also trickling into the real world by fueling a rise in hate crimes. Using artificial intelligence software, a New York University team analyzed over 500 million tweets posted between 2011 and 2016 in several cities in the United States. Cities with a high number of discriminatory tweets also had higher numbers of hate crimes.
As users continue to like or retweet hateful posts, accounts that encourage hate speech will only feel emboldened.
Reddit, a forum-based social media site founded in 2005, has become a haven for online hate culture. It allows users to form subreddits where they can post text, pictures, or videos, leaving room for other users to comment, start discussions, or debate.
Reddit relies on moderators, also known as mods, to organize individual subreddits and enforce rules that commenters must follow. Mods regulate bullying and harassment, but on occasion Reddit itself will intervene and shut down a subreddit if its contents begin to violate the site’s Content Policy. This policy states that users or subreddits that encourage “hate based on identity” will be banned.
Many individual subreddits post portions of the content policy in their descriptions as a warning to commenters looking to post hateful content. Yet it was only in 2020, after an influx of hate groups with thousands of members using the site, that Reddit updated its policy to explicitly add hate speech to the list of banned content.
In 2013, a network of subreddits called the Chimpire spread racist rhetoric about Black individuals. In its time, the network had accumulated over 552,829 subscribers. Users skirted Reddit policy to avoid punishment; one subreddit’s description warned users to avoid mentioning violence so as not to get caught, while freely using other racist terms and phrases.
Complicating this situation, before 2020 Reddit executives had said that racism was not explicitly against the site’s rules. Instead, they believed that racism, while not condonable, should be combatted through dialogue between Reddit users.
In 2018, Reddit CEO Steve Huffman said that the “best defense against racism and other repugnant views, both on Reddit and in the world, is instead of trying to control what people can and cannot say through rules, is to repudiate these views in a free conversation.” This was a dangerous conclusion: the website had already amassed a racist network with over 552,829 subscribers whose posts went largely unchallenged, giving like-minded hateful people the opportunity to congregate and spread their discourse.
Social media’s acceptance of hate speech must be addressed, especially as it targets minorities and racialized groups. Platform creators need to stop blaming hateful content on trolls or individual radicals and instead take systemic action against online hate. If they do not, online hate will continue to cause real-world harm.
Yanet Mengistie is an experienced Writer, Researcher and Creative who is ready to hit the ground running with Black Voice. Driven by having previously worked as a Content Writer for a company that sought to uplift small businesses in Northern Canada, she takes joy in using her writing to uplift small or marginalized voices. As a Writer with Black Voice, her goal is to combine this passion for small businesses with this publication's mission of empowering Black individuals across Canada. Yanet is committed to ending the marginalization of Black Canadian perspectives and opinions. She hopes to bring Black excellence, concerns or hot topics to the forefront through her work with Black Voice.