In recent years, censorship on social media has risen sharply, sparking debate about how necessary it truly is. Social media companies need to revise their approach to content moderation, because it often does more harm than good. While censorship can limit misinformation, it also restricts free speech, and posts containing controversial opinions are frequently censored by mistake.
Social media companies often operate on vague and unclear censorship guidelines. For instance, X, formerly known as Twitter, states that it does not remove potentially offensive content, but that “targeted abuse or harassment may constitute a violation of the X Rules and Terms of Service.” The problem with these guidelines is that they never define what targeted abuse actually looks like. Companies need to explain exactly how they moderate content and censor posts, because users cannot be expected to guess what is and isn’t acceptable.
Along with the issue of unclear policies, the online format itself makes it harder to interpret the meaning behind a post, leading to more content being censored.
“[The tone] depends on what you say and how you say it, but when you read something, it doesn’t always convey tone,” Lambert English teacher Mr. VanTreek explained.
Online text can be read in different ways. A post could come across as rude or insulting when it was meant to be sarcastic and light-hearted, so without knowing the intended tone, it becomes difficult to ascertain the purpose of a post. With such a wide range of possible interpretations, online content can be flagged even when it was never meant to be offensive.
Additionally, social media companies often rely on machine-learning algorithms and fact-checkers to censor content. Automated moderation frequently leads to unfair censorship, as artificial intelligence may not be able to distinguish between controversial opinions and posts that truly deserve to be removed.
“It’s really hard to track and trace everything that’s going to go on a social platform,” Mr. VanTreek noted.
If a post is factually incorrect, there is a viable reason to remove it, but there is a difference between fact-checking data-based content and fact-checking opinionated posts. Artificial intelligence often attempts to fact-check posts that merely express controversial or unpopular opinions, leading to posts being censored when they shouldn’t be.
While it is important to moderate content that intentionally misleads users, there is no reason to cut controversial content from social media.
Social media is one of the main sources from which most of the public receives news. However, due to the sensitivity of controversial topics and certain current events, content that delivers important information is often censored.
For example, journalist Motaz Azaiza’s Instagram account was suspended after he posted scenes of destruction and mass death during the beginning of the war in Gaza. Meta provided him with no explanation of why the account was suspended. Instagram’s community guidelines state that graphic violence is not allowed unless it is “shared in relation to important and newsworthy events and this imagery is shared to condemn or raise awareness and educate.” Azaiza’s post was justified in its depiction of graphic violence, since it was posted in regard to the Israel-Hamas war with the intent to spread awareness of the conditions in Gaza. That made the post newsworthy and compliant with the community guidelines, so Instagram should have had no reason to suspend the account. Situations such as Azaiza’s show that unfair censorship can leave the public less informed about current affairs around the world. By restricting information, platforms can essentially shape the opinions of users without giving them the full story.
Overall, censorship must be clearly defined. While it is important in preventing the spread of misinformation, it often also limits important information and keeps it from public view. Social media companies need to create more specific and fair guidelines; otherwise, the current ones will continue to cause confusion and allow bias in content moderation. There is a need for transparency: people should know why and how content is censored, with a clear distinction between controversial opinions and truly offensive posts. Without fair and distinct policies, social media platforms may turn into tools for suppression rather than beacons of awareness and free speech.