The speed and vastness of the internet are something society struggles to control. When violent acts like terrorist attacks and school shootings occur, smartphone cameras make it possible for videos of the violence to appear on social media within seconds, uploaded for the whole world to see and shared by a myriad of news outlets. Much of this content is disturbing to most people, and many argue that it could incite others to commit similar acts.
Violent Acts and The Dangers of Extremist Forums
Recently, an anti-semitic open letter was uploaded to 8chan, a far-right message board. Hours later, a man opened fire at a synagogue in Poway, California, near San Diego, on the last day of Passover. One person was killed and three were injured. After the shooting, John T. Earnest was taken into custody as a suspect. The user who posted the white nationalist open letter on 8chan used the username "JohnTEarnest," according to NBC News.
Along with the 8chan note was a link to a Facebook page with a message that read "a livestream will begin shortly," along with a list of songs the user planned to play during the livestream, according to NBC News.
A month before the Poway synagogue shooting, a gunman attacked two mosques in Christchurch, New Zealand, killing at least 49 people, according to BBC News. That shooting was also livestreamed and posted on social media platforms.
Six months ago (to the day of the Poway shooting), an anti-semitic white nationalist, Robert Bowers, opened fire at a synagogue in Pittsburgh, killing 11 people, according to The New York Times.
These three attacks have something in common: the shooters all used extremist forums, two of them posting on 8chan, to announce their plans, including clear motives. Even worse, the Poway shooter cited the violent acts committed at Christchurch and in Pittsburgh as motivations, according to The Atlantic, making it highly plausible that he was a copycat.
When you consider this information and the influence the internet can have on violence, what is the solution? How and when do these violent acts caught on camera get removed from U.S. websites? Who should be held accountable?
Free Speech and Social Media
Well, according to Section 230 of the Communications Decency Act, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This means that those who own and manage social media websites are not legally responsible for the content uploaded to their platforms. In practice, they can't be sued if a user posts naked photos of an ex-girlfriend or videos of a violent shooting.
Platforms like Facebook and YouTube have policies that prohibit certain types of gruesome and hateful content, but in reality, those policies are only enforced after the content has already been uploaded, and it can take days before it is removed.
Major social networks have taken steps to police content in response to public outcry, but given the vast number of people using these platforms, moderation is almost impossible to manage.
Free Speech Limitations or Public Protection from Violent Acts?
However, YouTube has been more vigilant about removing suspected "terrorist videos" from its platform this year. Engadget reports that in the first three months of 2019, Google (which owns YouTube) manually reviewed more than a million videos suspected of being associated with terrorism, 90,000 of which were found to be in violation of YouTube's terrorism policy. While it is noble of Google to spend the time and money to manually review these videos, it is also alarming that over a million videos posted to the platform are, or have the potential to be, associated with terrorism.
"Google, Facebook, Twitter and Microsoft have been asked to reveal their counter terrorism budgets, but putting a number on that proves to be complex. The 'hundreds of millions of dollars' estimate from Google is the closest thing to an answer that we've seen so far," Engadget reports.
It's clear that social media and the internet have become powerful weapons for criminals to promote hate crimes and acts of terrorism. Anti-semitic acts of violence in particular have increased alarmingly in recent years; just take a look at Kanye West's tweets. According to an annual report released by the Anti-Defamation League, the number of anti-semitic physical assaults in 2018 more than doubled from 2017, and the Pittsburgh shooting that year was the single deadliest attack against the American Jewish community.
Removing violent content, hate speech, harassment, and defamation from internet platforms is a growing and very difficult problem. The law has yet to catch up with the issues plaguing our real-time, social-media-focused society.