“They’ve created a platform to allow this hate to be spewed.”
Posted by Sputnik
On Saturday, Buffalo, New York, was shocked by a mass shooting of unprecedented brutality. According to the latest data, at least 10 people were killed in the incident and three were injured. Most of the 18-year-old shooter’s victims were Black, prompting law enforcement to investigate the incident as a hate crime.
The hell, unleashed by a man in tactical gear at a Tops Friendly Markets supermarket in Buffalo, has indeed struck the nation in what law enforcement officials have deemed a racially-motivated mass shooting.
A total of 13 people were shot in a short amount of time, and ten of them died as a result of the attack. Officials say 11 of those shot were Black and two were white. After firing multiple rounds, Payton Gendron, the white male who was identified as the suspect in the massacre, surrendered to responding officers after they were able to convince him to not shoot himself, authorities have revealed.
The suspect was later arraigned on a first-degree murder charge, punishable under state law by up to life in prison without parole. He has since entered a plea of not guilty and is due back in court on May 19. Gendron was fully armed during the shooting, allegedly carrying an assault rifle, wearing tactical gear, and carrying a camera he used to livestream the attack. Moreover, the suspect purportedly prepared a manifesto that was posted online in connection with the mass shooting.
The fact that the massacre was broadcast live on Twitch, a platform very popular with game streamers, again brought up the sore subject of Section 230 of the Communications Decency Act and whether social media platforms must be held responsible for what users post on them.
NY Governor: Platforms at Least ‘Morally’ Responsible for Content & Should Monitor Users
Speaking at a news conference later on Saturday, New York Governor Kathy Hochul emphasized that, in light of the circumstances surrounding the tragedy, the shooting could possibly have been avoided if social media companies had done a better job of moderating content, limiting or outright banning the sharing of conspiracy theories and racially inciting material.
Hochul pointed out that tech companies “that profit from their existence need to be responsible for monitoring and having surveillance, knowing that they can be in a sense an accomplice to a crime like this.” However, she admitted that while the companies may not be legally liable under existing legislation, they should be held responsible “morally.”
“They’ve created a platform to allow this hate to be spewed and others who are like-minded or could be radicalized, people who may not have intended to go down the path but they read this – it’s pervasive, it’s among the big lies that are out there and we know how insidious it is. We’ve seen the effects,” she said. “But particularly the act of live streaming this, the fact that that can even be hosted on a platform is absolutely shocking and we need to find out what happened and how to make sure it doesn’t happen again.”
Furthermore, when asked whether she believed law enforcement agencies should have probed the suspect’s social media footprint prior to the killings, the governor said that “everyone involved” should have monitored the shooter’s posts on social media in order to uncover his intentions in advance.
“Absolutely, these should be monitored by everyone involved – the platform and law enforcement monitors. … And this is all part of the inquiry,” Hochul replied, speaking about the investigation aimed at establishing the motives behind what happened.
‘All People With Internet Could Watch and Record’
Streaming company Twitch confirmed that the suspect used its service to broadcast the attack live. The shooter has been suspended from the platform “indefinitely,” according to a company statement cited by the New York Times.
Twitch said it took the channel down two minutes after the violence began on screen. Afterward, the channel’s page reportedly displayed only the message “currently unavailable due to a violation of Twitch’s community guidelines or terms of service.” According to the platform, it “has a zero-tolerance policy against violence of any kind and works swiftly to respond to all incidents. The user has been indefinitely suspended from our service, and we are taking all appropriate action, including monitoring for any accounts rebroadcasting this content.”
However, despite the takedown, graphic screenshots from the broadcast are still circulating online, including one that appears to show the gunman standing over a body in the grocery store, holding a rifle.
Moreover, according to reports, the shooter stated in a manifesto posted on the 4chan website that he would “livestream the attack on Twitch,” which he chose because “all people with the internet could watch and record,” in an apparent bid to gain as much publicity as possible. He also allegedly pointed out that a 2019 shooting at a synagogue in Halle, Germany, was likewise livestreamed on Twitch.
Adding to the shooter’s visibility before the attack, his other social media posts purportedly included a set of instructions the gunman created for himself on the messaging platform Discord, including “continue writing manifesto” and “test livestream function before the actual attack.”
Discord, whose algorithms reportedly rarely flag hate speech and threats automatically, also rejected claims that it had not done enough to police user content, as the shooter’s posts were allegedly hosted on his personal server within the application and were thus available only to his followers.
Section 230 & Legal Liability for Users’ Content
In general, Section 230 of the Communications Decency Act of 1996 states that an “interactive computer service” cannot be treated as the publisher or speaker of third-party content. Although there are exceptions for copyright breaches, sex work-related material, and violations of federal criminal law, the measure shields websites from lawsuits over what users post, even when that material is illegal. At the same time, the rules on user behavior and the moderation guidelines that many social media companies enforce are themselves protected by the Constitution.
Thus, Section 230’s provisions insulate computer companies from lawsuits based on user statements while allowing them to remove information that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
In recent years, Section 230 protections have come under closer scrutiny over issues such as hate speech and ideological bias, particularly in relation to the influence internet corporations can have on political debates, and they became a prominent issue during the 2020 US presidential election. The FOSTA-SESTA package, which includes the Stop Enabling Sex Traffickers Act, revised Section 230 in 2018 to demand the removal of material that violates federal and state sex trafficking laws.
Back in 2020, then-President Donald Trump attempted by executive order to seriously restrict the section’s application to social media, prompting agencies to collect data on political censorship on such platforms.
Under the order, authorities were to narrow the definition of Section 230, bypassing Congress and the courts, while agencies were also pressed to gather complaints of political bias that could be used to justify withdrawing sites’ legal protections. After his defeat in the November 2020 presidential race, Trump went even further, threatening to veto the National Defense Authorization Act unless a repeal of Section 230 was incorporated into the bill.
His successor in the post, Democratic President Joe Biden, was also a vocal opponent of Section 230 when it came to social media. Biden advocated totally repealing Section 230 in January 2020 but mostly stopped addressing the topic after he took office. Only in 2022 did he approve the creation of a panel on media disinformation at the Department of Homeland Security.
Can We Expect Section 230 Changes Following a Rise in Livestreamed Crimes?
The question remains on the agenda, acquiring new relevance each time terrorists or murderers distribute egregious material online on the eve of, or during, their crimes.
However, whether companies are technically able to comb, almost flawlessly, the material of thousands or even millions of users for hate speech or threats to life remains debatable. In 2020, presidential candidate Beto O’Rourke proposed changes to Section 230 in response to the surge in armed violence in the country. O’Rourke outlined a plan to hold social media firms like Facebook, Twitter, and YouTube more accountable for abusive speech on their platforms, in addition to proposing a national gun licensing system and universal background checks. He suggested changing Section 230 to “remove legal immunity from lawsuits for large social media platforms” that refuse to be proactive in eliminating hate speech and terrorist content.