Source: Tom Parker
In a far-reaching November 2020 interview, Twitter’s new CEO Parag Agrawal, who was the company’s Chief Technology Officer (CTO) at the time, rejected the free speech protections enshrined in the First Amendment of the US Constitution, wished the company had censored QAnon sooner, and touted its approach of censoring content based on “potential for harm.”
“Our role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation and our moves are reflective of things that we believe lead to a healthier public conversation,” Agrawal said in response to a question about protecting free speech as a core value and the role of the First Amendment.
He added that the company now focuses “less on thinking about free speech, but thinking about how the times have changed.” In this context, Agrawal said the role of Twitter is increasingly moving toward recommendations, and “how we direct people’s attention is leading to a healthy public conversation that is most participatory.”
Agrawal also noted that Twitter focuses its censorship efforts on avoiding “specific harm that misleading information can cause” and claimed that when it comes to COVID-19, “a few people being misinformed can lead to implications on everyone.”
Additionally, Agrawal addressed Twitter’s July 2020 crackdown on QAnon by wishing that QAnon content had been purged from the platform earlier and by touting actions that had “led to a very rapid decrease in the amount of reach QAnon and related content got on the platform by over 50%.”
Not only did Agrawal support censorship during the interview, but he also endorsed Twitter’s approach of relying on “credible sources” whenever the company deems there to be “potential for harm” associated with content.
“I think in some cases you rely on credible sources to provide that context,” Agrawal said when asked how Twitter determines whether something is harmful without trying to figure out whether it’s true. “So you don’t always have to determine if something is true or false, but if there’s potential for harm, we choose not to flag something as true or false, but we choose to add a link to credible sources, or to the additional conversation around that topic, to provide people context around the piece of content so that they can be better informed, even as this data for understanding and knowledge is evolving.”
In a separate interview, from July 2020, Agrawal said that Twitter’s approach to censoring “misinformation” puts the company “in a situation where things move slower than most of us would like.”
He added: “It takes us a while to develop a process to scale, to have automation to enforce the policy. I’m not proud that we missed a large amount of misinformation even where we have a policy because we haven’t been able to build these automated systems.”