Posted by: OAN’s Brooke Mallory
A measure proposed by Democrat Senator Michael Bennet would give a new federal agency the authority to establish a council that would define “enforceable behavioral codes” for AI and social media platforms.
The council would also have “disinformation” specialists on its panel.
The legislation, which would establish a federal organization tasked with monitoring public discourse 24/7 in order to combat “hate speech” and “misinformation,” has been submitted to the Senate.
The Digital Platform Commission Act (DPCA) was proposed last week by the Colorado senator, with support from Sen. Peter Welch (D-Vt.). It would create a de facto “Ministry of Truth” called the Federal Digital Platform Commission.
According to Bennet, his proposed bill would also “create an expert federal body empowered to provide comprehensive, sector-specific regulation of digital platforms to protect consumers, promote competition, and defend the public interest.”
According to the proposed plan, the agency would have five commissioners who would be chosen by President Joe Biden and then approved by the Senate.
The Federal Digital Platform Commission would reportedly have the authority to impose civil fines on anyone who departs from the predetermined narratives of whoever is in charge at the time, which would currently be Democrat officials.
“The Commission would have the authority to promulgate rules, impose civil penalties, hold hearings, conduct investigations, and support research. It could also designate ‘systemically important digital platforms’ subject to additional oversight, regulation, and merger review,” Bennet’s statement read.
However, the bill would clearly violate the First Amendment, according to civil rights lawyer Harmeet Dhillon. “Unconstitutional, also evil and stupid,” she tweeted on Friday.
The bill’s introduction comes after the Office of the Director of National Intelligence announced the establishment of the Foreign Malign Influence Center (FMIC), which would focus on “foreign malign influence” aimed at U.S. elections, as well as “public opinion within the United States.”
According to DNI Director Avril Haines, the FMIC will also collaborate with the State Department’s Global Engagement Center (GEC), which the “Twitter Files” showed functioning as a nefarious censoring arm of the federal government, in order to carry out its information dominance objective.
A year ago, the Department of Homeland Security disbanded its “Disinformation Governance Board” (DGB) in the wake of a significant outcry from the general public.
According to internal government records, the DGB was entrusted with identifying and suppressing speech that posed “serious homeland security risks,” such as what they deemed “conspiracy theories” regarding election fraud, COVID-19 vaccine effects, and “the efficacy of masks.”
In his statement, Bennet defended the call for such control by charging that “technology” was harming children and undermining democracy while operating unchecked.
Bennet did not seem to have much faith in the U.S. Congress, of which he is a current member, asserting that technology is progressing and evolving in ways, and at a rate, that Congress cannot keep up with. Nor did he seem to believe that the Federal Trade Commission or the Department of Justice is up to the task, since, according to his statement, they lack the resources and qualified employees needed to offer “robust and sustained” regulation of the digital platform industry.
Protecting consumers against “addictive design features or harmful algorithmic processes” is one of the goals mentioned in the press release.
Bennet went on to say that all animals are equal under his and Welch’s proposal, though some are, in a sense, more equal than others. Accordingly, the Commission would be empowered to designate certain digital platforms as systemically important and to subject them to further scrutiny and regulation, such as audits and the “explainability” of algorithmic decisions.
The bill proposes creating a Code Council within the Commission, whose role would be to develop voluntary or enforceable behavioral codes, technical standards, and other policies, such as accountability and transparency requirements for algorithmic processes.
The Council would reportedly be comprised of 18 members, including representatives from digital platforms or their associations (three of whom must come from platforms considered to be “systemically important”), non-profit organizations, academics, and subject matter experts with expertise in disinformation, consumer protection, privacy, competition, and technology policy.
This jumble of at-odds ideas and problems that makes up the Council’s mandate also appears in the bill’s introduction, where the sponsors go to great lengths to explain why the Commission must be created.
Some of the alleged issues it is intended to address include undermining small businesses, aiding in the destruction of “trusted” local journalism, facilitating addiction, disseminating hate speech, undermining privacy, monetizing personal data, radicalizing people, and expressing any form of racism or misogyny.