Facebook’s Vice President of Global Policy Management, Monika Bickert, published a company blog post, “Enforcing Against Manipulated Media,” on January 6. The post announced upcoming safeguards against “people who engage in media manipulation in order to mislead.”
“Deepfakes” are part of a new wave of viral misinformation that Bickert wrote “can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or ‘deep learning’ techniques to create videos that distort reality.”
Conversation about the threat of “deepfakes” hit a fever pitch in late May 2019, when a slightly slowed-down video of Speaker Nancy Pelosi went viral. Kara Swisher, a popular tech journalist and Recode’s co-founder, condemned Facebook for allowing the video to spread: “This week, unlike YouTube, Facebook decided to keep up a video deliberately and maliciously doctored to make it appear as if Speaker Nancy Pelosi was drunk or perhaps crazy,” she wrote.
Facebook, after receiving such withering criticism, has decided to adapt its approach. The new strategy “has several components, from investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts.”
“Collaboration is key,” the blog continued, explaining:
“Across the world, we’ve been driving conversations with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds to inform our policy development and improve the science of detecting manipulated media.”
The blog elaborated that Facebook will remove “misleading manipulated media” that meets the following criteria:
- “It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- “It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Thankfully, the blog did try to soothe meme enthusiasts’ and satirists’ concerns, noting that “[t]his policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
“Consistent with our existing policies, audio, photos or videos, whether a deepfake or not, will be removed from Facebook if they violate any of our other Community Standards including those governing nudity, graphic violence, voter suppression and hate speech.”
Facebook is taking a heavy-handed, and possibly problematic, approach not only to fact-checking but to combating so-called disinformation. But the social media giant is doing so in a way that is more complicated than mere deplatforming.
The number of people involved is noteworthy as well: “Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages.”
Further detailing the process, Facebook explained: “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
Facebook’s stated rationale is to inoculate people against misinformation rather than try in vain to erase it altogether:
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we’re providing people with important information and context.”
Facebook has, by its own account, scored some major victories in the cutting-edge technological battle against foreign interference. “Just last month, we identified and removed a network using AI-generated photos to conceal their fake accounts,” wrote Bickert. “Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior.”