Facebook is set to remove deepfake and other manipulated videos, but it will exempt parody and satire, as well as content that has been edited only to omit words or change their order. Facebook will remove misleading media if it was created by artificial intelligence (AI) or other technology that "merges, replaces or superimposes content onto a video, making it appear to be authentic." The decision signals Facebook's intent to curb misinformation ahead of the 2020 presidential election.
Facebook stated in a blog post that, "[m]anipulations can be made through simple technology like Photoshop or through sophisticated tools that use artificial intelligence or 'deep learning' techniques to create videos that distort reality – usually called 'deepfakes.'" These manipulations are often hard for users to identify.
To address this issue, Facebook's plan ranges from "investigating AI-generated content and deceptive behaviors like fake accounts, to partnering with academia, government and industry to exposing people behind these efforts." Facebook intends to identify and remove only this narrow category of content from its platform.
Facebook will also remove content that violates its community guidelines; additionally, content that falls outside the new policy's restrictions can still be flagged for review by third-party fact-checkers. Facebook stated, "[i]f we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we're providing people with important information and context."
A controversial and heavily edited video of House Speaker Nancy Pelosi circulated on Facebook, appearing to depict her drunk, slurring incomprehensibly and tripping over her words. The video had been slowed down to create this effect. Facebook stated that the "doctored video of Speaker Pelosi does not meet the standards of this policy and would not be removed. Only videos generated by artificial intelligence to depict people saying fictional things will be taken down." In response, Drew Hammill, a spokesperson for Pelosi, stated that Facebook "wants you to think the problem is video-editing technology, but the real problem is Facebook's refusal to stop the spread of disinformation." While the video was identified as false, it was not taken down because the company does not "have a policy that stipulates that the information you post on Facebook must be true," a Facebook spokesperson said.
In September, Facebook launched the Deepfake Detection Challenge to spur research and tools for identifying this content; its resources also include a course that helps news outlets identify manipulated media. However, these resources may not be enough, and the new policy has been criticized as short-sighted and narrow.
"These misleading videos were created using low-tech methods and did not rely on AI-based techniques, but were at least as misleading as a deep-fake video of a leader purporting to say something that they didn't," said Hany Farid, a digital forensics expert at the University of California, Berkeley, whose lab works with Facebook on deepfakes. "Why focus only on deep-fakes and not the broader issue of intentionally misleading videos?" Bob Lord, chief security officer of the Democratic National Committee, offered a similar criticism: "[t]his change comes up short of meaningful progress and only affects one small area of the larger disinformation problem."
Facebook has been scrutinized for its stance on political ads and fake news, and it has stated that it will not fact-check ads on its platform. The new policy is a mild departure from that stance. Facebook did not explain how it will determine whether content qualifies as parody, and therefore whether it will be removed. Facebook's policy also does not address how its platform spreads, or is used to spread, deepfakes; rather, it addresses how the deepfake content is created.