Under its new policy, Facebook will remove videos that have both been edited by AI and could be mistaken for real. That could include replacing someone’s face, altering a speech with AI, and more. Interestingly, though, Facebook says this “does not extend to content…that has been edited solely to omit or change the order of words”. The most notable example of this came last year with House Speaker Nancy Pelosi. A clip shared widely on Facebook appeared to show her muddling and repeating words. The clip was later shared by President Trump on Twitter with the caption “Pelosi stammers through news conference.”
Reducing the Impact
However, videos that don’t meet the standards for removal are still eligible for review by one of Facebook’s third-party fact-checkers. “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad,” explained Monika Bickert, Facebook’s vice president of global policy management. “And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.” Facebook also pointed to its Deepfake Detection Challenge, which provides grants to those trying to find automatic detection methods. It’s also partnering with Reuters to provide a free online training course for journalists. Unfortunately, all of this is likely to add to the confusion moderators feel about the company’s ever-changing policies. Some outside contractors have previously noted that they’re expected to hit high accuracy targets despite policies changing almost daily.