Friday, September 20, 2024

Meta Urged to Label Posts After Spread of Fake Biden Video



The Oversight Board, an independent body that reviews Facebook's content moderation practices, has recommended that the platform label fake posts rather than remove them. The board upheld the decision by Meta, the owner of Facebook, not to take down a fake video of US President Joe Biden, agreeing that it did not violate the company's manipulated media policy. However, the board criticized the policy as "incoherent" and called for it to be expanded ahead of a major election year. Meta said it is reviewing the guidance and will respond publicly within 60 days.

The Oversight Board emphasized the need for more labeling of fake content on Facebook, especially when that content does not violate any other policy and therefore cannot be removed. It believes this approach would reduce dependence on third-party fact-checkers, allow the manipulated media policy to be enforced more consistently, and inform users about fake or altered content. The board also expressed concern that users may not be adequately told why certain content has been demoted or removed, or how to appeal such decisions. In its first year of accepting appeals, the Oversight Board received over a million appeals concerning posts removed from Facebook and Instagram.

The video in question was edited to make it appear as though President Biden was behaving inappropriately with his granddaughter. Because the video was not manipulated using artificial intelligence and did not depict him saying words he had not said, it did not violate Meta's manipulated media policy and was therefore not removed.

The Oversight Board's co-chair, Michael McConnell, criticized the current policy as inconsistent and inadequate for addressing fake content. He highlighted its narrow focus on video manipulated with AI, which overlooks other forms of fake content, such as audio deepfakes, which have become a potent form of electoral disinformation.
McConnell emphasized the importance of updating the policy to keep pace with evolving technologies and methods of deception, while still protecting political speech, even speech containing disputed or false claims that are not demonstrably harmful.

Sam Gregory, executive director of the human rights organization Witness, suggested that Meta should have an adaptive policy covering not only AI-generated or AI-altered material but also "cheap fakes" and satirical content that is not designed to mislead. He noted that Meta's current manipulated media policy asks whether content would mislead an average person, but acknowledged that this standard will need ongoing adjustment as AI technology and its uses become more sophisticated. Gregory also questioned how effectively content manipulated with emerging AI tools can be labeled automatically, stressing that contextual knowledge is essential to explaining manipulation. He pointed out that countries in the Global Majority may be disadvantaged by poor-quality automated labeling and a lack of resources for trust and safety teams, content moderation, and independent journalism and fact-checking.