
Meta’s New AI Deepfake Playbook Adds Labels But Reduces Takedowns

Following criticism from its Oversight Board, Meta has announced changes to its rules on AI-generated content and manipulated media. Starting next month, the company will label a wider range of such content, including deepfakes, with a ‘Made with AI’ badge.

Background

The rise of generative AI tools has led to an increase in synthetic content being shared online. This has raised concerns about the spread of misinformation and manipulated media. In response, Meta has been working to develop policies that balance the need to protect users from fake content with the need to allow for creative expression.

New Policy

Under the new policy, Meta will label AI-generated content, including deepfakes, with a ‘Made with AI’ badge. This will give users more context about what they are viewing and help them assess its authenticity. The company also announced that it will no longer remove manipulated content outright unless it violates other policies, such as those against voter interference or bullying.

Expanded Labelling

The expanded policy will cover a broader range of content than the manipulated media the Oversight Board recommended addressing. If Meta determines that digitally created or altered images, video, or audio pose a particularly high risk of materially deceiving the public on a matter of importance, it may add a more prominent label.

Third-Party Fact-Checkers

Meta is working with a network of nearly 100 independent fact-checkers to help identify risks related to manipulated content. These external entities will continue to review false and misleading AI-generated content, and when they rate content as ‘False or Altered,’ Meta will respond by applying algorithm changes that reduce the content’s reach.

Increased Workload for Fact-Checkers

The boom in generative AI tools is likely to lead to an increase in synthetic content, putting pressure on fact-checkers to review more material. However, the expanded policy and third-party collaboration are designed to provide users with more information and context about the content they see online.

EU Publishes Election Security Guidance

The European Union has published election security guidance for social media giants, including Meta, and other platforms in scope of the Digital Services Act (DSA). The guidelines aim to help platforms prevent foreign interference in elections and maintain transparency around advertising and sponsored content.

Meta’s Response to Criticism

Taken together, the changes mark a shift from removal toward transparency: Meta has committed to giving users more information about the content they see online and to working with third-party fact-checkers to identify and address risks related to synthetic media.
