BRUSSELS — As of December 1st, the European Union’s “AI Transparency and Authenticity Act” (ATAA) has come into full effect. The legislation, passed earlier this year, mandates that all AI-generated content, whether text, image, video, or audio, carry an invisible, tamper-proof cryptographic watermark as well as a visible label for consumers.
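The Act’s technical annexes are not reproduced here, but the core idea of a tamper-evident credential bound to a piece of generated content can be sketched briefly. The example below is illustrative only and is not the ATAA’s actual scheme: it signs a small provenance manifest with an Ed25519 key (using the widely available Python `cryptography` package) so that any change to the content or its label breaks verification. The manifest fields, the generator identifier, and the key handling are hypothetical, and a real deployment would pair such a credential with an imperceptible watermark embedded in the media itself.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_manifest(private_key, content_bytes, generator_id):
    """Build a provenance manifest for generated content and sign it."""
    manifest = {
        "generator": generator_id,                                   # hypothetical model identifier
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # binds manifest to the exact bytes
        "label": "AI-generated",                                      # the consumer-facing disclosure
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, private_key.sign(payload)


def verify_manifest(public_key, manifest, signature, content_bytes):
    """Return True only if the content and its manifest are both untouched."""
    if hashlib.sha256(content_bytes).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # manifest or signature was tampered with


key = Ed25519PrivateKey.generate()
content = b"A synthetic news photo, rendered pixel by pixel."
manifest, sig = sign_manifest(key, content, "example-model-v1")

print(verify_manifest(key.public_key(), manifest, sig, content))         # True
print(verify_manifest(key.public_key(), manifest, sig, content + b"!"))  # False: content edited
```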
Combating the Deepfake Era
The legislation comes in response to the chaos of the last 18 months, in which high-profile deepfakes disrupted elections and financial markets globally. “Democracy cannot survive if citizens cannot agree on what is real,” said the EU Commissioner for Digital Policy. “The ATAA ensures that every user has a ‘Right to Reality’: the right to know if they are interacting with a human or a machine.”
The “Verified Human” Standard
The law forces social media platforms to fundamentally alter their ranking algorithms. Platforms operating in the EU must now prioritize “Verified Human” content in news feeds, demoting unlabelled synthetic media. Major generative AI providers face fines of up to 7% of global turnover if their models allow users to generate non-watermarked content. While the law applies only within the EU, the “Brussels Effect” is already visible: tech giants are rolling out these standards globally rather than maintain two separate systems.
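The ranking requirement can be illustrated in the same hedged spirit. The sketch below is a hypothetical simplification rather than any platform’s actual system or anything the ATAA prescribes: it assumes each post carries a provenance status and applies a demotion multiplier to an existing engagement score, so unlabelled synthetic media sinks in the feed while “Verified Human” content rises. The status names and weights are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical demotion multipliers; the ATAA does not prescribe specific weights.
RANKING_WEIGHTS = {
    "verified_human": 1.0,      # prioritized
    "labelled_synthetic": 0.6,  # permitted, but not boosted
    "unlabelled": 0.1,          # demoted pending provenance checks
}


@dataclass
class Post:
    post_id: str
    engagement_score: float  # the platform's existing relevance score
    provenance: str          # one of the keys in RANKING_WEIGHTS


def rank_feed(posts):
    """Re-rank a candidate feed, demoting unlabelled synthetic media."""
    return sorted(
        posts,
        key=lambda p: p.engagement_score * RANKING_WEIGHTS.get(p.provenance, 0.1),
        reverse=True,
    )


feed = rank_feed([
    Post("a", 0.9, "unlabelled"),
    Post("b", 0.5, "verified_human"),
    Post("c", 0.7, "labelled_synthetic"),
])
print([p.post_id for p in feed])  # ['b', 'c', 'a']: the high-engagement unlabelled post drops last
```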