Deepfake, Deep Trouble: The European AI Act and the Fight Against AI-Generated Misinformation

By Mauro Fragale and Valentina Grilli*

The Deepfake Dilemma: Legal Implications of AI-Generated Content

In July 2024, X (formerly Twitter) owner Elon Musk shared a parody Kamala Harris campaign video in which the candidate for the United States (US) presidency – or rather, an AI-manipulated version of her voice – described herself as an incompetent, token candidate.

The incident intensified an ongoing global debate about AI-generated misinformation, particularly as it affects major elections. Just a year ago, European Union (EU) institutions voiced their concerns about this increasingly widespread phenomenon: in the wake of Russian efforts to sway public opinion and induce citizens to support Russia in the Ukrainian conflict, governments and non-State actors were urged to take a clear stance on AI-generated content by fact-checking material, combating fake news, and curbing misinformation.

Deepfakes – AI-generated content that manipulates images, audio, and video – are becoming increasingly prevalent online. Earlier this year, AI-generated cover songs imitating popular artists' voices sparked debates about their legitimacy, and now deepfakes are being used to sway elections and spread misinformation.

Despite the surge in AI-generated content and the clear challenges it poses when left unbridled, lawmakers have struggled to keep pace with its rapid development. National governments have yet to take decisive steps toward regulating AI in a way that prevents its misuse.

In response, the EU introduced the AI Act, a landmark piece of legislation that came into effect in August 2024 to address the risks associated with AI-driven misinformation. While an innovative and groundbreaking legal instrument, it has been met with criticism since its publication, as it attempts to regulate an extraordinarily powerful and complex phenomenon through mere technical requirements and rules.

EU Efforts to Regulate AI and Combat Fake News

Social networks have been quicker than governments to address deepfake regulation, partly due to frequent accusations of hosting and spreading misinformation.

In September 2023, TikTok announced that it had developed a tool to detect and disclose AI-generated content posted by creators on the platform. In February 2024, Meta launched technologies to detect and label AI-generated content, implementing such changes across all its platforms (Facebook, Instagram, Threads). These tools are aimed at making sure that users know when AI is involved by adding visible markers over AI-generated pictures and clips, as well as invisible watermarks and metadata embedded within image and video files.
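To make this labeling mechanism more concrete, the sketch below shows how a plain-text disclosure tag could be embedded in and read back from an image's metadata. It is a minimal illustration assuming the Pillow library; the field names are hypothetical, and real platforms rely on richer provenance standards (such as C2PA content credentials) combined with invisible watermarks, which this sketch does not reproduce.

```python
# A minimal sketch, assuming Pillow is installed, of embedding and reading back
# a disclosure tag in PNG metadata. The field names ("ai_generated",
# "generation_tool") are hypothetical examples, not any platform's actual
# scheme; production systems pair provenance metadata (e.g. C2PA content
# credentials) with invisible watermarks, which this does not attempt.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str, tool_name: str) -> None:
    """Re-save an image with a plain-text disclosure tag in its metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")        # hypothetical field name
    metadata.add_text("generation_tool", tool_name)  # hypothetical field name
    image.save(dst_path, pnginfo=metadata)


def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the disclosure tag is present in the PNG text metadata."""
    image = Image.open(path)
    return getattr(image, "text", {}).get("ai_generated") == "true"


if __name__ == "__main__":
    Image.new("RGB", (64, 64), "gray").save("original.png")  # stand-in picture
    label_as_ai_generated("original.png", "labeled.png", "example-image-model")
    print(is_labeled_ai_generated("labeled.png"))  # True: the tag was embedded
```

The limitation this exposes is also why platforms do not stop at metadata: a tag of this kind is stripped as soon as a file is re-encoded or screenshotted, hence the complementary use of invisible watermarks and server-side detection mentioned above.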

Then, the European Union intervened in the matter by enacting the European AI Act, an innovative legislative instrument which entered into force in August 2024.

The AI Act, more correctly referred to as Regulation (EU) 2024/1689, is “the first-ever legal framework on AI”, laying down harmonized rules on artificial intelligence in order to address its risks. This legal instrument, as the official page states, “provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI”, with the aim of “ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models”.

The legislation classifies AI systems according to the risks their use entails and imposes specific transparency obligations on providers and developers to reduce those risks to a minimum. For example, AI technology used in critical infrastructures, such as transport, is considered high-risk, being capable of putting the life and health of citizens at risk, and thus is subject to stricter regulation; conversely, AI-enabled spam filters and video games are classified as minimal risk, and as such they are subject to little to no regulation.

The AI Act also imposes obligations on deployers – that is, those who use AI systems in a professional capacity. Article 26 lays out their specific duties, which cover, among others: training and guidance of those overseeing the correct functioning of AI tools, to ensure that they have the necessary competence and authority to carry out that role; suspension of operation if the system does not perform as intended; and compliance with other EU and national laws, in particular the GDPR, the ePrivacy Directive (on the use of cookies and digital marketing), the EU Data Act, and relevant cybersecurity laws.

Article 99 of the AI Act outlines penalties for non-compliance by developers and deployers: failure to abide by the rules can entail administrative fines of up to 35 million euros or, if higher, up to 7 percent of worldwide annual turnover for the preceding financial year. Enforcement of the obligations under the AI Act is thus ensured through administrative fines.
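Purely for illustration, the short sketch below computes the ceiling that Article 99(3) describes – the higher of a fixed 35-million-euro amount or 7 percent of worldwide annual turnover. It models only the upper bound; actual fines are set case by case by national authorities and may be far lower.

```python
# A minimal sketch of the Article 99(3) ceiling: the maximum fine is the higher
# of EUR 35 million or 7% of total worldwide annual turnover for the preceding
# financial year. This computes only that upper bound, not an actual sanction.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07


def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)


if __name__ == "__main__":
    # For a firm with EUR 2 billion in annual turnover, 7% (EUR 140 million)
    # exceeds the fixed EUR 35 million amount, so the percentage ceiling applies.
    print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```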

As European governments work to enforce the Regulation, tech companies are rapidly updating their AI policies: whereas TikTok and Meta had already applied specific rules to curb the misuse of AI before the entry into force of the AI Act, more digital platforms are following suit. For example, last month, Google announced that it would label AI-generated images appearing in its search results, also anticipating that the system would expand to detect AI-generated content in advertising.

Transparency is increasing, with platforms ensuring users are informed whenever they encounter AI-enhanced imagery appearing on their screens, since such media can be a powerful tool for persuasion and deception.

Is AI Content Labeling Enough to Curb the Risks of Deepfakes?

Labeling AI-generated content improves transparency and trust, helping users recognize machine-generated material and allowing individuals to make informed decisions about the content they consume.

Article 50 of the AI Act, entitled “Transparency obligations for providers and deployers of certain AI systems,” acknowledges that because some AI systems are designed to interact with people or generate content, they carry risks of impersonation or deception even if they are not classified as high-risk. To address this, the Act stipulates transparency requirements, without affecting existing high-risk AI regulations: first and foremost, individuals must be informed when interacting with AI unless this is clear from the context; individuals must likewise be notified when AI systems analyze their biometric data to detect emotions and intentions or to assign them to specific categories; and providers of generative systems must mark synthetic audio, image, video, and text content – including deepfakes – as artificially generated or manipulated, with deployers of deepfakes required to disclose that manipulation.

Although some provisions of the AI Act have been fully applicable since August 2024, the date set for full enforcement is 2 August 2026. Some worry about this delay in requiring full compliance given the rapid pace of AI development and the urgent need to combat issues like deepfakes and misinformation. A “voluntary compliance” process is therefore being pursued on two fronts: the Commission is promoting the AI Pact, seeking the industry's voluntary commitment to start implementing the AI Act's requirements ahead of the legal deadline; meanwhile, many platforms are developing labeling systems for AI-generated content to promote transparency and build trust with users.

However, enforcing transparency across global platforms presents significant challenges.

First, EU countries have varying regulatory frameworks, making it difficult to create a one-size-fits-all approach. In addition, the AI Act does not specify which national regulator should act as the surveillance authority, so EU Member States must adopt their own national penalty frameworks, leading to differences in enforcement across States. For instance, Spain has created an ad hoc regulatory body to oversee AI development, and Italy has approved a draft law on artificial intelligence. Conversely, countries such as France and Germany do not yet have a dedicated law or body regulating artificial intelligence.

Furthermore, AI technology is rapidly evolving, and staying ahead of new forms of content generation and manipulation – like deepfakes – requires constant adaptation.

Finally, the transparency obligations set out by the AI Act might conflict with the Digital Services Act (DSA). This 2022 legislation, fully applicable since February 2024, imposes several obligations on online platforms concerning liability, appeal mechanisms, systemic risk assessment, and online advertising. Generative AI models blur the lines between the DSA's categories of intermediary services, complicating its enforcement. For instance, Google's AI Overviews provides users with the answer they seek without their clicking on a link, unlike a “traditional” search engine; moreover, interpersonal communication services such as email and private messaging are generally excluded from the DSA's hosting rules, which means that AI chatbots also fall outside them.

Consequently, while AI content labeling is a valuable tool in addressing the risks of misinformation and deception, it is not enough on its own: deepfakes – especially sophisticated ones – can still evade detection systems. Effective mitigation requires a combination of advanced detection technologies, stricter regulations, and public awareness campaigns to educate users about the potential dangers of deepfakes and how to recognize them.

Criticism of the AI Act: Are Fines a Sufficient Deterrent?

Article 99 of the AI Act establishes that Member States must lay down rules on penalties and other enforcement measures, including warnings, that are effective, proportionate, and dissuasive, and must notify the Commission of these rules and of any subsequent amendments. Non-compliance with the mandated obligations is punishable by hefty fines that can reach up to 35 million euros or 7 percent of global annual turnover, whichever is higher.

While monetary fines may be effective as a deterrent, they have limitations in preventing the spread – and dangerous consequences – of deepfakes: fines may impact larger organizations financially, but are often insufficient to curb smaller actors, individuals, or those operating in jurisdictions with lax enforcement. Moreover, fines merely address violations ex post rather than preventing the initial creation or spread of deepfakes. Proactive measures like preventive detection technology and public education would thus be preferable as ex ante solutions.

Furthermore, financial penalties are particularly weak when it comes to extraterritorial enforcement of the AI Act. The instrument's scope extends beyond the territory of the EU: Article 2 states that the Regulation applies to providers placing on the market or putting into service AI systems in the Union, “irrespective of whether those providers are established or located within the Union or in a third country.” This extension to transboundary cases requires the development of additional enforcement tools: since financial penalties are difficult for EU authorities to enforce beyond their borders, recourse may be needed to harmonized criminal law and to cooperation with third-country authorities through Mutual Legal Assistance Treaties (MLATs).

In the authors’ view, effectively combating the spread of deepfakes requires a multifaceted approach. Implementing advanced AI detection tools to identify and remove manipulated content early across online platforms must be combined with collaboration between tech companies and government agencies, in order to establish real-time monitoring systems capable of swiftly tracking deepfake proliferation. Public education also plays a critical role in this effort: raising awareness about deepfakes through targeted campaigns can help citizens recognize and question manipulated media, fostering a more skeptical and discerning public mindset regarding digital content. By promoting digital literacy, these campaigns can reduce the overall impact of misinformation.

Conclusion

The AI Act and other legislative interventions can foster heightened vigilance toward AI systems, provide guidelines for their development and employment, and ensure that machine-enhanced interventions are swiftly detected.

It remains questionable whether financial penalties are sufficient to effectively deter infringements, especially considering how rapidly AI technologies are evolving. A combination of strong detection technologies, cross-sector partnerships, and an informed public is essential to limit the damage deepfakes can cause, particularly in sensitive contexts like elections, where misinformation can influence public opinion and democratic processes.

Ultimately, it is up to users to recognize whether they are looking at digitally modified content: analyzing the plausibility of pieces of news, pictures, and other media, questioning their sources, and considering the possibility that something was generated by AI are the most important tools individuals possess to avoid deception – at least for as long as AI-generated content has not become too sophisticated to detect.


*Mauro Fragale is a Bocconi University Law School Graduate, currently carrying out his legal traineeship in Modena focusing on civil and commercial law, privacy, and debt collection. His interests include IP law, Antitrust, and IT & Communication Law, with a particular focus on the recent legal developments at the intersection of technology and the law.
Valentina Grilli, a Bocconi University Law School Graduate, took part in the Willem C. Vis International Commercial Arbitration Moot during her final year of study, which sparked her interest in arbitration. Currently, she serves as a trainee lawyer at a distinguished law firm, specializing in criminal law, and is pursuing a Master in Data Science, Big Data and Artificial Intelligence in Finance.