Can you post a deepfake of a politician on Instagram? Or use an artist’s music in your video? Sometimes yes, sometimes no. According to researcher João Pedro Quintais, clearer EU rules for AI-driven content moderation are urgently needed. ‘If we don’t regulate well, what we can post is left to companies that optimise for profit’, he warns. Quintais received a Vidi grant to research how the EU addresses the moderation of problematic content on digital platforms.

Are you concerned about the rise of AI?

‘I’m not super worried. Only the speed of technology implementation worries me a bit. Technology like ChatGPT is mostly dictated by 10 to 20 Americans in Silicon Valley, who don’t necessarily share the same values as the EU. We are essentially the testers of their product while they are optimising it. For instance, we are creating AI companions that replace psychologists without first testing their effects on mental health. Legislators have not yet been able to deal with that problem. A lot of harm can happen in the meantime. There is an economic pressure to get on board with AI because we don’t want to fall behind. At the same time, we need to counteract the negative effects. In my project, I specifically look at the effect AI has on freedom of expression.’

In what way could freedom of expression be impacted?

‘This project focuses on AI content that is illegal, like child sexual abuse material or copyright infringement. But it also looks at content that doesn’t qualify as illegal, but that might be prohibited by a platform. Disinformation or political deepfakes, for example. This type of speech isn’t illegal in most cases, but could be considered “toxic” by the company. Outside of content that is illegal, platforms currently have the power to decide what type of material they allow on their services.’


Why is it harmful to leave these decisions up to platforms?

‘Because of the size and reach of these providers, the decisions made by companies like Meta and X may impact the fundamental rights of their users, including freedom of expression. For that reason, when considering how to regulate these providers, we should ask: What type of speech do we want to protect? And what types of online content should we allow? If we don’t, we basically leave it up to the providers, which tend to respond primarily to economic incentives. We can see this playing out in the United States. If President Trump says he wants platforms to moderate less when it comes to conservative speech, platforms respond to these incentives. Companies simply want to stay out of trouble and earn money. From my perspective, we should steer AI and platform regulation in the EU and its implementation in accordance with our values.’

Don’t platforms already face some restrictions?

‘They do, but technology develops quickly and not always in ways that regulators adequately foresee. For instance, while the EU legislature was discussing the AI Act, we witnessed the rise and widespread adoption of ChatGPT. As originally drafted, the AI Act proposal did not properly capture generative AI tools. The Digital Services Act (DSA) regulates platforms like X, Facebook, Instagram and even Zalando. It also regulates search engines like Google. But the DSA falls short of adequately regulating AI models embedded in platforms and AI-generated content. At the heart of this project lies precisely this legal uncertainty, and the overlaps at the intersection of AI and platform regulation. This is already playing out at the EU level, since the EU Commission will likely designate ChatGPT in 2026 as a so-called “Very Large Online Search Engine”, meaning it will be partly subject to the DSA. In essence, the legal and normative parts of this Vidi project are about figuring out how EU law covers these new technologies, and how it should regulate them.’

CV

João Pedro Quintais is an Associate Professor at the Institute for Information Law (IViR), Amsterdam Law School. He previously received a Veni grant for the project “Responsible Algorithms” and more recently obtained a Vidi grant for his project “Generative AI Content Moderation: Regulation for Fundamental Rights”.

What do these legal gaps result in?

‘There are multiple potential problems, but one of the major current discussions relates to copyright infringement. It’s probably the hottest legal topic in this area. For instance, what happens when you generate content on ChatGPT that is similar or identical to copyright-protected content, like a Marvel character? In some cases, the generative AI tool allows you to do this. In others, it contains guardrails that technically prevent this content from being generated, while informing users that generating such content is contrary to the services' terms of use. The same may happen for other types of illegal content, like child sexual abuse material. In many instances, AI providers and platforms try to put guardrails in place to prevent the generation of these materials. But oftentimes these guardrails are weak and can be easily bypassed by using different prompts, enabling you to still generate this type of problematic content.’ 

How will your project address these issues?

‘By bridging gaps between platform and AI law, I aim to guide future regulation through a principled, rights-respecting framework. Step one is figuring out how the AI Act and the Digital Services Act work together, and how they interact with sectoral rules on disinformation and copyright. This will be combined with empirical research on how AI systems, especially those embedded on platforms, regulate AI-generated content in practice. This includes looking at their moderation policies. Combining these aspects allows us to paint a complete picture of the practice of generative AI content moderation. It also allows us to understand what providers promise to do and whether they keep their promises and fulfil their legal obligations. Without that knowledge, we risk that current providers engage in little more than “compliance theatre” and escape real enforcement of their legal obligations. Finally, based on this legal and empirical research, I hope to develop concrete policy recommendations and an interdisciplinary agenda for regulating GenAI content in the EU.’


What type of regulation do you think is needed?

‘There is a big discussion about simplifying regulation. If simplification creates more clarity, it’s highly beneficial. But if simplification simply means deregulation, then it is a bad idea. We’ve had a tsunami of digital regulation in the EU in the past decade, but we are missing foundational clarity. Platforms with embedded AI tools are increasingly important in our everyday lives, yet they are subject to regulatory frameworks that are far from perfectly aligned or clear. That grey area raises normative questions for regulators, including about what type of speech is and should be allowed on those services. I think we need to take a step back and define what we really want the law to accomplish in this area for the good of society, guided by EU values and fundamental rights.’