Risk in the Digital Services Act and AI Act: implications for media freedom, pluralism, and disinformation

By Rosa Maria Torraco

In recent years, the European Union (EU) has taken major steps to shape the future of digital regulation. At the heart of this regulatory change are two key pieces of legislation: the Artificial Intelligence Act (AIA) and the Digital Services Act Package. In the following analysis, I will examine how the Digital Services Act (DSA) and the AIA define and operationalise systemic risk, with an emphasis on the implications for media freedom, pluralism, and disinformation regulation.

Systemic risk and the DSA: platform accountability in the information ecosystem 

Under the DSA, Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), defined as services with more than 45 million monthly active users in the EU, must fulfil a range of enhanced responsibilities, including conducting systemic risk assessments. Unlike the AIA, the DSA does not expressly define “systemic risk”. Such risks are nevertheless generally understood as risks that pervasively affect society at large (Correia de Carvalho, 2025; Kaminski, 2023).

Furthermore, Art. 34(1) DSA categorises systemic risks into four areas: 

  1. the dissemination of illegal content; 
  2. adverse effects on fundamental rights, including freedom of expression and media freedom; 
  3. impacts on civic discourse, electoral integrity and public security;  
  4. risks related to gender-based violence, public health and well-being, and the protection of minors.

Additionally, the DSA requires VLOP and VLOSE providers to assess how the design features of their core services, such as algorithmic recommender and content moderation systems, contribute to those risks (Art. 34(2)). Recital 79 is in line with this perspective, highlighting that platform design, especially when optimised for advertising-driven business models, can amplify societal harms. Recital 84 further expands on this by calling attention not just to illegal content, but also to legal content that has the potential to be harmful.

Going a step beyond, Article 35 DSA requires platforms to implement reasonable, proportionate, and effective measures to mitigate the identified risks. These measures must also take into account their implications for users’ fundamental rights, reflecting a shift towards an active and preventive digital governance paradigm. They form part of a broader regulatory structure that includes independent yearly audits (Art. 37) and the designation of compliance officers tasked with overseeing the daily management of systemic risks (Art. 41). Together, these provisions demonstrate that the EU is moving towards a risk-based regulatory strategy that connects platform responsibility with system design and internal governance arrangements (European Data Protection Supervisor, 2021; Eder, 2023). Indeed, the DSA is no longer just about content, but also about the infrastructure enabling its dissemination. From algorithmic curation to personalised ads, the DSA recognises that these opaque systems shape what we see, what we think, and how our democracies function (Zanfir-Fortuna & Rovilos, 2023).
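
To make the structure of this assessment-and-mitigation cycle concrete, the sketch below is a minimal, purely illustrative representation (in Python, which the legislation of course does not prescribe) of how a provider might internally record Art. 34 findings and Art. 35 mitigations. The category labels paraphrase Art. 34(1), the design factors follow Art. 34(2), and the example platform and mitigation are hypothetical; this is not an official taxonomy or a compliance tool.

```python
# Illustrative only: a toy representation of the DSA's annual risk-assessment
# cycle for a designated VLOP/VLOSE. Category labels paraphrase Art. 34(1)(a)-(d);
# design factors follow Art. 34(2); the mitigation example follows Art. 35(1).
from dataclasses import dataclass, field

SYSTEMIC_RISK_CATEGORIES = {
    "a": "dissemination of illegal content",
    "b": "negative effects on fundamental rights (incl. media freedom and pluralism)",
    "c": "effects on civic discourse, electoral processes and public security",
    "d": "gender-based violence, public health, minors, physical and mental well-being",
}

DESIGN_FACTORS = [  # Art. 34(2): systems whose influence on the risks must be assessed
    "recommender systems",
    "content moderation systems",
    "advertising systems",
    "data-related practices",
]

@dataclass
class RiskAssessment:
    service: str
    findings: dict[str, str] = field(default_factory=dict)   # category -> assessed impact
    mitigations: list[str] = field(default_factory=list)     # Art. 35(1) measures

    def add_finding(self, category: str, impact: str) -> None:
        if category not in SYSTEMIC_RISK_CATEGORIES:
            raise ValueError(f"Unknown Art. 34(1) category: {category}")
        self.findings[category] = impact

# Hypothetical example: a platform records a media-pluralism finding and a mitigation.
assessment = RiskAssessment(service="ExamplePlatform")
assessment.add_finding("b", "recommender system narrows the diversity of news sources shown")
assessment.mitigations.append("test and adapt the recommender system (Art. 35(1)(d))")
print(assessment)
```

The point of the sketch is simply that the DSA ties each identified risk category to concrete design factors and to documented mitigation measures, which can then be reviewed in the independent audits required by Art. 37.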

Media pluralism, media freedom, and disinformation in the systemic risk framework of the DSA

The DSA places media freedom, pluralism, and disinformation within the broader framework of systemic risk management. Art. 34(1)(b) requires VLOPs and VLOSEs to include the actual or potential impact of their activities on media freedom and pluralism in their annual systemic risk assessments. Recital 84 provides interpretive guidance for this requirement by highlighting the duty of platforms to consider how specific features, such as algorithmic amplification and curation, may affect the visibility of content, restrict access to diverse perspectives, or otherwise constrain information diversity.

At the same time, Art. 34(1)(c) addresses disinformation-related systemic risks, focusing on its potential consequences for electoral integrity and public security. Once again, interpretative guidance can be found in the preamble. Indeed, while disinformation does not constitute illegal content per se, Recital 70 underscores its threat to democratic institutions and public order. 

To mitigate the risks identified under Arts. 34(1)(b) and (c) DSA, Art. 35 outlines a range of possible interventions, including increased transparency in digital advertising and enhanced collaboration with independent researchers and trusted flaggers. Among the most important tools in this context is the development and application of Codes of Conduct (Arts. 35(1)(h) and 45). These codes promote cooperation between platforms, authorities, and civil society actors. An example is the Code of Practice on Disinformation, which was previously a voluntary initiative but has recently been institutionalised under the DSA as a Code of Conduct (Brogi & De Gregorio, 2024).

General-Purpose AI and systemic risk: regulatory approaches under the AI Act

The AIA is not intended as a comprehensive framework for platform governance and does not expressly regulate disinformation or media pluralism. While the protection of fundamental rights is a priority of the AIA, it tackles these questions via a product safety–oriented approach, focusing on the classification, oversight, and regulation of AI systems based on their level of risk (Almada & Petit, 2023). Consequently, its application extends across multiple sectors rather than being explicitly directed at online platforms.

Nevertheless, the AIA explicitly prohibits AI practices considered dangerous for democratic integrity and fundamental rights, such as subliminal manipulation, the exploitation of vulnerable populations, and public-sector social scoring (Lázaro Cabrera, 2024). Although not targeted at media ecosystems or content moderation specifically, these provisions align with the AIA’s general objective of safeguarding social cohesion and public trust from disruptive AI applications.

A key area of overlap between the AIA and the DSA is set out in Recital 118, which acknowledges that AI regulation and platform governance will intersect when AI systems are integrated into VLOPs or VLOSEs. In such cases, compliance with the risk management requirements in the DSA is assumed to satisfy the AIA’s standards, unless additional risks are present that fall outside the scope of the DSA. It is hence necessary to understand what the AIA considers to be systemic risk, and why additional obligations may be imposed.

Unlike the DSA, which does not explicitly define systemic risk, the AIA offers a definition in Article 3(65). Systemic risk refers to “a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”.

In contrast to the DSA, where systemic risk requirements are laid down uniformly for all designated VLOPs and VLOSEs, the AIA designates as posing systemic risk only those general-purpose AI models that meet specific conditions. Art. 51 sets out the conditions under which a general-purpose AI model is considered to pose systemic risk: either it has demonstrated high-impact capabilities, assessed on the basis of technical tools, indicators and benchmarks (Art. 51(1)(a)), or it is formally designated by decision of the European Commission, possibly following a qualified alert from the scientific panel (Art. 51(1)(b)). Under Art. 55 AIA, providers of general-purpose AI models posing systemic risk are subject to further compliance obligations beyond those in Arts. 53 and 54, which apply to all general-purpose AI models. These include reporting mechanisms for serious incidents, risk assessment and mitigation measures, and robust cybersecurity protocols.
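
As a rough illustration of that designation logic, the sketch below condenses Art. 51 into a single function. The 10^25 FLOP figure reflects the training-compute presumption in Art. 51(2), which is not discussed above; the real procedure also involves provider notifications, rebuttals, and Commission decisions that a simple boolean check cannot capture.

```python
# Simplified sketch of the Art. 51 AIA classification logic for general-purpose AI
# models. The 10**25 FLOP presumption comes from Art. 51(2); actual designation also
# involves Commission decisions, qualified alerts from the scientific panel, and
# provider rebuttals, none of which are modelled here.
HIGH_IMPACT_COMPUTE_THRESHOLD_FLOP = 10**25  # Art. 51(2) presumption

def poses_systemic_risk(training_compute_flop: float,
                        has_high_impact_capabilities: bool = False,
                        designated_by_commission: bool = False) -> bool:
    """Return True if a general-purpose AI model would be treated as posing systemic risk."""
    presumed_high_impact = training_compute_flop > HIGH_IMPACT_COMPUTE_THRESHOLD_FLOP
    return (
        has_high_impact_capabilities     # Art. 51(1)(a): assessed via tools and benchmarks
        or presumed_high_impact          # Art. 51(2): compute-based presumption
        or designated_by_commission      # Art. 51(1)(b): Commission decision
    )

# A model trained with ~3e25 FLOP would be presumed to have high-impact capabilities
# and would therefore face the additional Art. 55 obligations.
print(poses_systemic_risk(training_compute_flop=3e25))  # True
```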

Providing an additional layer of support for alignment between the AIA and the DSA, Recital 136 specifically recognises AI-generated disinformation as a particular and rising threat to electoral processes and civic discourse. To that end, the AIA imposes transparency requirements on providers and deployers of certain AI systems, including general-purpose AI systems, regardless of whether they trigger the threshold of systemic risk. Under Art. 50(2), providers of AI systems that generate synthetic content such as text or images must ensure that such outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. These content transparency requirements are essential in facilitating the DSA’s overall enforcement goals, particularly where VLOPs and VLOSEs are involved, by fostering accountability and countering the risks posed by AI-driven disinformation and content manipulation.
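
To illustrate what a machine-readable marking could look like in practice, the sketch below attaches provenance metadata to a generated text as a JSON “sidecar”. The AIA does not mandate a particular technique (its preamble mentions watermarks, metadata and similar methods), so the field names and the marking scheme used here are hypothetical examples, not an established standard or an endorsed compliance method.

```python
# Illustrative sketch of one possible way a provider might mark synthetic outputs
# in a machine-readable format, as required by Art. 50(2) AIA. The JSON sidecar
# format, field names and "marking_scheme" value are hypothetical placeholders.
import json
from datetime import datetime, timezone

def mark_as_synthetic(content: str, model_name: str) -> str:
    """Wrap generated content with machine-readable provenance metadata."""
    record = {
        "content": content,
        "provenance": {
            "artificially_generated": True,          # detectable as AI-generated
            "generator": model_name,                 # hypothetical model identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "marking_scheme": "example-sidecar-v0",  # placeholder, not a real standard
        },
    }
    return json.dumps(record, ensure_ascii=False)

marked = mark_as_synthetic("Example AI-written news summary.", model_name="example-gpai-model")
print(marked)
```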

Risk-based approach under the AIA

Beyond the rules on general-purpose AI models discussed above, the AIA categorises AI systems into four levels of risk: minimal, limited, high, and unacceptable. Recitals 26 and 27 explain that AIA obligations should be proportionate to the risk posed, so each level comes with corresponding regulatory requirements.

Figure 1. Retrieved from: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Minimal-risk systems are not subject to obligations but may follow voluntary codes of conduct. In the media environment, this category covers the majority of social media recommender systems, although the European Parliament had initially proposed classifying them as high-risk (European Commission, 2023). Limited-risk systems, by contrast, must meet transparency requirements under Art. 50 AIA. These include the obligation to inform users when they are interacting with an AI system, a measure that is particularly relevant in the field of media and digital communication. For instance, AI-generated news or deepfakes must be clearly labelled in order to avoid misleading recipients.

High-risk AI systems, referred to in Article 6 and Annex III AIA, encompass applications in biometric identification, education, employment, and law enforcement, among others. Importantly, this category also includes certain AI systems used for content moderation (Gosztonyi et al., 2025) and AI systems intended to influence elections or referenda, or to affect individual voting behaviour. Finally, under Article 5, the AIA prohibits certain AI practices, including the deployment of subliminal techniques and the exploitation of users’ vulnerabilities (O’Grady, 2025). This provision deserves particular attention in the context of online platforms, as it targets not only Large Language Models (LLMs) but potentially also recommender systems, whose design can influence users’ behaviour in subtle and potentially harmful ways. In particular, when LLMs are used as search engines, as illustrated by the recent integration of Gemini into Google’s search engine, they raise questions not only about transparency but also about the line between retrieving information and deploying subliminal techniques. This risk of subliminal manipulation is especially relevant in contexts that can affect democratic processes, such as elections.
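
As a quick recap of this tiered structure, the sketch below maps each tier to its headline consequences as summarised above. It is a simplification for orientation only; actual classification turns on Art. 5, Art. 6 read together with Annex III, and Art. 50, and on how specific use cases are assessed.

```python
# Rough, non-exhaustive mapping of the AIA's four risk tiers to their headline
# regulatory consequences. Real classification is a legal assessment, not a lookup.
RISK_TIERS = {
    "unacceptable": "prohibited (Art. 5): e.g. subliminal manipulation, exploitation of vulnerabilities, social scoring",
    "high": "strict requirements (Art. 6, Annex III): risk management, data governance, human oversight, conformity assessment",
    "limited": "transparency duties (Art. 50): e.g. disclose AI interaction, label deepfakes and AI-generated content",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligations for a given AIA risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown AIA risk tier: {tier}") from None

for tier in ("unacceptable", "high", "limited", "minimal"):
    print(f"{tier}: {obligations_for(tier)}")
```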

All these AIA provisions contribute to the DSA’s objective of restricting platform designs and technologies that compromise the users’ ability to make autonomous and informed decisions (Correia de Carvalho, 2025). 

Final remarks on media pluralism and freedom of expression: shared commitments, uneven convergence

In sum, both the DSA and the AIA reflect the EU’s commitment to safeguarding media pluralism and freedom of expression in the digital age. Nevertheless, their regulatory regimes only partially converge (Loi et al., 2025). The DSA enshrines media pluralism as a primary concern, treating platforms as infrastructural actors in the distribution of information. The AIA, on the other hand, regulates AI at the level of design and deployment, identifying systemic risks as one dimension of harm associated with general-purpose AI models. However, the AIA does not restrict itself to systemic risk: it also sets out a broader risk-based framework that governs high-risk AI applications, as well as outright prohibitions on manipulative or exploitative AI practices.

The frameworks, therefore, converge around the concept of systemic risk but diverge in how they define and operationalise it. The DSA leaves the term undefined yet explicitly refers to harms to media diversity and public discourse, making systemic risk a tool for interrogating platform functionality. The AIA defines systemic risk in precise technical terms but does not explicitly apply the concept to the media environment. Its obligations apply upstream to developers and providers and remain largely detached from the downstream effects of content dissemination.

What emerges is a multi-layered but fragmented governance regime. The DSA addresses the platforms that organise public discussion, while the AIA regulates the underlying technologies that produce content. Still, the AIA will have to be carefully enforced in order to address media-specific vulnerabilities. Otherwise, the combination of the AIA and DSA will be largely theoretical and therefore insufficient to account for the complex, systemic character of information harms that occur in AI-enabled environments. 

To bridge these gaps, future Codes of Practice can play a key role in clarifying the interaction between the two regulatory frameworks. Furthermore, the Commission could reconsider the current risk classification, for instance by designating recommender systems as high-risk under the AIA, consequently exposing them to stricter regulatory requirements.

Finally, further research could delve deeper into the interplay between the AIA risk classification, particularly high-risk systems, and general-purpose AI models posing systemic risk. 

Notes

[1] According to Art. 3(63) AIA, general-purpose AI model “means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”. Furthermore, Art. 3(66) AIA defines a general-purpose AI system as “an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”.

References

Almada, M., & Petit, N. (2023). The EU AI Act: A medley of product safety and fundamental rights? Robert Schuman Centre for Advanced Studies Research Paper No. 2023/59. 

Brogi, E., & De Gregorio, G. (2024). From the code of practice to the code of conduct? Navigating the future challenges of disinformation regulation. Journal of Media Law, 16(1), 38–46. 

Correia de Carvalho, M. (2025). It will be what we want it to be: Sociotechnical and contested systemic risk at the core of the EU’s regulation of platforms’ AI systems. Journal of Intellectual Property, Information Technology and E-Commerce Law, 16(1).  

Eder, N. (2023). Making systemic risk assessments work: How the DSA creates a virtuous loop to address the societal harms of content moderation. Forthcoming in German Law Journal. SSRN.  

European Commission. (n.d.). Digital principles. Shaping Europe’s digital future. Retrieved April 22, 2025.

European Commission. (2025, April 22). Commission seeks input to clarify rules for general-purpose AI models. European Commission. 

European Commission. (2023, December 8). Commission welcomes political agreement on Artificial Intelligence Act [Press release]. 

European Data Protection Supervisor. (2021, February 10). Opinion 1/2021 on the Proposal for a Digital Services Act.

European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (Artificial Intelligence Act). Official Journal of the European Union, L 202, 1–144. 

European Union. (2022). Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act). Official Journal of the European Union, L 277, 1–102. 

Future of Life Institute. (2024, August 1). Implementation timeline. EU Artificial Intelligence Act. 

Gorwa, R., & Garton Ash, T. (2023). Systemic risks in the Digital Services Act: Challenges and opportunities. Centre on Regulation in Europe (CERRE). 

Gosztonyi, G., Gyetván, D., & Kovács, A. (2025). Theory and practice of social media’s content moderation by artificial intelligence in light of European Union’s AI Act and Digital Services Act. European Journal of Law and Political Sciences, 4(1). 

Kaminski, M. E. (2023). The developing law of AI: A turn to risk regulation. The Digital Social Contract: A Lawfare Paper Series. University of Colorado Law Legal Studies Research Paper No. 24-5. 

Lázaro Cabrera, L. (2024). EU AI Act brief – Pt. 3, Freedom of expression. Center for Democracy & Technology.

Loi, M., Ferrario, A., & Fabbri, M. (2025). Regulating the undefined: Addressing systemic risks in the Digital Services Act (with an appendix on the AI Act). SSRN. https://doi.org/10.2139/ssrn.5116070

O’Grady, C. (2025, April 30). ‘Unethical’ AI research on Reddit under fire. Science.

Zanfir-Fortuna, G., & Rovilos, V. (2023). EU’s Digital Services Act just became applicable: Outlining ten key areas of interplay with the GDPR. Future of Privacy Forum.