What is the Digital Services Act (DSA)?
The Digital Services Act is the most ambitious regulation in the world for protecting the digital space against the spread of illegal content and safeguarding users’ fundamental rights. No other legislative act regulates social media, online marketplaces, very large online platforms (VLOPs) and very large online search engines (VLOSEs) with this level of ambition. The rules are designed asymmetrically: larger intermediary services with significant societal impact (VLOPs and VLOSEs) are subject to stricter rules.
Under the Digital Services Act, platforms not only have to be more transparent, but are also held accountable for their role in disseminating illegal and harmful content.
Amongst other things, the DSA:
1. Lays down special obligations for online marketplaces in order to combat the online sale of illegal products and services;
2. Introduces measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights;
3. Protects minors online by prohibiting platforms from using targeted advertising based on the use of minors’ personal data as defined in EU law;
4. Imposes certain limits on the presentation of advertising and on the use of sensitive personal data for targeted advertising, including gender, race and religion;
5. Bans misleading interfaces known as ‘dark patterns’, and other practices designed to mislead users.
Stricter rules apply for very large online platforms and search engines (VLOPs and VLOSEs), which will have to:
1. Offer users a system for recommending content that is not based on profiling;
2. Analyse the systemic risks they create: risks related to the dissemination of illegal content, and negative effects on fundamental rights, electoral processes, gender-based violence, and mental health.
In the context of the Russian military invasion of Ukraine, involving grave and widespread violations of the human rights of the Ukrainian people, and its particular impact on the manipulation of online information, the Digital Services Act introduces a crisis response mechanism. This mechanism will make it possible to analyse the impact of the activities of VLOPs and VLOSEs on the crisis, and rapidly decide on proportionate and effective measures to ensure the respect of fundamental rights.
Timeline and consequences of the EU–US clash over the Digital Services Act (DSA)
On Tuesday 23 December 2025, the United States Department of State announced the imposition of visa restrictions on five European individuals whom it accused of engaging in actions that allegedly undermined freedom of expression and targeted U.S.-based digital platforms. According to the U.S. authorities, the measures were adopted under existing immigration powers allowing the denial of entry to foreign nationals whose conduct is deemed contrary to U.S. foreign policy interests. The State Department stated that the affected individuals had been involved in activities described as efforts to pressure or coerce American technology companies into suppressing lawful speech.
This is part of a broader policy position of the U.S. administration, which framed the actions as a response to what it characterised as an expanding pattern of foreign regulatory interference in the operation of U.S.-based digital services. The U.S. government asserted that certain European regulatory initiatives, including those linked to the European Union’s Digital Services Act, had the practical effect of restricting lawful expression and imposing extraterritorial constraints on American companies.
Among the Europeans barred by the U.S. State Department from entering the United States is Thierry Breton, former European Commissioner for the Internal Market, responsible during his term for supervising EU digital regulation, including the Digital Services Act. He has also been a prominent figure in public discussions about platform regulation.
The imposition of visa restrictions on European officials should not be viewed as an isolated measure. It must be viewed through the lens of U.S. policy, as clearly explained in the State Department’s announcement of visa restrictions targeting foreign nationals who censor Americans:
"Free speech is among the most cherished rights we enjoy as Americans. This right, legally enshrined in our constitution, has set us apart as a beacon of freedom around the world. Even as we take action to reject censorship at home, we see troubling instances of foreign governments and foreign officials picking up the slack. In some instances, foreign officials have taken flagrant censorship actions against U.S. tech companies and U.S. citizens and residents when they have no authority to do so.
Today, I am announcing a new visa restriction policy that will apply to foreign nationals who are responsible for censorship of protected expression in the United States. It is unacceptable for foreign officials to issue or threaten arrest warrants on U.S. citizens or U.S. residents for social media posts on American platforms while physically present on U.S. soil. It is similarly unacceptable for foreign officials to demand that American tech platforms adopt global content moderation policies or engage in censorship activity that reaches beyond their authority and into the United States. We will not tolerate encroachments upon American sovereignty, especially when such encroachments undermine the exercise of our fundamental right to free speech.
This visa restriction policy is pursuant to Section 212(a)(3)(C) of the Immigration and Nationality Act, which authorizes the Secretary of State to render inadmissible any alien whose entry into the United States “would have potentially serious adverse foreign policy consequences for the United States.” Certain family members may also be covered by these restrictions."
On Wednesday, 24 December 2025, the European Commission responded. This took the form of an official statement, reacting to the U.S. decision announced the previous day. In that statement, the European Commission expressed serious concern about the U.S. action and warned that it could take appropriate measures in response if necessary. The Commission characterised the U.S. move as unjustified, and stated that it was assessing the implications carefully.
The European Commission reaffirmed that the individuals targeted by the U.S. visa restrictions had acted within the scope of their professional and institutional responsibilities, and in accordance with European law. It rejected the assertion that their actions amounted to censorship, emphasising instead that the EU’s digital regulatory framework, including the Digital Services Act, is grounded in democratically adopted legislation and aims to ensure transparency, accountability, and the protection of fundamental rights online.
Commission officials underlined that freedom of expression is a core value of the European Union, and stated that EU digital legislation does not authorise political censorship or discrimination against lawful speech. They stressed that the regulation of online platforms falls within the sovereign competence of the European Union and that external pressure or unilateral measures against European officials were unacceptable.
No immediate retaliatory measures were announced.
The statement marked the formal indication that the dispute had evolved beyond a political disagreement into a broader diplomatic escalation.
But how did we get there?
Phase 1 (2022–2024). The Digital Services Act was formally adopted on 19 October 2022. It was published in the Official Journal of the European Union on 27 October 2022.
On 25 August 2023 the DSA began applying to entities designated as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), following their formal designation by the European Commission in April 2023.
On 17 February 2024 the majority of the DSA’s provisions became applicable across the EU.
Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) became subject to mandatory risk assessments (Article 34), mitigation measures (Article 35), algorithmic transparency, and independent audits.
For the first time, the EU exercised extraterritorial regulatory power over U.S.-based platforms through digital governance. This was a structural shift.
Phase 2 (2024). The European Commission started enforcing the act, including requests for information to designated platforms, formal scrutiny of systemic risk assessments, and preparatory steps toward investigations under Articles 66–72 DSA.
During the second quarter of 2024, the Commission publicly confirmed that it had initiated formal proceedings against several major online platforms, and required detailed documentation on content moderation systems, algorithmic recommender systems, and advertising transparency. The Commission also began assessments of compliance with Articles 34 and 35 (systemic risk and mitigation measures).
From mid-2024 onward, the Commission’s enforcement expanded into ongoing audits and compliance dialogues, preparation for potential imposition of interim measures, and preparation for administrative fines and periodic penalty payments where non-compliance is established.
Phase 3 (Late 2024–2025). The regulatory disagreements escalated into geopolitical confrontation.
The U.S. administration accused EU regulators of censoring American speech, and targeting U.S. companies. Visa bans were imposed on EU figures involved in DSA enforcement. Public statements framed the DSA as a threat to free expression.
This marked the first time digital regulation triggered diplomatic retaliation.
Strategic consequences (opinion, legal intelligence).
As regulatory systems diverge, platforms will be forced to deliver different content depending on the user’s location. In practice, this means:
1. EU users will see content filtered, labelled, ranked, or restricted in accordance with EU law (DSA obligations, risk mitigation, systemic harm controls).
2. U.S. users will see a broader range of speech protected under the First Amendment, with fewer legally mandated removals.
3. Platforms will apply geo-based governance (not just geo-blocking): different moderation rules, transparency notices, recommender logic, and enforcement thresholds depending on jurisdiction.
What is lawful and visible in one region may be constrained in another, on the same platform.
This results in fragmented digital experiences, where the internet is jurisdiction specific. The global internet is moving to parallel regulatory realities, shaped by sovereignty and geopolitical differences, not technology.
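The geo-based governance described above can be sketched as a jurisdiction-keyed policy lookup: the same platform resolves different moderation settings per region. This is purely illustrative; the policy fields, values, and fallback rule are hypothetical simplifications of the regulatory differences discussed, not any platform’s actual configuration.

```python
# Hypothetical sketch of geo-based governance: one platform, different
# moderation policy per jurisdiction. All names and values are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModerationPolicy:
    profiling_recommender_default: bool  # EU users must be offered a non-profiling option
    systemic_risk_labels: bool           # e.g. labelling tied to EU risk-mitigation duties
    removal_baseline: str                # what mandates removal: illegal content vs platform rules

POLICIES = {
    "EU": ModerationPolicy(profiling_recommender_default=False,
                           systemic_risk_labels=True,
                           removal_baseline="illegal-content"),
    "US": ModerationPolicy(profiling_recommender_default=True,
                           systemic_risk_labels=False,
                           removal_baseline="platform-rules-only"),
}

def policy_for(jurisdiction: str) -> ModerationPolicy:
    # Design choice (not a legal requirement): default unknown regions
    # to the stricter EU profile.
    return POLICIES.get(jurisdiction, POLICIES["EU"])
```

The lookup makes the fragmentation concrete: identical content can be governed by different removal baselines depending solely on where the request originates.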
Previously collaborative areas, including cybersecurity, misinformation response, and platform governance, are now politicised. This increases compliance costs, legal uncertainty, and enforcement unpredictability. It is important to understand that:
1. Cybersecurity. For years, cybersecurity cooperation was based on the assumption of shared threat perception. Governments, CERTs, and private companies exchanged threat intelligence to counter ransomware, botnets, and state-sponsored attacks. This cooperation depended on trust and technical neutrality.
Today, that trust is eroding. Cybersecurity measures are increasingly framed through a national security and sovereignty lens. Information sharing could be constrained by political considerations, and technical findings can be reinterpreted as strategic accusations.
2. Misinformation and disinformation. Efforts to counter disinformation were originally justified as protecting democratic processes and public safety. Over time, they have become entangled with debates over political bias, censorship, and state influence. As a result, measures once seen as neutral risk mitigation are now seen as ideological enforcement.
3. Platform governance. Platform governance once revolved around technical standards, content moderation protocols, and due diligence processes. Today, it increasingly signals political alignment.
A platform’s compliance approach may now be interpreted as alignment with European regulatory sovereignty, or with the U.S. free-speech approach.
When regulatory cooperation erodes, coordination mechanisms weaken. Shared early warning systems, joint enforcement initiatives, and trust based information exchanges become harder to sustain. This increases fragmentation, reduces resilience against malicious actors, and raises systemic risk across the digital ecosystem.
What is emerging is not simply regulatory divergence, but a competition of governance models. Cybersecurity, content moderation, and platform accountability are no longer neutral technical fields; they have become arenas in which broader geopolitical values are contested.
Platform regulation vs. First Amendment.
The U.S. State Department’s decision to impose visa restrictions on European individuals is extraordinary for several reasons:
a. It is targeted, not symbolic.
b. It frames regulatory conduct as a free speech challenge, not a trade dispute.
c. It explicitly links EU platform regulation with alleged suppression of lawful speech on U.S. based platforms.
The EU’s emerging digital fairness agenda (including the Digital Fairness Act) is described in consumer protection and market regulation terms, targeting dark patterns and addictive design, not speech.
The U.S. political and legal narrative treats any state-driven pressure that changes platform curation, ranking, reach, or removal decisions as regulation of speech, implicating the First Amendment.

EU position: Democratic legitimacy allows regulation of platforms to protect citizens, elections, and societal cohesion.
U.S. position: Any state action that pressures platforms to remove or deprioritize lawful speech undermines constitutional principles.
Why was the U.S. reaction routed through visa restrictions under foreign policy powers, rather than through trade bodies or WTO mechanisms?
The U.S. sees the DSA, the DMA, and the forthcoming Digital Fairness Act (DFA) not as ordinary regulation, but as strategic interference.
From a U.S. constitutional and strategic perspective, the EU digital regulation:
a. Shapes speech outcomes on U.S. based platforms.
b. Imposes normative European policy on global platforms.
c. Creates extraterritorial compliance pressure via fines and access restrictions.
d. Blurs the line between content moderation and state sponsored narrative control.
This explains why the U.S. reaction was not routed through trade bodies or WTO mechanisms.
The use of individual sanctions indicates a move toward personal deterrence, and a message to regulators worldwide that they may face personal consequences. This is highly unusual in transatlantic relations, and strongly suggests the U.S. sees certain regulatory actions as equivalent to state-backed coercive influence operations.
In Moody v. NetChoice, LLC (U.S. Supreme Court, 2024), the Supreme Court made one point unmistakably clear: Content moderation, ranking, and curation can constitute expressive activity protected by the First Amendment.
It is clear that the U.S. Supreme Court has constitutionalized the idea that algorithmic curation and moderation are expressive acts protected by the First Amendment. It means that when the EU regulates digital platforms, U.S. actors can credibly argue that such rules compel or restrict speech, even if the EU frames them as consumer protection or competition law.
This is why the DSA, DMA, and the Digital Fairness Act, though not about speech in the European legal taxonomy, can be reframed in the U.S. as speech regulation.
From a geopolitical and regulatory standpoint, the U.S. constitutional frame treats platform governance as a civil liberties matter. Moody v. NetChoice strengthens the U.S. position that regulatory pressure on platform design implicates fundamental rights.
This doctrinal divergence explains why U.S. officials reacted so sharply to EU actions, why individual regulators may now be targeted politically, but also why future disputes will likely escalate beyond trade or data protection forums.
DISCLAIMER: The analysis presented here is provided for informational and educational purposes only. It does not express support for, or opposition to, any government, regulatory authority, political position, or policy approach. The objective is to assist risk, compliance, legal, and governance professionals in understanding evolving regulatory, legal, and geopolitical developments that may affect their professional responsibilities.
This content is intended to facilitate informed decision making by highlighting structural trends, regulatory interactions, and potential areas of operational impact. It does not constitute legal advice, policy advocacy, or an endorsement of any particular regulatory framework or political position. The perspectives discussed reflect an analytical assessment of publicly available information, and should be interpreted in the context of risk awareness, compliance preparedness, and strategic foresight only.
Important notes:
1. On 25 August 2023, the Digital Services Act came into effect for very large online platforms and very large online search engines.
It became fully applicable to other entities on 17 February 2024.
2. The Digital Services Act package includes:
a. The Digital Services Act,
b. The Digital Markets Act.
Both legislative acts were adopted by the Council and the European Parliament in 2022; the final texts are available at the "Links" page.
The Digital Markets Act (DMA) affects “gatekeeper platforms” like Google, Amazon and Meta, and covers the need for user consent before processing personal data for targeted advertising. It is interesting that most of the companies that are affected by the Digital Markets Act and the Digital Services Act are based in the United States of America.
If you believe that the sanctions for GDPR violations are very strict (up to 4% of global annual turnover), you will be surprised by the sanctions for Digital Services Act violations (up to 6% of global annual turnover), and the sanctions for Digital Markets Act violations (up to 10% of global annual turnover, or up to 20% in case of repeat offence).
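The gap between these ceilings is easiest to see with a worked example. The percentages below are the statutory maxima cited above; the turnover figure is an invented illustration, not any real company’s accounts.

```python
# Maximum-fine ceilings under GDPR, DSA, and DMA, applied to a hypothetical
# company with EUR 100 billion in global annual turnover (figure invented).

turnover_eur = 100_000_000_000  # hypothetical global annual turnover

ceilings_pct = {
    "GDPR": 4,
    "DSA": 6,
    "DMA": 10,
    "DMA (repeat offence)": 20,
}

# Integer arithmetic avoids floating-point rounding on large amounts.
max_fines_eur = {regime: turnover_eur * pct // 100
                 for regime, pct in ceilings_pct.items()}

for regime, fine in max_fines_eur.items():
    print(f"{regime}: up to EUR {fine:,}")
```

For this hypothetical turnover, the DMA repeat-offence ceiling (EUR 20 billion) is five times the GDPR ceiling (EUR 4 billion).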
2 July 2025 - Commission adopts delegated act on data access under the Digital Services Act.
The delegated act on data access under the Digital Services Act enables researchers to obtain unprecedented access to platforms’ internal data to contribute to a safer online world.
On 2 July 2025, the Commission published a delegated act outlining rules granting access to data for qualified researchers under the Digital Services Act (DSA). This delegated act enables access to the internal data of very large online platforms (VLOPs) and very large online search engines (VLOSEs) to research systemic risks and mitigation measures in the European Union.
The delegated act on data access clarifies the procedures for VLOPs and VLOSEs to share data with vetted researchers, including data formats and requirements for data documentation. Moreover, the delegated act sets out which information Digital Services Coordinators (DSCs), VLOPs and VLOSEs must make public to facilitate vetted researchers' applications to access relevant datasets.
With the adoption of the delegated act, the Commission will launch the DSA data access portal where researchers interested in accessing data under the new mechanism can find information and exchange with VLOPs, VLOSEs and DSCs on their data access applications.

20 February 2025 - New best-practice election toolkit on the Digital Services Act, from the European Commission.
The new elections toolkit provides practical details on how the Digital Services Act (DSA) Election Guidelines can be applied during electoral processes. Aimed at national regulators – known as Digital Services Coordinators – the toolkit provides advice and guidance on how the Guidelines can be implemented in practice.
The DSA elections toolkit summarises the best approaches and practices that national regulators have pioneered over the last year to mitigate risks on Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) during elections. The toolkit is a set of best practices that help Member States’ regulators in their work with VLOPs and VLOSEs to address risks like hate speech, online harassment, and manipulation of public opinion, including those involving artificial intelligence-generated content and impersonation.
The toolkit offers recommended practices and suggestions in four key areas:
- stakeholder management,
- communication and media literacy,
- incident response, and
- monitoring and analysis of election-related risks.
The DSA Election Toolkit builds on the Election Guidelines for VLOPs and VLOSEs published in March 2024, as well as on the experience gained in the implementation of the Code of Practice on Disinformation and the DSA election integrity readiness dialogues that the Commission has held with public authorities, VLOPs, VLOSEs and other stakeholders since September 2023.
13 February 2025 - The Commission endorses the integration of the voluntary Code of Practice on Disinformation into the Digital Services Act.
The Commission and the European Board for Digital Services have endorsed the integration of the voluntary Code of Practice on Disinformation into the framework of the Digital Services Act (DSA). This integration will make the Code a benchmark for determining platforms’ compliance with the DSA.
In January 2025, the signatories of the Code – including companies designated under the DSA as Very Large Online Platforms and Search Engines (VLOPs and VLOSEs), such as Google, Meta, Microsoft and TikTok – submitted all the necessary documents supporting their request for its conversion into a Code of Conduct under the DSA.
To be recognised as a DSA Voluntary Code of Conduct, the Code needs to fulfil the criteria set out in the Digital Services Act. The Commission and the Board adopted separate positive assessments in this regard, endorsing the official integration of the Code into the DSA framework.
With its integration, full adherence to the Code may be considered as an appropriate risk mitigation measure for signatories designated as VLOPs and VLOSEs under the DSA. As such, the Code will become a significant and meaningful benchmark for determining DSA compliance. Compliance with the commitments under the Code will also be part of the annual independent audit, which these platforms are subject to under the DSA.

20 January 2025 - The Commission welcomes the integration of the revised Code of conduct on countering illegal hate speech online into the Digital Services Act.
The Commission and the European Board for Digital Services welcome the integration of the revised ‘Code of conduct on countering illegal hate speech online +' into the framework of the Digital Services Act (DSA), which encourages voluntary codes of conduct to tackle risks online.
The Code of conduct+, which builds on the initial 2016 Code of conduct on countering illegal hate speech online, was signed by Dailymotion, Facebook, Instagram, Jeuxvideo.com, LinkedIn, Microsoft hosted consumer services, Snapchat, Rakuten Viber, TikTok, Twitch, X and YouTube.
The Code of conduct+ will strengthen the way online platforms deal with content that EU and national laws define as illegal hate speech. The integrated Code of conduct will facilitate compliance with and the effective enforcement of the DSA when it comes to risks of dissemination of illegal content on their services.
Following this integration, online platforms designated under the DSA can adhere to the Code of conduct+ to demonstrate their compliance with the DSA obligation to mitigate the risk of the dissemination of illegal content on their services. Compliance with the Code of conduct+ commitments will be part of the annual independent audit which these platforms are subject to under the DSA, and which contributes to reinforcing the platforms’ transparency and accountability.
20 January 2025 - The Code of conduct on countering illegal hate speech online +, integrated into the Digital Services Act.
https://digital-strategy.ec.europa.eu/en/library/code-conduct-countering-illegal-hate-speech-online
4 November 2024 - Implementing Regulation standardising the format, content, and reporting periods for transparency reports under the Digital Services Act (DSA).
The Regulation establishes uniform reporting templates and periods. Providers will have to start collecting data according to the Implementing Regulation as of 1 July 2025, with the first harmonised reports due at the beginning of 2026. The reporting periods for providers of VLOPs and VLOSEs will now be aligned, depending on their dates of designation.
To ensure consistency between the transparency tools of the DSA, the requirements for submitting statements of reasons to the DSA Transparency Database will be updated to be aligned with the data categories in the Implementing Regulation. Providers will have to submit statements of reasons according to the new requirements starting from 1 July 2025, the same date as for the transparency reporting templates.
26 March 2024 - Commission publishes guidelines under the DSA for the mitigation of systemic risks online for elections.
The European Commission has published guidelines on recommended measures to Very Large Online Platforms and Search Engines to mitigate systemic risks online that may impact the integrity of elections, with specific guidance for the upcoming European Parliament elections in June.
Under the Digital Services Act (DSA), designated services with more than 45 million active users in the EU have the obligation to mitigate the risks related to electoral processes, while safeguarding fundamental rights, including the right to freedom of expression.
These guidelines recommend mitigation measures and best practices to be undertaken by Very Large Online Platforms and Search Engines before, during, and after electoral events, such as to:
1. Reinforce their internal processes, including by setting up internal teams with adequate resources, using available analysis and information on local context-specific risks and on the use of their services by users to search and obtain information before, during and after elections, to improve their mitigation measures.
2. Implement elections-specific risk mitigation measures tailored to each individual electoral period and local context. Among the mitigation measures included in the guidelines, Very Large Online Platforms and Search Engines should promote official information on electoral processes, implement media literacy initiatives, and adapt their recommender systems to empower users and reduce the monetisation and virality of content that threatens the integrity of electoral processes. Moreover, political advertising should be clearly labelled as such, in anticipation of the new regulation on the transparency and targeting of political advertising.
3. Adopt specific mitigation measures linked to generative AI: Very Large Online Platforms and Search Engines whose services could be used to create and/or disseminate generative AI content should assess and mitigate specific risks linked to AI, for example by clearly labelling content generated by AI (such as deepfakes), adapting their terms and conditions accordingly and enforcing them adequately.
4. Cooperate with EU level and national authorities, independent experts, and civil society organisations to foster an efficient exchange of information before, during and after the election and facilitate the use of adequate mitigation measures, including in the areas of Foreign Information Manipulation and Interference (FIMI), disinformation and cybersecurity. Adopt specific measures, including an incident response mechanism, during an electoral period to reduce the impact of incidents that could have a significant effect on the election outcome or turnout.
5. Assess the effectiveness of the measures through post-election reviews. Very Large Online Platforms and Search Engines should publish a non-confidential version of such post-election review documents, providing opportunity for public feedback on the risk mitigation measures put in place.
The guidelines include specific measures ahead of the upcoming European elections. Given their unique cross-border and European dimension, Very Large Online Platforms and Search Engines should ensure that sufficient resources and risk mitigation measures are available and distributed in a way that is proportionate to the risk assessments. The guidelines also encourage close cooperation with the European Digital Media Observatory (EDMO) Task Force on the 2024 European elections.
Next Steps
The specific mitigation measures that a Very Large Online Platform or Search Engine should take depend on the specificities of their service and on their risk profile. The guidelines represent best practices for mitigating risks related to electoral processes at this moment in time.
As such, Very Large Online Platforms and Search Engines which do not follow these guidelines must prove to the Commission that the measures undertaken are equally effective in mitigating the risks. Should the Commission receive information casting doubt on the suitability of such measures, it can request further information or start formal proceedings under the Digital Services Act.
As an additional element of readiness, the Commission plans a stress test with relevant stakeholders at the end of April to exercise the most effective use of the instruments and the cooperative mechanisms that have been put in place.
26 September 2023 - The European Commission has launched the DSA Transparency Database.
Under the DSA, all providers of hosting services are required to provide users with clear and specific information, so-called statements of reasons, whenever they remove or restrict access to certain content.
Article 17 DSA requires providers of hosting services to provide affected recipients of the service with clear and specific reasons for restrictions on content that is allegedly illegal or incompatible with the provider’s terms and conditions. In other words, providers of hosting services need to inform their users of the content moderation decisions they take and explain the reasons behind those decisions. A statement of reasons is an important tool to empower users to understand and potentially challenge content moderation decisions taken by providers of hosting services.
The new database will collect these statements of reasons in accordance with Article 24(5) of the DSA. This makes this database a first-of-its-kind regulatory repository, where data on content moderation decisions taken by providers of online platforms active in the EU are accessible to the general public at an unprecedented scale and granularity, enabling more online accountability.
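As an illustration, the information a statement of reasons must convey can be modelled as a simple record. The field names below are invented for this sketch, but each field mirrors an element that Article 17(3) DSA requires; it is not the actual schema of the DSA Transparency Database.

```python
# Illustrative sketch only: field names are hypothetical, but each captures an
# element Article 17(3) DSA requires a statement of reasons to contain.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StatementOfReasons:
    decision_type: str                        # e.g. removal, visibility restriction
    facts_and_circumstances: str              # what triggered the decision
    automated_detection_used: bool            # automated means used to detect the content?
    automated_decision_used: bool             # was the decision itself automated?
    legal_ground: Optional[str] = None        # the law relied on, if content is deemed illegal
    contractual_ground: Optional[str] = None  # the terms-and-conditions clause, otherwise
    redress_options: List[str] = field(default_factory=lambda: [
        "internal complaint handling",
        "out-of-court dispute settlement",
        "judicial redress",
    ])

    def has_stated_ground(self) -> bool:
        # A statement must cite at least one ground, legal or contractual.
        return bool(self.legal_ground or self.contractual_ground)
```

The distinction between a legal and a contractual ground matters in practice: it tells the user whether the provider acted because the content was allegedly illegal, or merely incompatible with its own terms.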
Currently, only Very Large Online Platforms (VLOPs) need to submit data to the database as part of their compliance with the DSA. From 17 February 2024, all providers of online platforms, with the exception of micro and small enterprises, will have to submit data on their content moderation decisions.
Thanks to the Transparency Database users can view summary statistics (currently in beta version), search for specific statements of reasons, and download data. The Commission will add new analytics and visualisation features in the coming months and in the meantime welcomes any feedback on its current configuration.
The source code of the database is publicly available. Together with the Code of Practice on Disinformation, as well as further transparency enhancing measures under the DSA, the new database allows all users to act in a more informed manner on the spread of illegal and harmful content online.
25 April 2023 - The European Commission adopted the first designation decisions under the Digital Services Act (DSA).
The European Commission designated 17 Very Large Online Platforms (VLOPs) and 2 Very Large Online Search Engines (VLOSEs) that reach at least 45 million monthly active users.
Very Large Online Platforms:
- Alibaba AliExpress
- Amazon Store
- Apple AppStore
- Booking.com
- Facebook
- Google Play
- Google Maps
- Google Shopping
- Instagram
- LinkedIn
- Pinterest
- Snapchat
- TikTok
- Twitter
- Wikipedia
- YouTube
- Zalando
Very Large Online Search Engines:
- Bing
- Google Search
Following their designation, the companies will now have to comply, within four months, with the full set of new obligations under the DSA. These aim at empowering and protecting users online, including minors, by requiring the designated services to assess and mitigate their systemic risks and to provide robust content moderation tools.
This includes:
More user empowerment:
- Users will get clear information on why they are recommended certain information and will have the right to opt-out from recommendation systems based on profiling;
- Users will be able to report illegal content easily and platforms have to process such reports diligently;
- Advertisements cannot be displayed based on the sensitive data of the user (such as ethnic origin, political opinions or sexual orientation);
- Platforms need to label all ads and inform users on who is promoting them;
- Platforms need to provide an easily understandable, plain-language summary of their terms and conditions, in the languages of the Member States where they operate.
Strong protection of minors:
- Platforms will have to redesign their systems to ensure a high level of privacy, security, and safety of minors;
- Targeted advertising based on profiling towards children is no longer permitted;
- Special risk assessments, including for negative effects on mental health, will have to be provided to the Commission four months after designation and made public at the latest a year later;
- Platforms will have to redesign their services, including their interfaces, recommender systems, terms and conditions, to mitigate these risks.
More diligent content moderation, less disinformation:
- Platforms and search engines need to take measures to address risks linked to the dissemination of illegal content online and to negative effects on freedom of expression and information;
- Platforms need to have clear terms and conditions and enforce them diligently and non-arbitrarily;
- Platforms need to have a mechanism for users to flag illegal content and act upon notifications expeditiously;
- Platforms need to analyse their specific risks, and put in place mitigation measures – for instance, to address the spread of disinformation and inauthentic use of their service.
More transparency and accountability:
- Platforms need to ensure that their risk assessments and their compliance with all the DSA obligations are externally and independently audited;
- They will have to give access to publicly available data to researchers; later on, a special mechanism for vetted researchers will be established;
- They will need to publish repositories of all the ads served on their interface;
- Platforms need to publish transparency reports on content moderation decisions and risk management.
Within four months of notification of the designation decisions, the designated platforms and search engines need to adapt their systems, resources, and processes for compliance, set up an independent compliance function, and carry out and report to the Commission their first annual risk assessment.
Risk assessment
Platforms will have to identify, analyse and mitigate a wide array of systemic risks, ranging from how illegal content and disinformation can be amplified on their services to the impact on freedom of expression and media freedom. Similarly, specific risks around gender-based violence online and the protection of minors online and their mental health must be assessed and mitigated. The risk mitigation plans of designated platforms and search engines will be subject to an independent audit and oversight by the Commission.
19 October 2022 - We have the final text of Regulation (EU) 2022/2065 on a Single Market For Digital Services (Digital Services Act).
The Digital Services Act Regulation applies to providers of intermediary services, and in particular intermediary services consisting of services known as ‘mere conduit’, ‘caching’ and ‘hosting’ services, given that the exponential growth of the use made of those services, mainly for legitimate and socially beneficial purposes of all kinds, has also increased their role in the intermediation and spread of unlawful or otherwise harmful information and activities.
In practice, certain providers of intermediary services intermediate in relation to services that may or may not be provided by electronic means, such as remote information technology services, transport, accommodation or delivery services. This Regulation applies only to intermediary services and does not affect requirements set out in Union or national law relating to products or services intermediated through intermediary services, including in situations where the intermediary service constitutes an integral part of another service which is not an intermediary service, as recognised in the case-law of the Court of Justice of the European Union.
In order to ensure the effectiveness of the rules laid down in this Regulation and a level playing field within the internal market, those rules should apply to providers of intermediary services irrespective of their place of establishment or their location, in so far as they offer services in the Union, as evidenced by a substantial connection to the Union.
Such a substantial connection to the Union should be considered to exist where the service provider has an establishment in the Union or, in the absence of such an establishment, where the number of recipients of the service in one or more Member States is significant in relation to the population thereof, or on the basis of the targeting of activities towards one or more Member States.
The targeting of activities towards one or more Member States can be determined on the basis of all relevant circumstances, including factors such as the use of a language or a currency generally used in that Member State, or the possibility of ordering products or services, or the use of a relevant top-level domain.
The targeting of activities towards a Member State could also be derived from the availability of an application in the relevant national application store, from the provision of local advertising or advertising in a language used in that Member State, or from the handling of customer relations such as by providing customer service in a language generally used in that Member State.
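The "substantial connection" test described above combines three alternative criteria: establishment in the Union, a significant number of recipients in one or more Member States, or targeting of activities towards a Member State. The sketch below encodes that checklist for illustration only; the class and its fields are hypothetical simplifications of the factors the recitals name, not a legal test.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderProfile:
    """Hypothetical profile mirroring factors named in the DSA recitals."""
    established_in_union: bool = False
    significant_recipients_in_member_state: bool = False
    # Illustrative targeting signals: language, currency, national TLD,
    # presence in a national app store, local advertising, local support.
    targeting_indicators: list = field(default_factory=list)

def has_substantial_connection(p: ProviderProfile) -> bool:
    """Each criterion suffices on its own under the recitals' logic."""
    return (
        p.established_in_union
        or p.significant_recipients_in_member_state
        or bool(p.targeting_indicators)
    )
```

For example, a provider with no Union establishment but which prices in euro and runs advertising in a Member State's language would satisfy the targeting limb (`has_substantial_connection(ProviderProfile(targeting_indicators=["eur_currency", "local_ads"]))` is `True`).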
This Regulation fully harmonises the rules applicable to intermediary services in the internal market with the objective of ensuring a safe, predictable and trusted online environment, addressing the dissemination of illegal content online and the societal risks that the dissemination of disinformation or other content may generate, and within which fundamental rights enshrined in the Charter are effectively protected and innovation is facilitated.
Accordingly, Member States should not adopt or maintain additional national requirements relating to the matters falling within the scope of this Regulation, unless explicitly provided for in this Regulation, since this would affect the direct and uniform application of the fully harmonised rules applicable to providers of intermediary services in accordance with the objectives of this Regulation.
In order to achieve the objective of ensuring a safe, predictable and trustworthy online environment, for the purpose of this Regulation the concept of ‘illegal content’ should broadly reflect the existing rules in the offline environment. In particular, the concept of ‘illegal content’ should be defined broadly to cover information relating to illegal content, products, services and activities.
In particular, that concept should be understood to refer to information, irrespective of its form, that under the applicable law is either itself illegal, such as illegal hate speech or terrorist content and unlawful discriminatory content, or that the applicable rules render illegal in view of the fact that it relates to illegal activities.
Illustrative examples include the sharing of images depicting child sexual abuse, the unlawful non-consensual sharing of private images, online stalking, the sale of non-compliant or counterfeit products, the sale of products or the provision of services in infringement of consumer protection law, the non-authorised use of copyright protected material, the illegal offer of accommodation services or the illegal sale of live animals.
In contrast, an eyewitness video of a potential crime should not be considered to constitute illegal content, merely because it depicts an illegal act, where recording or disseminating such a video to the public is not illegal under national or Union law. In this regard, it is immaterial whether the illegality of the information or activity results from Union law or from national law that is in compliance with Union law and what the precise nature or subject matter is of the law in question.
The concept of ‘dissemination to the public’, as used in this Regulation, should entail the making available of information to a potentially unlimited number of persons, meaning making the information easily accessible to recipients of the service in general without further action by the recipient of the service providing the information being required, irrespective of whether those persons actually access the information in question.
Accordingly, where access to information requires registration or admittance to a group of recipients of the service, that information should be considered to be disseminated to the public only where recipients of the service seeking to access the information are automatically registered or admitted without a human decision or selection of whom to grant access.
Interpersonal communication services, such as emails or private messaging services, fall outside the scope of the definition of online platforms as they are used for interpersonal communication between a finite number of persons determined by the sender of the communication.
However, the obligations set out in this Regulation for providers of online platforms may apply to services that allow the making available of information to a potentially unlimited number of recipients, not determined by the sender of the communication, such as through public groups or open channels. Information should be considered disseminated to the public within the meaning of this Regulation only where that dissemination occurs upon the direct request by the recipient of the service that provided the information.
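The "dissemination to the public" concept in the preceding paragraphs reduces to a small decision rule: sender-limited audiences (email, private messaging) are out of scope, and gated content counts as public only when admission is automatic rather than a human decision. A minimal sketch of that reading, with hypothetical parameter names chosen for this illustration:

```python
def disseminated_to_public(requires_admission: bool,
                           admission_is_automatic: bool,
                           audience_limited_by_sender: bool) -> bool:
    """Illustrative reading of the DSA's 'dissemination to the public' test."""
    if audience_limited_by_sender:
        # Interpersonal communication between a finite number of persons
        # determined by the sender, e.g. email or private messages.
        return False
    if requires_admission:
        # Registration or group admission counts as public only when
        # recipients are admitted automatically, without human selection.
        return admission_is_automatic
    # Easily accessible to a potentially unlimited number of persons.
    return True
```

On this reading, an open channel is public, a group with automatic join is public, a group whose admin hand-picks members is not, and a private message never is.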
Intermediary services span a wide range of economic activities which take place online and that develop continually to provide for transmission of information that is swift, safe and secure, and to ensure convenience of all participants of the online ecosystem.
For example, ‘mere conduit’ intermediary services include generic categories of services, such as internet exchange points, wireless access points, virtual private networks, DNS services and resolvers, top-level domain name registries, registrars, certificate authorities that issue digital certificates, voice over IP and other interpersonal communication services, while generic examples of ‘caching’ intermediary services include the sole provision of content delivery networks, reverse proxies or content adaptation proxies. Such services are crucial to ensure the smooth and efficient transmission of information delivered on the internet.
Examples of ‘hosting services’ include categories of services such as cloud computing, web hosting, paid referencing services or services enabling sharing information and content online, including file storage and sharing.
Intermediary services may be provided in isolation, as a part of another type of intermediary service, or simultaneously with other intermediary services. Whether a specific service constitutes a ‘mere conduit’, ‘caching’ or ‘hosting’ service depends solely on its technical functionalities, which might evolve in time, and should be assessed on a case-by-case basis.
Providers of intermediary services should also be required to designate a single point of contact for recipients of services, enabling rapid, direct and efficient communication in particular by easily accessible means such as telephone numbers, email addresses, electronic contact forms, chatbots or instant messaging. It should be explicitly indicated when a recipient of the service communicates with chatbots. Providers of intermediary services should allow recipients of services to choose means of direct and efficient communication which do not solely rely on automated tools. Providers of intermediary services should make all reasonable efforts to guarantee that sufficient human and financial resources are allocated to ensure that this communication is performed in a timely and efficient manner.
Providers of intermediary services that are established in a third country and that offer services in the Union should designate a sufficiently mandated legal representative in the Union and provide information relating to their legal representatives to the relevant authorities and make it publicly available.
In order to comply with that obligation, such providers of intermediary services should ensure that the designated legal representative has the necessary powers and resources to cooperate with the relevant authorities.
This could be the case, for example, where a provider of intermediary services appoints a subsidiary undertaking of the same group as the provider, or its parent undertaking, if that subsidiary or parent undertaking is established in the Union. However, it might not be the case, for instance, when the legal representative is subject to reconstruction proceedings, bankruptcy, or personal or corporate insolvency. That obligation should allow for the effective oversight and, where necessary, enforcement of this Regulation in relation to those providers.
It should be possible for a legal representative to be mandated, in accordance with national law, by more than one provider of intermediary services. It should be possible for the legal representative to also function as a point of contact, provided the relevant requirements of this Regulation are complied with.
4 October 2022 - The Council approved the Digital Services Act (DSA).
Following the Council’s approval, and as the European Parliament had already given its approval, the legislative act was adopted.
Next step: After being signed by the President of the European Parliament and the President of the Council, it will be published in the Official Journal of the European Union, and will start to apply fifteen months after its entry into force.
5 July 2022 - Text of the European Parliament legislative resolution on the proposal for a regulation on a Single Market For Digital Services (Digital Services Act).
This Regulation fully harmonises the rules applicable to intermediary services in the internal market with the objective of ensuring a safe, predictable and trusted online environment, addressing the dissemination of illegal content online and the societal risks that the dissemination of disinformation or other content may generate, where fundamental rights enshrined in the Charter are effectively protected and innovation is facilitated.
Accordingly, Member States should not adopt or maintain additional national requirements on the matters falling within the scope of this Regulation, unless explicitly provided for in this Regulation, since this would affect the direct and uniform application of the fully harmonised rules applicable to providers of intermediary services in accordance with the objectives of this Regulation.
This does not preclude the possibility of applying other national legislation to providers of intermediary services, in compliance with Union law, where the provisions of national law pursue legitimate public interest objectives other than those pursued by this Regulation.

This website is developed and maintained by Cyber Risk GmbH as part of its professional activities in the fields of risk management and regulatory compliance.
Cyber Risk GmbH specializes in supporting organizations in understanding, navigating, and implementing complex European, U.S., and international risk related regulatory frameworks.
Content is produced and maintained under the professional responsibility of George Lekatis, General Manager of Cyber Risk GmbH, a well-known expert in risk management and compliance. He also serves as General Manager of Compliance LLC, a company incorporated in Wilmington, NC, with offices in Washington, DC, providing risk and compliance training in 58 countries.