How is your DSA today?
The European Commission sprays RFIs

Trends in enforcement; testing out-of-court settlements; shadowbanning ban

It’s been about six months since the Digital Services Act (DSA) — Europe's sweeping online safety and content moderation law — came into full force, and while the European Commission (EC) has been busy kicking off investigations, the enforcement infrastructure has been slow to materialise. That’s partly because the DSA specifies quite A LOT of new institutions and mechanisms to be created.

Is it just me, or did most commentators gloss over this element while debating the initial drafts of the DSA?

Before we look at the status of that new bureaucracy and the challenges it will face, it’s worth mining the Commission’s direct actions so far on the largest platforms, the so-called VLOPs and VLOSEs (very large online platforms and very large online search engines), for trends in DSA enforcement priorities.

Many shots across the bow

By my count the EC has fired off 42 requests for information (RFIs) or investigation notices, including 7 each to Facebook and Instagram (!). Here’s a breakdown of the topics they covered (each request included several):

Amazon, for example, faces 3 requests covering the dissemination of illegal products, the transparency of its recommender system, data access for researchers, and compliance with the requirement to maintain a public ad repository.

Clearly the EC is going to be very busy wading through all those responses. Meanwhile, it has opened formal proceedings against 4 VLOPs:

  • Meta: first, on deceptive ads, the visibility of political content, and the lack of real-time third-party election-monitoring tools ahead of the EU elections; and second, on protection of minors, focusing on addictive design and age verification. Amazingly, in spite of the EC’s ongoing scrutiny and widespread opposition from academics, Meta still decided to shut down its CrowdTangle tool, widely used by researchers to study disinformation on the platform.

It feels like the EC is spreading itself pretty thin with such a wide-ranging set of investigations. Its approach suggests a focus on visible breaches first (e.g., incomplete notice & consent mechanisms, failure to provide data access), followed by findings on more complex issues like disinformation and insufficient protections for minors.

But where is the infrastructure?

The DSA mandates several new regulatory bodies and mechanisms. First, each EU country was meant to appoint a regulator — the Digital Services Coordinator (DSC) — by 24 Feb 2024. Twelve countries were late, and in late July the European Commission opened formal infringement proceedings against six of them. As of today, Belgium, Poland, and Slovakia still seem to be without a confirmed DSC.

Together, the DSCs make up the European Board for Digital Services.

The DSCs are needed both to enforce the DSA in relation to all but the largest platforms, and to appoint and certify entities that can act as out-of-court dispute settlement (ODS) bodies and as Trusted Flaggers.

ODS bodies allow users to appeal platforms’ content moderation decisions without going to court. The ODS then determines whether the decision complied with the platform’s own policy or not. Its decisions are not binding on the platforms, but they are obliged to engage with the ODS process in good faith.

Trusted Flaggers are certified experts in detecting certain types of illegal content, whose reports have to be prioritised by platforms for action.

Without these two novel mechanisms, the supposedly streamlined and scaled approach to policing content moderation decisions doesn’t work.

Ironically, while the Commission is chasing up the laggards among the countries, it seems to have forgotten to set up the central register of ODS bodies which it is supposed to operate (or at least I couldn’t find it). Admittedly, it wouldn’t be much of a register yet, as there seem to be only two entities recently certified — User Rights in Germany and ADROIT in Malta. And so far there are only four approved Trusted Flaggers (two in Austria, one each in Finland and Sweden) — and at least that list is available officially.

What could possibly go wrong?

The ODS mechanism is unproven and risky. It’s a trade-off between fairness and effectiveness. The platforms make millions of content moderation decisions every day, the (growing) majority of them via automated systems. The DSA requires users to be informed about why their content was removed or restricted, and gives them the right to appeal those decisions. This can’t be done at scale via the courts, hence the idea of ODS bodies to expedite settlement — kind of like arbitration, but not binding.

It’s a big innovation which could strengthen consumers’ ability to exercise their rights, while also reducing the burden on platforms (and the legal system) of having to litigate thousands of cases in the courts. But…

It’s a brave organisation that decides to become an ODS — I tip my hat. They have to be experts in both the domain (terrorism, CSAM, etc) and each platform’s policies. They need to be multi-lingual. They can’t charge consumers for appeals (except a nominal amount to deter spam), and their fees from platforms are fixed at several hundred Euros per claim. Oh, and they will end up right in the middle of the most difficult content moderation decisions, many of which have become highly politicised.

What will the ODS do if they are flooded with appeals? How will they triage out appeals filed just to gum up the system (which will happen), or to harass other users, or to make a political point about a platform’s policies? Will platforms be expected to address every single content moderation decision, no matter how inconsequential, or will some practical de minimis threshold (views? shares?) emerge? Will each ODS become known for being more pro-consumer or pro-platform, or more right- or left-wing, or more or less LGBTQ-friendly, creating a market for cynical jurisdiction-shopping?

Alice Hunsberger has done a great job laying out in practical terms how Article 21 — and its unintended consequences — might play out.

Not just content takedowns, stupid — the shadowbanning dilemma

My last note on the DSA involves a little-reported court case, brought by academic and privacy activist Danny Mekić in Amsterdam against X/Twitter, which shines a light on the risks of taking the DSA’s transparency requirement too far.

Mekić had posted a tweet critical of the EU’s proposed CSAM regulation. X’s moderation system flagged this (erroneously, as it soon admitted) and proceeded to restrict the visibility of Mekić’s account by delisting it from search. X initially failed to notify him or to explain its moderation decision, allegedly in breach of DSA Article 17.

This follows a period where many users have complained about ‘arbitrary shadowbanning’ on X. Shadowbanning is one of the most effective ways platforms deal with trolls, spam, nonsense content, etc, by downgrading their reach without explicitly banning the account. While it can certainly be abused for political purposes (and Elon’s X is not above suspicion there), it’s an important tool for reducing the spread of digital crap.

It’s easy to overlook, but the DSA’s definition of content moderation is very broad indeed, including “measures taken that affect the availability, visibility, and accessibility” of content. And Article 17 requires a clear and specific statement of reasons for moderation actions, including visibility restrictions. As Paddy Leerssen points out in his excellent analysis of the case, this applies not just to search delisting, but likely also to content demotion and removal from recommender engines.

In this case, X argued that search delisting did not breach its Terms and was not in fact a moderation action. Amsterdam’s District Court disagreed and ruled in Mekić’s favour, including a small damages award. The court’s interpretation is that restricting the visibility of an account is a moderation decision, and that X should have provided Mekić with a much more detailed notice about its reasons and about the nature of the restriction.

The ruling effectively tells platforms they have to give trolls a warning when they are being restricted and a roadmap to circumvent that restriction in future.

As if platforms did not already have enough on their plate with notifying users at scale about content removals, they now have to consider whether every choice made by the algorithm to show or not show, boost or demote a piece of content becomes subject to the DSA’s obligation to notify.

Imagine how much fun the trolls and political hacks will have hassling the platforms to explain every content management decision, and then weaponising it against them. No one paints a more vivid picture of the potential mess than Mike Masnick in this piece.

It seems the only sensible reaction from platforms will be to do less content moderation and invest less in content quality. Surely not what the law intended?


This article first appeared on my Substack. If you like it and would like timely delivery of future posts directly in your inbox, please consider subscribing.


[1] A key driver for the Commission’s RFIs is the need to understand existing measures platforms are taking to protect children. In parallel, it is drafting guidance for release in 2025, and currently running a Call for Evidence, which will be open until 30 Sept 2024.
