Fundación Maldita.es has contributed to the first ever annual report on the most prominent and recurrent systemic risks posed by very large online platforms and search engines. The document must be produced every year by the European Board for Digital Services, which comprises the digital regulators of every member state, in collaboration with the European Commission, as established by the Digital Services Act (DSA).
The report focuses on identifying not only the gravest risks stemming from the design and functioning of those online services, but also the best practices available to mitigate them, since the DSA establishes that very large online platforms and search engines must correctly assess systemic risks and put in place “reasonable, proportionate and effective mitigation measures” specifically tailored to them.
In our contribution, which you can read in full below, we outline how dis/misinformation can be particularly harmful to Europeans’ security and well-being in a range of situations, including natural disasters and other emergencies, as well as elections. We also argue that medical and scientific health-related dis/misinformation creates profound and immediate individual and social harm, while often being amplified and monetized by platforms.
On illegal content, we warn about frequent violations of Spanish laws, including those protecting fundamental rights online and the advertising regulations that prohibit scams. At Fundación Maldita.es, we have extensively documented how illegal content is easy to find and seldom removed, even after being specifically flagged to the platform.
Mitigating risk in platforms and search engines
Regarding risk mitigation measures by platforms and search engines, our contribution explains why context-adding interventions such as fact-checking labels or information panels are particularly effective at empowering users when they encounter misinformation, more so than content removal. We also emphasize that Community Notes could be a useful tool, but that its current implementation on platforms such as X makes it impossible for it to be effective against misinformation.
Finally, our contribution identifies several risk factors, including X’s blue checks, misinformation deployed through platform ads, and automated moderation systems. For each of these categories we offer supporting evidence as well as case studies by Fundación Maldita.es and others. Our full contribution is available below.