Systemic risk assessments in the EU and legal discovery in the US are yielding new insights into how platform design and recommender systems can contribute to risk and mitigation, with a particular focus on risks and harms to minors.
KGI’s latest paper, “Measuring Risk: What EU Risk Assessments and US Litigation Reveal About Meta and TikTok,” examines what can be learned by reading these two bodies of emerging evidence side by side. The paper identifies critical gaps between how Meta and TikTok publicly communicate about risk and the actual, evidence-based steps they take to mitigate it.
Regulatory processes in the EU and legal frameworks in the US differ notably in scope and approach, but they converge on a key concern: potential risks to minors. Risk assessments and investigations under the EU’s Digital Services Act (DSA), as well as complaints in US courts, have identified overlapping concerns related to compulsive use and addiction-like behaviors, sleep deprivation, self-harm, eating disorders, and other mental and physical health impacts on minors. Each regime produces distinct evidence and disclosures, creating opportunities for cross-jurisdictional learning.
KGI’s paper finds significant gaps between the risk mitigations that Meta and TikTok describe in their DSA risk assessments and the actual effectiveness of these measures as revealed through internal documents in US litigation. While the DSA obligates platforms to identify and mitigate systemic risks in Europe, the first two years of risk assessments rely heavily on high-level company descriptions of policies, tools, and user controls. The assessments provide extremely limited insight into whether any of these interventions meaningfully reduce harm, particularly for minors. By contrast, US litigation is surfacing previously unreleased internal platform data, experiments, and deliberations that reveal how platforms internally measure risk and define acceptable trade-offs among risk, engagement, and revenue. But US litigation is largely reactive and limited to the facts of each specific case.
For example, internal company data released in US litigation shows that key safety mitigations – including screentime management tools, take-a-break reminders, and parental controls – suffer from extremely low adoption rates, often below 2% of minor users. Internal documents also suggest that the design of these features may undermine their effectiveness: TikTok leadership initially imposed “guardrail” metrics requiring that new screentime tools reduce usage by no more than 5%, while Meta’s internal projections accurately predicted that 99% of teens would not use opt-in take-a-break features.
The evidence emerging from DSA systemic risk assessments and US platform litigation underscores a central gap in current approaches to platform governance: risks are increasingly well-described, but mitigations are rarely communicated using rigorous, outcome-oriented data and evidence.
Addressing this gap will require aligning expectations for platforms with rigorous research and evaluation. Systemic risk assessments in the EU should move beyond descriptive inventories of mitigations toward transparent, metrics-driven statements of risk and mitigation effectiveness. While the insights generated through litigation are still emerging and incomplete, they point to the types of data, methods, and benchmarks that should inform more credible, forward-looking platform governance.
Read the full paper here.