Social media and generative AI chatbot companies are facing an expanding wave of litigation in federal and state courts across the United States, with hundreds of lawsuits involving allegations of addiction and compulsive use, unwanted contact, privacy violations, and other harms associated with these platforms.
While courts previously dismissed many claims against technology companies under Section 230 of the Communications Decency Act or the First Amendment, they are increasingly allowing these cases to proceed to discovery and trial. Court decisions – particularly at the remedy phase – will impact the design, governance, and accountability of social media and AI chatbot companies.
While monetary damages will likely feature in social media and AI chatbot cases, damages alone are unlikely to improve user safety. Previous remedies in tobacco, opioid, e-cigarette, and civil rights litigation demonstrate that durable reform in company conduct typically requires combining injunctive relief with monetary damages.
Designing Technology Remedies: Lessons for Social Media and Generative AI Chatbot Litigation seeks to meet this pivotal moment in technology governance by offering a practical, evidence-based framework to help litigators, courts, and policymakers craft effective and enforceable remedies that address specific social media and AI chatbot harms.
Developed by the Knight-Georgetown Institute (KGI), Tech Justice Law (TJL), and the USC Marshall Neely Center for Ethical Leadership and Decision Making (USC Neely Center), the framework draws from a systematic review of nearly 100 prior remedies, including Federal Trade Commission (FTC) consent decrees, public health litigation, civil rights settlements, and technology-related cases. These findings were assessed through stakeholder interviews and a multidisciplinary workshop involving state attorneys general, plaintiffs’ attorneys, technologists, researchers, and legal scholars.
Designing Technology Remedies organizes recommended remedies into three complementary categories:
- Harm Prevention remedies change how companies design, develop, and deploy their products. Recommended remedies include prohibitions on unsafe design features; safer default settings for minors and other users; meaningful limits on data collection, retention, and use; restrictions on targeted advertising to minors; data deletion and disgorgement where data was unlawfully obtained; and age assurance safeguards in appropriate circumstances.
- Harm Mitigation remedies give users tools to report, avoid, and respond to harmful experiences. Recommended remedies include accessible and effective user and parental controls, measured against outcome-based metrics; effective account and data deletion; data portability; and user-reporting systems with concrete response timelines, escalation processes, and communication to the reporting user.
- Governance remedies change how companies make decisions, exercise leadership oversight, enable independent scrutiny, and enforce compliance. Recommended remedies include a senior compliance officer with cross-functional authority; alignment of organizational goals with remedy objectives; an independent external monitor with audit authority; internal and external measurement, including universal-holdout experiments and independently administered user surveys; and transparency mechanisms, including publicly accessible document repositories and safe-harbor protections for independent research.
Effective remedies will combine all three categories. Prevention, mitigation, and governance are mutually reinforcing; no single category is sufficient on its own for addressing harm.
The Designing Technology Remedies framework is intentionally flexible rather than prescriptive. Not every remedy will apply in every case. The appropriate approach will depend on the specific harms, the nature of the defendant’s product, and the procedural posture of the litigation. At the same time, the framework identifies several commonly used remedies that may underperform, including standalone employee training requirements or consent-based requirements for targeted data use or data sharing.
The cases now moving through US courts – and the remedies they may produce – will have implications far beyond the parties involved. They will catalyze future litigation, inform regulatory approaches to social media and AI companies, and help to shape technology policy in the US and globally. Litigation presents a critical opportunity to translate available research and evidence into durable, enforceable change.
Designing Technology Remedies seeks to meet this moment by providing practical, evidence-based tools for crafting remedies that place user safety and well-being at the center, while effectively addressing harms associated with social media and AI chatbot companies.
Read the full framework here.