New Framework Charts Path for Effective Remedies in Social Media and AI Chatbot Litigation
A new framework from the Knight-Georgetown Institute, Tech Justice Law, and the USC Marshall Neely Center provides a practical, evidence-based guide to help litigators, courts, and policymakers craft effective, enforceable remedies for harms linked to social media and generative AI chatbot companies as lawsuits expand across the United States.
Washington, D.C. – Technology accountability is at a pivotal moment in the United States. Thousands of lawsuits against social media and generative AI chatbot companies are proceeding into discovery and trial for the first time. A new framework released today explains how improving online safety and driving lasting changes in company conduct will depend largely on the effectiveness of the remedies emerging from litigation.
Designing Technology Remedies: Lessons for Social Media and Generative AI Chatbot Litigation offers a practical, evidence-based framework to help litigators, courts, and policymakers identify and implement effective and enforceable remedies responsive to specific social media and AI chatbot harms – including allegations of addiction and compulsive use, unwanted contact, privacy violations, and related harms reflected in hundreds of ongoing lawsuits.
Developed by the Knight-Georgetown Institute (KGI), Tech Justice Law (TJL), and the USC Marshall Neely Center for Ethical Leadership and Decision Making (USC Neely Center), the framework is grounded in a systematic review of nearly 100 prior remedies, including Federal Trade Commission (FTC) consent decrees, public health litigation, civil rights settlements, and technology-related cases. The work was further informed by stakeholder interviews and multidisciplinary convenings involving the offices of state attorneys general, plaintiffs’ attorneys, technologists, researchers, and legal scholars.
Peter Chapman, Associate Director of the Knight-Georgetown Institute, said:
“US courts are emerging as central actors in shaping technology governance. Their decisions – especially at the remedy stage – will have significant consequences for how social media and AI products are designed, governed, and held accountable in the years ahead. Lessons from tobacco, pharmaceutical, and e-cigarette litigation show that durable changes in company conduct typically require a combination of monetary damages and court-ordered changes to company behavior to protect consumers, underscoring the need for remedies that carefully address product design, accountability, transparency, and rigorous oversight.”
Meetali Jain, Executive Director of Tech Justice Law, said:
“We’re finally reaching a point in these cases where we’re moving past historic hurdles and into the remedies stage, creating an opportunity to think more thoughtfully about durable solutions to technology-facilitated problems. Whether it’s social media addiction or AI chatbots facilitating dangerous outcomes like suicidal ideation or delusions, many of these harms trace back to similar design features and incentives. Effective remedies, therefore, cannot stop at consumer-facing interventions alone – they must also address how these products are designed and governed, including moving away from design approaches centered on engagement maximization or sycophancy.”
Ravi Iyer, Managing Director of the USC Marshall Neely Center, said:
“For too long, technology companies have become wildly profitable by getting people to use their products more – whether they are making consumers’ lives better or not. Well-designed remedies will require companies to compete on the value they provide to consumers, and should therefore be welcomed by families and long-sighted technologists alike.”
The Designing Technology Remedies framework organizes remedies into three complementary categories:
- Harm Prevention remedies to change how companies design, develop, and deploy their products.
- Harm Mitigation remedies to give users tools to report, avoid, and respond to harmful experiences.
- Governance remedies to change how companies make decisions, exercise leadership oversight, enable independent scrutiny, and enforce compliance.
The framework emphasizes that effective remedies will generally combine all three categories. Prevention, mitigation, and governance are mutually reinforcing, and no single category is sufficient on its own to address complex technology harms.
The framework is intentionally flexible rather than prescriptive. Not every remedy will apply in every case, and appropriate approaches will depend on the specific harms, the defendant’s product, and the stage of litigation. At the same time, the framework highlights commonly proposed remedies that may underperform, including standalone employee training requirements and limited consent-based approaches to data use.
As litigation continues to move through US courts, the remedies that emerge are expected to influence not only individual cases, but also broader regulatory and policy approaches to platform governance in the US and globally. These cases represent a critical opportunity to translate available research and evidence into durable, enforceable change that creates a safer online environment.
See the full framework here.
Media contact: Julie Anne Miranda-Brobeck, Communications Director at the Knight-Georgetown Institute, jm3885@georgetown.edu