
Introducing Modular Legislative Components to Help Spur Algorithmic Feeds that Put People First

A new legislative toolkit from the Knight-Georgetown Institute equips lawmakers with modular components to support the development of legislation that encourages algorithmic feeds that put people first.

Lawmakers across the country are actively advancing legislation related to algorithmic recommender system design. Nearly a hundred state and federal bills introduced since 2023 address algorithmic systems in the context of online youth safety, “shadow-banning,” “addictive” designs, and more.

Following the Knight-Georgetown Institute’s recent report, Better Feeds: Algorithms that Put People First, KGI has released a new toolkit with legislative components to help lawmakers craft bills that support algorithmic feeds designed for what users value. These modular legislative components are based on the report’s guidelines, developed by KGI’s expert working group of researchers, technologists, and policy leaders with deep expertise in algorithmic systems. 

Rather than constituting a stand-alone bill, these modular components offer a range of concepts that could be adopted separately or together, including definitions and stand-alone provisions. They may be of particular interest to state legislators working on broader online youth safety legislation, addictive feeds bills, and/or platform accountability.

The modular legislative components include: 

  • Definitions, including key terms like “engagement,” “user-provided data,” and more
  • Design transparency measures covering disclosure of key algorithm inputs and metrics
  • User choices and defaults to give users options that they value 
  • Heightened protections for covered minors 
  • Long-term assessment requirements to encourage covered platforms to design for users’ long-term interests

State laws seeking to address algorithmic harms have faced an array of court challenges. While there is no guarantee that legislation incorporating these modular components would withstand constitutional challenge, the toolkit was crafted in recognition of the potential for regulation to implicate speech rights under the First Amendment and platform liability immunity under Section 230.

The Problem: Algorithmic Harms and a False Choice

Every day, billions of people scroll through social media feeds, search results, and streaming recommendations that shape what they see, read, and watch. Algorithmic systems determine what to show each user, wielding enormous influence over our online experiences and, increasingly, our lives offline. While these algorithms have fueled some of the world’s most successful businesses, they have also sparked intense debate about risks to kids, adults, and society as a whole.

Chronological feeds and blanket bans on personalization are common go-to solutions, but they have important limitations and can reward spammer behavior. Better designs exist that align with users’ genuine long-term interests rather than exploiting their short-term impulses. Efforts to address the design of algorithms will continue to expand in 2025 and beyond, highlighting the importance of adopting fit-for-purpose policy approaches. KGI’s legislative toolkit of modular components aims to meet this need.

Resources

Modular Legislative Components

Better Feeds Report

Better Feeds US Policy Brief

Lawfare Article on Better Feeds
