Commentary

Fixing the Feeds: A Policy Roadmap for Algorithms That Put People First

As lawsuits mount and legislation aimed at stemming online harms proliferates, the battle over how algorithmic recommender systems should be designed is heating up. Yet common policy solutions that focus on mandating chronological feeds or limiting personalization fail to address the core issue: how to design recommender systems that align with users’ genuine long-term interests rather than exploiting their short-term impulses.

This commentary was originally published in Competition Policy International’s TechREG Chronicle, April 2025 Edition.

When teens lose sleep scrolling through endless feeds of content, and comment sections on social media fill with ever more outrage, the invisible design of algorithmic recommender systems is at work. As lawsuits mount and legislation aimed at stemming these harms proliferates, the battle over how these systems should be designed is heating up. Yet common policy solutions that focus on mandating chronological feeds or limiting personalization fail to address the core issue: how to design recommender systems that align with users’ genuine long-term interests rather than exploiting their short-term impulses.

A new report, Better Feeds: Algorithms That Put People First, authored by a distinguished group of experts convened by the Knight-Georgetown Institute (“KGI”), explores the research behind recommendation algorithms and proposes a more nuanced suite of guidelines that, if adopted, could transform the online experiences of youth and adult users alike.

In the United States, state and federal lawmakers have introduced more than 75 bills since 2023 targeting the design and operation of algorithms, more than a dozen of which have passed into law. Last year, both New York and California passed laws seeking to restrict children’s exposure to “addictive feeds.” This year, policymakers in Connecticut, Missouri, and Washington state have launched similar initiatives targeting algorithmic design. At the same time, many state Attorneys General are suing tech platforms for allegedly designing defective and harmful algorithms, including one lawsuit brought by 42 states against Meta over its design choices. Efforts to address the design of algorithms will continue to expand in 2025 and beyond, highlighting the importance of adopting fit-for-purpose policy approaches.

Algorithmic curation has become ubiquitous across social media, search, streaming services, e-commerce, gaming, and more. A single platform can be thought of as a collection of features, including social media feeds, ad displays, comment sections, account recommendations, notifications, video and audio autoplay selections, and many others. Many different recommender systems are deployed to power these features by surfacing the “items” (such as accounts or pieces of content) most likely to advance the platform’s goals. Some platforms optimize their recommender systems to maximize “engagement” – the chances that users will click on, like, share, or stream an item. Because of this design, when recommender systems structure a social media feed, choose a video to play, or select the next ad to show, the items deemed most likely to command attention from users are ranked on top.

The problem with this approach is that knowing a user has commented on or shared a post does not guarantee insight into their underlying desires or preferences. Rather, these behaviors may stem from other causes, such as a surge of negative emotion or impulsive scrolling that the user later regrets. This gap is key to understanding why users identify harms resulting from what they see in their feeds. In the last two decades, psychological research has resoundingly demonstrated that people’s choices do not always align with their underlying preferences. One reason is that the “self” making decisions is not unitary: in some contexts people act impulsively, while in others they act with more thought and deliberation.

In surfacing the items most likely to maximize engagement, these recommender systems can shape human behavior. The design of these systems encourages users to behave automatically and without regard to their deliberative preferences. Platforms are incentivized to design their products in this way because maximizing the chances that users will click, like, share, and view items aligns well with the business interests of companies monetized through advertising.

Adolescents may be more vulnerable than adults to risks associated with exposure to algorithmically curated media. While existing research indicates that the aggregate effect of social media on children’s and teens’ well-being is mixed, one area of clarity is that social media use displaces healthier activities like sleep. Because designing recommender systems to maximize engagement is a strategy by which companies extend the use of their products, engagement-based feeds likely contribute to this harm. Indeed, adolescents often report using social media late at night and losing track of time when doing so. This can undermine sleep by delaying its onset and worsening its quality, increasing psychological stimulation shortly before bed, and raising exposure to light emissions that disrupt circadian rhythms. Insufficient sleep in children and teens is known to influence various other health outcomes, including the likelihood of learning problems, depression, and suicidal ideation.

These and many other perceived harms have motivated policymakers to act, but when it comes to algorithms the approach has been binary. In both the United States and Europe, laws have sought to improve the design of recommender systems by incentivizing platforms to rank items chronologically or by restricting their ability to customize feeds for each individual. These efforts target recommender system design by placing blunt limits on how these systems operate. Understanding how recommender systems work is key to understanding why these two regulatory approaches are not well-suited to addressing algorithmic harms.

Typically, recommender systems are designed to interpret extensive signals of user behavior and use this information to predict which items from the universe of those available are most likely to induce engagement. Each item is then assigned a score based on the aggregate likelihood of causing various forms of engagement (e.g., clicks or reshares of a post) and ranked according to these scores. This ranking is then adjusted based on other relevant factors such as content variety. After this step, a user’s social media feed has been curated and the items a user sees at the top are those most likely to provoke engagement.
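To make this step concrete, the following minimal sketch (in Python) illustrates what engagement-based scoring and ranking can look like. The signal names, weights, and variety rule are hypothetical illustrations for this commentary, not any platform’s actual system.

    from dataclasses import dataclass

    @dataclass
    class Item:
        item_id: str
        topic: str
        # Predicted probabilities from upstream behavioral models (hypothetical signals)
        p_click: float
        p_like: float
        p_reshare: float

    # Hypothetical weights; a real system would tune these against its own metrics.
    ENGAGEMENT_WEIGHTS = {"p_click": 1.0, "p_like": 2.0, "p_reshare": 4.0}

    def engagement_score(item: Item) -> float:
        """Aggregate predicted engagement signals into a single ranking score."""
        return (ENGAGEMENT_WEIGHTS["p_click"] * item.p_click
                + ENGAGEMENT_WEIGHTS["p_like"] * item.p_like
                + ENGAGEMENT_WEIGHTS["p_reshare"] * item.p_reshare)

    def rank_feed(candidates: list[Item], max_per_topic: int = 2) -> list[Item]:
        """Rank by predicted engagement, then cap items per topic as a crude variety adjustment."""
        ranked = sorted(candidates, key=engagement_score, reverse=True)
        feed, topic_counts = [], {}
        for item in ranked:
            if topic_counts.get(item.topic, 0) < max_per_topic:
                feed.append(item)
                topic_counts[item.topic] = topic_counts.get(item.topic, 0) + 1
        return feed

The items at the top of the returned feed are simply those with the highest predicted engagement, which is precisely the design choice at issue.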

The two common strategies employed by policymakers – limiting personalization and mandating chronological feeds – substantially alter this process. Their appeal is easy to understand: both are simple to explain and already common in many digital media (such as messaging and email).

But research casts doubt on their effectiveness at mitigating harms. Implementing chronological feeds can shift the mix of recommended items in unexpected ways: these feeds may increase users’ exposure to abuse, amplify the prevalence of political and untrustworthy items, and create a recency bias that rewards “spammy” posting behavior. In the same vein, limiting personalization can be counterproductive because tailored content recommendations tend to enhance user satisfaction and lower barriers to finding high-quality items. Personalization, when carefully crafted, can be used to curate feeds for users in ways that further values other than engagement.

The alternative approach described in Better Feeds promotes recommender system designs that support long-term user value. The idea is to orient recommender system design around outcomes that align with the deliberate, forward-looking preferences and aspirations of users. For example, recommender systems can support long-term user value by asking users to explicitly state their preferences or by relying on surveys or on indicators of item quality selected by users. When recommender systems fail to further long-term user value in these ways, meaningful numbers of users may regret their experiences on a platform or report a loss of self-control. These outcomes reflect the fact that optimizing for engagement does not typically promote long-term user value. Design approaches that focus on long-term user value can address harms related to recommender systems while preserving the benefits that thoughtful use of engagement data and personalization can offer.
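Continuing the earlier sketch, one hedged way this reorientation could look in practice is a ranking score that blends predicted engagement with proxies for long-term value, such as survey-based quality estimates and the user’s explicitly stated interests. The specific signals and weights below are assumptions for illustration only.

    def long_term_value_score(engagement: float, quality: float,
                              stated_interest: float, alpha: float = 0.4) -> float:
        """
        Blend short-term engagement predictions with proxies for long-term value.

        `quality` and `stated_interest` are hypothetical signals (e.g., survey-based
        item-quality estimates and the user's explicitly stated topic preferences);
        `alpha` controls how much weight predicted engagement retains.
        """
        return alpha * engagement + (1 - alpha) * (0.5 * quality + 0.5 * stated_interest)

Ranking items by such a blended score, rather than by predicted engagement alone, is one concrete way a platform could operationalize long-term user value while still making use of engagement data.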

The Better Feeds report establishes guidelines for achieving this balance, comprising three components: design transparency, user choices and defaults, and assessments of long-term impact. While there is no guarantee that legislative or regulatory efforts based on these guidelines would survive constitutional review, the guidelines attend to concerns about the potential for regulation to implicate speech rights under the First Amendment and platform liability immunity under Section 230. Developing legislative text to support any of the guidelines would require nuance based on evolving case law.

Design transparency: Certain platforms currently disclose some information about the design of their recommender systems due to requirements under Article 27 of the European Union’s Digital Services Act (DSA), which mandates that large platforms share the “main parameters” used in their recommender systems. In practice, these disclosures merely provide limited information about which input data matters most. No disclosures under Article 27 describe all relevant data sources and how the success of each recommender system is evaluated.

Real accountability requires that public disclosures go much further. They should include information about all input data sources and the weight each one is given in the algorithm’s design, how platforms measure long-term user value, and the metrics used to assess the product teams responsible for the recommender systems. Not only would these disclosures allow outside experts to understand and compare different systems, but they would also motivate platforms to optimize in ways that demonstrate attentiveness to long-term user value and satisfaction. Such design transparency would help align platforms’ incentives with long-term user value.

User choices and defaults: If implemented effectively, user controls could be powerful tools for ensuring outcomes that align with users’ long-term preferences. To do this, platforms must enable users to choose among multiple recommender system designs, with at least one option optimized for long-term user value, and must actually honor user preferences about what kinds of items should appear in their feeds (e.g., which topics should be excluded). For minors, there may be legitimate concerns that offering a choice of recommender systems does not go far enough given their stage of development, so default recommender systems for minors must be designed to support long-term user value. Because user controls are notoriously difficult to use, robust and accessible controls, together with healthier defaults for minors, are the minimal steps needed to start giving users feeds centered on their own aspirations and preferences.
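As a rough illustration of what honoring such controls might involve, the sketch below assumes a hypothetical settings object holding the user’s chosen ranking design and excluded topics; the field names and modes are invented for this example and do not come from the report.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class FeedSettings:
        # Hypothetical user-facing controls; starting every account on the
        # long-term-value design reflects the "healthier defaults" idea.
        ranking_mode: str = "long_term_value"  # e.g., "engagement", "chronological", "long_term_value"
        excluded_topics: set[str] = field(default_factory=set)

    def build_feed(candidates, settings: FeedSettings, rankers: dict[str, Callable]):
        """Honor the user's controls: drop excluded topics, then apply the chosen ranking design."""
        allowed = [item for item in candidates if item.topic not in settings.excluded_topics]
        return rankers[settings.ranking_mode](allowed)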

Assessments of long-term impact: Platforms can only deliver long-term value to users if they continuously test the impact of algorithmic changes over time. Platforms can conduct these assessments by running so-called “holdout” experiments that exempt a group of users from design changes for 12 months or more. That is, each year platforms would temporarily freeze the experience of a cohort of users and report metrics related to long-term value, enabling comparison between them and the general population of users. The aggregated results of these experiments should be subject to independent audits and published to maximize accountability.

These long-term impact assessments would incentivize platforms to show that users who received product updates throughout the year had a better experience than those who did not. This creates a natural check against short-term thinking: if product changes boost immediate engagement but ultimately lead to negative outcomes, those problems will become clear when comparing the two groups. Publicly reported metrics about long-term user value would enable outside observers to check whether product updates actually leave users more satisfied and more likely to stay on the platform, adding another layer of transparency with impact.
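One way such an assessment might be implemented is sketched below: users are deterministically assigned to a small annual holdout cohort, and long-term value metrics are then summarized for the holdout and updated groups. The cohort size and the metric names (satisfaction, regret, retention) are assumptions for illustration, not prescriptions from the report.

    import hashlib

    HOLDOUT_FRACTION = 0.02  # hypothetical: roughly 2% of users frozen for the year

    def in_holdout(user_id: str, year: int) -> bool:
        """Deterministically assign a small cohort of users to the annual holdout."""
        digest = hashlib.sha256(f"{user_id}:{year}".encode()).hexdigest()
        return int(digest, 16) % 10_000 < HOLDOUT_FRACTION * 10_000

    def summarize_cohorts(metrics_by_user: dict[str, dict], year: int) -> dict:
        """Average long-term value metrics for holdout vs. updated users."""
        groups: dict[str, list[dict]] = {"holdout": [], "updated": []}
        for user_id, metrics in metrics_by_user.items():
            groups["holdout" if in_holdout(user_id, year) else "updated"].append(metrics)

        def mean(rows: list[dict], key: str):
            values = [row[key] for row in rows if key in row]
            return sum(values) / len(values) if values else None

        return {
            group: {key: mean(rows, key) for key in ("satisfaction", "regret", "retention")}
            for group, rows in groups.items()
        }

Published side by side, the two columns of such a summary are what would let outside observers see whether a year of product changes actually improved long-term outcomes.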

The U.S. is far from the only jurisdiction examining approaches to mitigate concerns about algorithmic feeds. In the E.U., for example, the DSA establishes transparency, accountability, and user control obligations related to recommender systems. Because of differences in legal frameworks and jurisprudence between the U.S. and other parts of the world, additional measures to promote better recommender system designs that go beyond those described above may be feasible in other jurisdictions. The Better Feeds guidelines include additional proposals that could further incentivize more user-centric designs:

  • Public content transparency: Platforms must continuously publish samples of public content that is most widely disseminated, most highly engaged with, and representative of a typical user experience.
  • Better user defaults: By default, platforms must optimize all users’ recommender systems to support long-term user value. 
  • Metrics and measurement of harm: Platforms must measure the aggregate harms to at-risk populations that result from recommender systems and publicly disclose the results of those measurements.

Recommender systems play an integral role in shaping online experiences, yet their design and implementation often prioritize short-term engagement over long-term user satisfaction and societal well-being. Better recommender systems are possible. To make this happen, policymakers should focus on incentivizing designs that support long-term value and high-quality user experiences. The Better Feeds guidelines serve as a roadmap for anyone interested in promoting algorithmic systems that put users’ long-term interests front and center.
