Introducing Model Legislation for Better Algorithmic Feeds
Model legislation published by the Knight-Georgetown Institute provides a pathway for lawmakers who want to encourage better algorithmic feeds that put users’ interests front and center.
Algorithms determine what we read, watch, and encounter online, and, increasingly, they also influence our offline lives. Yet algorithms are often built to maximize short-term engagement and capture attention, rather than to deliver long-term value to users. KGI’s new commentary takes a deep dive into the evolving landscape of recommender system design, highlighting six innovative trends that show it is possible to design better feeds that put people first.
Digital platforms and their recommendation algorithms face increasing public and legislative scrutiny. Designs that prioritize maximizing user engagement have been linked to a range of harms, from promoting suicidal ideation and disordered eating to encouraging extended product usage and exacerbating online extremism with offline consequences. In March 2025, the Knight-Georgetown Institute (KGI) published Better Feeds: Algorithms That Put People First, a roadmap outlining how recommender systems can be better designed to prioritize users’ interests. The report urges policymakers and product designers to ensure transparency in algorithmic design, provide users with meaningful choices and protective defaults, and require publicly audited assessments of long-term algorithmic impacts.
Platform designers are also advocating for better feeds that prioritize users’ interests. Jack Conte, CEO of Patreon, a platform built for creators, recently asserted that “it is possible for algorithms to serve people instead of people serving algorithms” and called for a move away from attention-based algorithms towards ones that foster genuine human connection. Since the publication of Better Feeds, KGI has engaged with platform designers and product managers, while also surveying and exploring innovative approaches to algorithmic feed design. This commentary showcases how Better Feeds principles are coming to life in practice and distills key insights from alternative algorithms emerging across the marketplace. Our findings fall into four categories: algorithmic design, user choices and controls, business models, and experimentation.
Algorithmic design for recommender systems refers to the process of creating the rules and models that recommender systems use to rank and display content to users. It determines what users see and in which order on social media feeds, video platforms, and news aggregators.
Key Insight #1: Engagement optimization remains the dominant approach guiding platform design, and in many cases employee performance metrics and incentives are tied directly to engagement metrics. This practice further pushes platforms to optimize for user engagement rather than for metrics that may indicate long-term user value.
A commonly suggested alternative to engagement-based ranking – non-personalized chronological feeds where content appears in the order it is posted – can have negative impacts on user experience.1 This is not the only alternative. The Better Feeds report explored alternative approaches, including quality and bridging. Quality-based feeds rank content by specified quality standards (like avoiding toxic language), while bridging-based feeds are designed to “bridge” social divides by fostering mutual understanding and trust. Some examples of platforms that make use of these alternative approaches:
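In generic terms, a quality- or bridging-aware ranker blends these signals with engagement predictions rather than discarding engagement altogether. The minimal Python sketch below illustrates the idea; the weights, score names, and example values are our own assumptions for demonstration, not any platform’s actual implementation.

```python
# Illustrative sketch: re-ranking candidate posts with quality and bridging signals
# rather than engagement alone. Weights and scores are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Candidate:
    post_id: str
    engagement_score: float   # predicted clicks/likes, 0-1
    quality_score: float      # e.g., 1 - predicted toxicity, 0-1
    bridging_score: float     # e.g., approval across divergent audience clusters, 0-1

def rank_feed(candidates, w_engagement=0.3, w_quality=0.4, w_bridging=0.3):
    """Order candidates by a blended value score instead of raw engagement."""
    def value(c: Candidate) -> float:
        return (w_engagement * c.engagement_score
                + w_quality * c.quality_score
                + w_bridging * c.bridging_score)
    return sorted(candidates, key=value, reverse=True)

feed = rank_feed([
    Candidate("a", engagement_score=0.9, quality_score=0.2, bridging_score=0.1),
    Candidate("b", engagement_score=0.5, quality_score=0.8, bridging_score=0.7),
])
print([c.post_id for c in feed])  # -> ['b', 'a']: the higher-quality post ranks first
```

In practice, the quality and bridging scores would come from dedicated models (for example, toxicity classifiers or estimates of approval across divergent audiences), and the weights would themselves be tuned through experimentation.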
Key Insight #2: The use of surveys as signals for content recommendation is far from pervasive. Surveys require careful construction and deployment, which takes more effort than collecting engagement data. They also raise concerns about achieving statistical significance, handling “noisy” or inconsistent responses, and annoying users with frequent interruptions. Because of these challenges, surveys may be deployed to gather general sentiment but are often not used to directly optimize feeds.
There is also a tension between broad, Net Promoter Score (NPS)-style surveys (“how likely are you to recommend this product to a friend?”) and specific, in-feed questions about individual pieces of content, along with open questions about what can be meaningfully learned and applied from each. Platforms often use more easily gathered engagement data as a proxy for harder-to-measure concepts like user satisfaction, though the correlation is not always direct.
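One way to make that proxy relationship explicit is to join in-feed survey responses with engagement logs and check how well the engagement signal actually tracks reported satisfaction before optimizing on it. The sketch below illustrates this with hypothetical field names and made-up records; it is not drawn from any platform’s data.

```python
# Illustrative sketch: checking how well an engagement signal tracks surveyed
# satisfaction before treating it as a proxy. Field names and records are assumed.
from statistics import correlation  # Python 3.10+

# Joined records: items a user both engaged with and rated in an in-feed survey.
records = [
    {"dwell_seconds": 45,  "survey_satisfaction": 5},  # "was this worth your time?" on a 1-5 scale
    {"dwell_seconds": 120, "survey_satisfaction": 2},  # long dwell, low satisfaction
    {"dwell_seconds": 30,  "survey_satisfaction": 4},
    {"dwell_seconds": 90,  "survey_satisfaction": 3},
    {"dwell_seconds": 10,  "survey_satisfaction": 4},
]

dwell = [r["dwell_seconds"] for r in records]
satisfaction = [r["survey_satisfaction"] for r in records]

# If the correlation is weak, dwell time is a poor stand-in for satisfaction
# and should not be the sole optimization target.
r = correlation(dwell, satisfaction)
print(f"dwell-vs-satisfaction correlation: {r:.2f}")
```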
However, some platforms offer promising models for integrating user feedback as signals for personalization algorithms:
User choices and controls refer to the tools and options that platforms provide for individuals to customize and manage their own content feeds and experience.
Key Insight #3: While the tools available to users for controlling and customizing their feeds are currently limited and underutilized, emerging signals in the market suggest that at least some users are seeking more control. This gap between user desires and current offerings indicates an opportunity: meaningful, accessible controls that provide substantive choices could meet this latent demand and increase adoption among users wanting greater agency over their digital environment.
As the Better Feeds report highlights, research offers guidance for successful user choices: effective controls must be transparent and easily discoverable, and should cater to different user needs by offering both granular adjustments (over a single piece of content) and broader, topic-level choices (like muting all weight-loss content). These options should be presented at relevant moments, such as during signup or after a major product update, rather than buried deep within settings; surfacing them at the right time can increase their adoption and positive reception. Innovation around feed customization shows that better user experiences are possible:
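As a rough illustration of how such controls could be wired into feed assembly, the sketch below applies one topic-level mute and one per-post “show me less like this” choice at ranking time. The control names and the topic classifier are hypothetical, not any specific platform’s API.

```python
# Illustrative sketch of user controls applied at feed-assembly time: a per-post
# downrank signal plus topic-level mutes. Names and classifier are hypothetical.
from typing import Iterable

def topic_of(post: dict) -> str:
    # Placeholder for a real topic classifier.
    return post.get("topic", "general")

def apply_user_controls(posts: Iterable[dict], muted_topics: set[str],
                        downranked_post_ids: set[str]) -> list[dict]:
    """Drop muted topics entirely; push individually downranked posts to the end."""
    visible = [p for p in posts if topic_of(p) not in muted_topics]
    return sorted(visible, key=lambda p: p["id"] in downranked_post_ids)

feed = apply_user_controls(
    posts=[{"id": "p1", "topic": "weight-loss"}, {"id": "p2", "topic": "cooking"},
           {"id": "p3", "topic": "news"}],
    muted_topics={"weight-loss"},   # broad, topic-level choice
    downranked_post_ids={"p3"},     # granular, single-post choice
)
print([p["id"] for p in feed])  # -> ['p2', 'p3']
```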
Key Insight #4: Innovation in feed design appears most promising on platforms with business models that differ from traditional pay-per-impression or pay-per-click advertising. On ad-based platforms, Key Performance Indicators (KPIs), the specific, measurable metrics that companies use to track progress and evaluate employee success, often reinforce optimization for engagement. When employee KPIs, and the promotions or bonuses tied to them, are directly linked to growth in user engagement, the business model naturally incentivizes algorithms that capture short-term attention, since that is what both the company and its employees are rewarded for. Conversely, platforms with alternative business models often face different incentives, where a different concept of “stickiness” or product value can encourage a greater focus on long-term value and satisfaction.
To predict and understand the impact of changes before making business decisions, platforms rely on experimentation. A primary method is A/B testing, where different user groups are shown distinct versions of a product to compare performance—for example, testing a blue button against a green one. A specific experimental technique involves a holdout group, where a set of users (“holdouts”) is intentionally excluded from new features to serve as a pure baseline. Long-term holdouts are experiments where the holdout user group does not receive changes for twelve months or more, a necessary practice for understanding the changes’ true effects on long-term value.
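A minimal sketch of how users might be assigned to these groups deterministically, including a small long-term holdout that keeps the old experience for the duration of the experiment, is shown below; the salt, group names, and percentages are illustrative assumptions, not a production assignment system.

```python
# Illustrative sketch of deterministic experiment bucketing with a long-term
# holdout group. Salt, group sizes, and names are assumptions for demonstration.
import hashlib

HOLDOUT_PCT = 2      # users excluded from new features for the full holdout period
TREATMENT_PCT = 49   # remaining users split between treatment and control

def bucket(user_id: str, salt: str = "feed-ranker-2025") -> str:
    """Stable assignment: the same user always lands in the same group."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    slot = int(digest, 16) % 100
    if slot < HOLDOUT_PCT:
        return "long_term_holdout"   # keeps the old experience as a baseline
    if slot < HOLDOUT_PCT + TREATMENT_PCT:
        return "treatment"           # receives the new ranking change
    return "control"                 # short-term comparison group

print(bucket("user-123"), bucket("user-123"))  # same group both times
```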
The effects of platform changes on long-term satisfaction can often look like failures in the short run. A feature that reduces annoying notifications might cause daily active minutes to dip initially, leading a product team to discard it. However, over months, that same change could lead to higher overall user satisfaction and better retention. Long-term holdouts provide a stable baseline to measure these cumulative effects, preventing platforms from misinterpreting a short-term dip in engagement as a long-term loss of value.
Key Insight #5: Long-term holdouts are much less common than short-term holdouts. A single platform may run thousands of short-term A/B tests in a year, but most do not run holdouts that are longer than a business quarter, let alone a year.
Key Insight #6: Large holdouts can be challenging for small platforms, and there is a limit on the number of meaningful holdout experiments that can coexist on any platform at the same time. For a smaller service, setting aside enough of its limited user base to yield statistically meaningful results represents a high opportunity cost, slowing its ability to test the new features needed for growth.
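A back-of-the-envelope calculation shows why. Detecting even a one-percentage-point change in a retention rate requires tens of thousands of users per group, which can be a substantial share of a small platform’s audience. The baseline rate, effect size, and platform size below are assumed numbers used only for illustration.

```python
# Back-of-the-envelope sketch: users needed per group to detect a 1-point change
# in a 30% retention rate (two-sided alpha = 0.05, power = 0.80). All inputs are
# assumed numbers for illustration.
import math

Z_ALPHA = 1.96   # two-sided 95% confidence
Z_BETA = 0.84    # 80% power

def users_per_group(baseline: float, delta: float) -> int:
    """Approximate sample size for comparing two proportions."""
    p_bar = baseline + delta / 2
    n = ((Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
          + Z_BETA * math.sqrt(baseline * (1 - baseline)
                               + (baseline + delta) * (1 - baseline - delta))) ** 2
         / delta ** 2)
    return math.ceil(n)

n = users_per_group(baseline=0.30, delta=0.01)
platform_size = 500_000  # hypothetical small platform
print(f"{n:,} users per group, ~{100 * n / platform_size:.0f}% of the user base each")
```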
Industry research reveals a sophisticated conversation happening around the challenge of measuring true, long-term product value. A core problem is that short-term engagement metrics, which are easy to measure in standard A/B tests, are often poor indicators of long-term user satisfaction and retention. Undesirable features can sometimes increase short-term engagement even as they degrade the user experience.
To address this, platforms are developing surrogate or proxy metrics. These are short-term, measurable indicators chosen for their strong correlation with a desired long-term outcome. For example, Pinterest uses in-app user surveys to directly measure qualities like “inspiration” and “personal relevance,” creating a proxy for long-term value. However, proxy metrics can be complex: they require robust statistical modeling to validate the proxy and to account for biases, such as the tendency for the measured positive effects of “winning” experiments to be overestimated because noisy estimates influence which variants get selected.2 Methods like meta-analysis across many experiments and experiment splitting can produce more accurate measures of impact. A meta-analysis statistically combines the results from numerous individual experiments to estimate an overall effect with greater confidence. Experiment splitting helps correct for bias by using one portion of the experiment’s data to select the “winning” version and a separate, second portion to measure its true performance.
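The sketch below illustrates experiment splitting on synthetic data: the “winner” is chosen using one half of each variant’s users, and its effect is then estimated on the untouched other half, so the selection step cannot inflate the reported lift. The data and metric here are entirely made up for demonstration.

```python
# Illustrative sketch of experiment splitting: select the "winning" variant on one
# half of users, then estimate its effect on the other half so selection noise does
# not inflate the reported lift. The data is synthetic.
import random
from statistics import mean

random.seed(0)

# Synthetic per-user outcomes (e.g., 28-day retention) for three variants that are
# actually identical; any apparent "lift" is pure noise.
outcomes = {v: [random.random() < 0.30 for _ in range(5000)] for v in ("A", "B", "C")}

split = {v: (vals[:2500], vals[2500:]) for v, vals in outcomes.items()}  # two halves

# Step 1: pick the winner using only the selection half.
winner = max(split, key=lambda v: mean(split[v][0]))

# Step 2: measure that winner on the independent estimation half.
naive_lift = mean(split[winner][0]) - 0.30    # biased upward by the selection step
honest_lift = mean(split[winner][1]) - 0.30   # unbiased estimate of the true effect

print(f"winner: {winner}, naive lift: {naive_lift:+.3f}, split-estimated lift: {honest_lift:+.3f}")
```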
Ultimately, even with better statistical tools, defining and measuring long-term value remains a challenge. It requires a commitment to running long-term holdouts to validate that decisions based on proxy metrics are actually delivering the intended benefit. It also requires a willingness to push back against the assumption that engagement (short-term or long-term) is a suitable proxy for user value.
While engagement-driven feeds and limited user controls remain the dominant model across major online platforms, this is not the only path forward. Emerging approaches across algorithmic design, user choices and controls, business models, and experimentation demonstrate that it is possible to build algorithms that prioritize long-term user value over short-term engagement and attention maximization.
From quality- and bridging-based feeds like Sill and Dailymotion, to custom feed creation tools from Graze Social and SkyFeed, subscription-aligned incentives on LinkedIn and Hinge, and user surveys on Pinterest measuring “inspiration” and “personal relevance” as proxies for long-term value, these innovations demonstrate that healthier, people-centered feeds are possible and that platforms can design systems that put users first.