
Bringing Better Feeds to Life

Algorithms determine what we read, watch, and encounter online, and, increasingly, they also influence our offline lives. Yet algorithms are often built to maximize short-term engagement and capture attention, rather than to deliver long-term value to users. KGI’s new commentary takes a deep dive into the evolving landscape of recommender system design, highlighting six innovative trends that show it is possible to design better feeds that put people first.

Digital platforms and their recommendation algorithms face increasing public and legislative scrutiny. Designs that prioritize maximizing user engagement have been linked to a range of harms, from promoting suicidal ideation and disordered eating to encouraging extended product usage and exacerbating online extremism with offline consequences. In March 2025, the Knight-Georgetown Institute (KGI) published Better Feeds: Algorithms That Put People First, a roadmap outlining how recommender systems can be better designed to prioritize users’ interests. The report urges policymakers and product designers to ensure transparency in algorithmic design, provide users with meaningful choices and protective defaults, and require publicly audited assessments of long-term algorithmic impacts.

Platform designers are also advocating for better feeds that prioritize users’ interests. Jack Conte, CEO of Patreon, a platform built for creators, recently asserted that “it is possible for algorithms to serve people instead of people serving algorithms” and called for a move away from attention-based algorithms towards ones that foster genuine human connection. Since the publication of Better Feeds, KGI has engaged with platform designers and product managers, while also surveying and exploring innovative approaches to algorithmic feed design. This commentary showcases how Better Feeds principles are coming to life in practice and distills key insights from alternative algorithms emerging across the marketplace. Our findings fall into four categories: algorithmic design, user choices and controls, business models, and experimentation. 

Algorithmic Design

Algorithmic design for recommender systems refers to the process of creating the rules and models that recommender systems use to rank and display content to users. It determines what users see and in which order on social media feeds, video platforms, and news aggregators.

Key Insight #1: Engagement metrics remain the dominant approach guiding platform design, and in many cases employee performance metrics and incentives are strongly tied to them. This practice further pushes platform design to optimize for user engagement rather than for metrics that may indicate long-term user value.

A commonly suggested alternative to engagement-based ranking – non-personalized chronological feeds where content appears in the order it is posted – can have negative impacts on user experience.1 This is not the only alternative. The Better Feeds report explored other approaches, including quality and bridging. Quality-based feeds rank content by specified quality standards (like avoiding toxic language), while bridging-based feeds are designed to “bridge” social divides by fostering mutual understanding and trust. Some examples of platforms that use these alternative approaches:

  • Quality-Based Algorithm: Sill is a news aggregator that displays trending stories shared by accounts in a user’s network on Bluesky or Mastodon. By building on a user’s existing, self-selected network, it offers a simple, rule-based system to show users the links they may most want to see. Effectively, this uses the number of times a link is shared by the user’s social network as a proxy for quality (a minimal sketch of this approach follows after this list).
  • Bridging-Based Algorithms: Dailymotion is a video-based social platform that in 2023 announced a new home feed “designed to allow everyone to debate and confront their opinions.” This included a new “opinion-based” recommender used to re-rank content so that it includes videos on topics similar to those the user enjoyed but expressing a strong or differing opinion. Sparkable, a non-profit social media platform designed to foster empathy and joy while avoiding polarization and hate, uses a bridging-based ranking system that gives visibility to posts that are highly rated by users who have previously disagreed on other topics. Both platforms attempt to unite diverse perspectives rather than optimize for engagement.
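To make the quality-proxy idea concrete, the sketch below ranks links by how many distinct accounts in a user’s self-selected network have shared them, in the spirit of the approach described for Sill above. The function name, data shapes, and share-count-only scoring are illustrative assumptions, not Sill’s actual implementation.

```python
from collections import Counter
from typing import Iterable

def rank_links_by_network_shares(shares: Iterable[tuple[str, str]],
                                 following: set[str],
                                 limit: int = 25) -> list[tuple[str, int]]:
    """Rank links by how many distinct followed accounts shared them.

    `shares` is an iterable of (account_id, link_url) pairs observed on the
    user's timeline; `following` is the user's self-selected network. The
    share count from that network stands in as a simple quality proxy.
    """
    seen: set[tuple[str, str]] = set()
    counts: Counter = Counter()
    for account, link in shares:
        if account in following and (account, link) not in seen:
            seen.add((account, link))   # count each account at most once per link
            counts[link] += 1
    return counts.most_common(limit)

# Example: three followed accounts share the same article, so it ranks first.
timeline = [("alice", "https://example.com/a"),
            ("bob",   "https://example.com/a"),
            ("carol", "https://example.com/a"),
            ("bob",   "https://example.com/b")]
print(rank_links_by_network_shares(timeline, {"alice", "bob", "carol"}))
```

A rule-based ranker like this is transparent by construction: a user can see exactly why a link appears where it does, which is part of what makes the approach appealing relative to opaque engagement-optimized models.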


Key Insight #2: The use of surveys as signals for content recommendation is far from pervasive. Surveys require careful construction and deployment, which takes more effort than collecting engagement data. They also raise concerns about achieving statistical significance, dealing with “noisy” or inconsistent responses, and the risk of annoying users with frequent interruptions. Due to these challenges, surveys may be deployed to gather general sentiment but are often not used to directly optimize feeds.

There is also a tension between broad, Net Promoter Score (NPS)-style surveys (“how likely are you to recommend this product to a friend?”) and specific, in-feed questions about content, as well as open questions about what can be meaningfully learned and applied from each. Often, platforms use more easily gathered engagement data as a proxy metric for harder-to-measure concepts like user satisfaction, though the correlation is not always direct.

However, some platforms offer promising models for integrating user feedback as signals for personalization algorithms:

  • Meaningful Surveys: The design of dating apps like Hinge, for instance, relies on direct signals, as users explicitly indicate interest (or lack thereof) with every swipe. On top of this, Hinge offers a secondary feedback layer with its “We Met” feature, which surveys users about the quality of their dates offline. In both cases, the collection of explicit user signals and feedback is intuitive and user-friendly, showing that surveys need not be intrusive or annoying. Pinterest uses surveys to pursue “healthier” personalization that optimizes for qualities like “inspiration” and “relevance” rather than just engagement. By using this direct feedback to evaluate new models before launch and to diagnose issues, Pinterest shows how survey data can be a primary tool for algorithm refinement (a simple sketch of aggregating such survey responses follows below).
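As an illustration, the sketch below shows one way in-feed survey responses might be aggregated per experiment arm to evaluate a new ranking model before launch. It is a simplified, hypothetical example (the arm names, 1–5 response scale, and normal-approximation confidence interval are assumptions), not a description of Pinterest’s or Hinge’s actual systems.

```python
import statistics
from math import sqrt

def summarize_survey(responses: dict[str, list[int]]) -> dict[str, tuple[float, float]]:
    """Summarize 1-5 in-feed survey responses (e.g. "how inspiring was this content?")
    per experiment arm, returning (mean score, approximate 95% CI half-width)."""
    summary = {}
    for arm, scores in responses.items():
        mean = statistics.fmean(scores)
        half_width = 1.96 * statistics.stdev(scores) / sqrt(len(scores))
        summary[arm] = (mean, half_width)
    return summary

# Hypothetical responses gathered while testing a new ranking model
responses = {
    "control":   [4, 3, 5, 4, 3, 4, 2, 4, 3, 4],
    "new_model": [5, 4, 4, 5, 3, 5, 4, 4, 5, 4],
}
for arm, (mean, hw) in summarize_survey(responses).items():
    print(f"{arm}: {mean:.2f} ± {hw:.2f}")
```

In practice a platform would need far more responses than this toy example before the confidence intervals separate, which is exactly the statistical-significance concern raised in Key Insight #2.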

User Choices and Controls

User choices and controls refer to the tools and options that platforms provide for individuals to customize and manage their own content feeds and experience. 

Key Insight #3: While the tools available to users for controlling and customizing their feeds are currently limited and underutilized, emerging signals in the market suggest that at least some users are seeking more control. This gap between user desires and current offerings indicates an opportunity: meaningful, accessible controls that provide substantive choices could meet this latent demand and increase adoption among users wanting greater agency over their digital environment.

As the Better Feeds report highlights, research offers guidance for successful user choices: Effective controls must be transparent and easily discoverable, catering to different user needs by offering both granular adjustments (over a single piece of content) and broader, topic-level choices (like muting all weight-loss content). These options should be presented at relevant moments, such as during signup or after a major product update, rather than being buried deep within settings, a practice that can increase their adoption and positive reception. Innovation around feed customization shows that better user experiences are possible:

  • Custom Feed Creation: Several platforms are being built with the main purpose of giving users meaningful choice and control over their feeds. Graze Social is a tool for creating and sharing customizable Bluesky feeds with analytics and multiple sorting options. It includes an open-source algorithm engine and a public “marketplace” for feed definitions. Flipboard’s Surf is a cross-network feed builder and browser, currently in beta, that supports ActivityPub, AT Protocol, and RSS and allows users to mix content sources into interest-based, multi-modal feeds. SkyFeed is a Bluesky wrapper with a tool to build custom feeds; its feed builder offers six ordering options and supports regex filtering (a minimal sketch of such a feed definition follows after this list). By providing users with tools to become algorithm designers, these platforms show how users can be given meaningful control over their information consumption.
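The sketch below illustrates the kind of user-authored feed definition these tools enable: a regex include/exclude filter plus a small set of ordering options. The Post and FeedDefinition structures are hypothetical and are not drawn from Graze Social, Surf, or SkyFeed’s actual code.

```python
import re
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    created_at: datetime
    like_count: int = 0

@dataclass
class FeedDefinition:
    """A user-authored feed: keep posts matching `include`, drop posts matching
    `exclude`, then apply one of a few simple ordering options."""
    include: str = r".*"       # regex a post must match to appear in the feed
    exclude: str = ""          # regex that mutes a post (empty = mute nothing)
    order_by: str = "newest"   # "newest" or "most_liked"

    def build(self, posts: list[Post]) -> list[Post]:
        kept = [
            p for p in posts
            if re.search(self.include, p.text, re.I)
            and not (self.exclude and re.search(self.exclude, p.text, re.I))
        ]
        if self.order_by == "most_liked":
            return sorted(kept, key=lambda p: p.like_count, reverse=True)
        return sorted(kept, key=lambda p: p.created_at, reverse=True)

# A feed about gardening that mutes weight-loss content, newest first.
gardening = FeedDefinition(include=r"garden", exclude=r"weight.?loss")
posts = [
    Post("Planning my spring garden", datetime(2025, 3, 1), like_count=12),
    Post("Garden-friendly weight loss tips", datetime(2025, 3, 2), like_count=40),
    Post("Raised-bed garden tour", datetime(2025, 3, 3), like_count=7),
]
print([p.text for p in gardening.build(posts)])
```

Because the whole definition fits in a few readable fields, feeds like this can be shared, inspected, and forked by other users, which is what a public “marketplace” of feed definitions depends on.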


Business Models

Key Insight #4: Innovation in feed design appears most promising on platforms with business models that differ from traditional pay-per-impression or pay-per-click advertising. On ad-based platforms, the use of Key Performance Indicators (KPIs) often reinforces optimization for engagement. KPIs are specific, measurable metrics that companies use to track progress and evaluate employee success. When employee KPIs, and the promotions or bonuses tied to them, are directly linked to growth in user engagement, the business model naturally incentivizes algorithms that prioritize capturing short-term attention, as this is what both the company and its employees are rewarded for. Conversely, platforms with alternative business models often have different incentives, where a different concept of “stickiness” or product value may inspire more focus on long-term value or satisfaction.

  • Subscription Models: Platforms like LinkedIn and Hinge engage in advertising but also rely on some users paying a fee. For them, success is not just about engagement and attention, but about providing enough sustained value that a user is willing to continue paying for the service. The long-term user value of these platforms is easy to identify: for LinkedIn, it’s career advancement, and for Hinge, the “dating app designed to be deleted,” it’s finding a lasting relationship. Both represent clear, tangible goals that users value enough to pay for, aligning the platform’s financial success with the user’s real-world success. 
  • Nontraditional Advertising: Other platforms and tools are integrating advertising in nontraditional ways. WeAre8, a B Corporation short-form video and text platform built around social and environmental good, shares a percentage of ad revenue with users and with charities, explicitly recognizing the economic value that users’ attention brings to the platform. Graze Social has created a marketplace that feed curators can use to sell and include sponsored content in their feeds. If a curated feed loses the trust and interest of its audience by including too many or irrelevant ads, followers can simply stop following that specific feed without losing access to the platform on which the feeds are displayed.

Experimentation

To predict and understand the impact of changes before making business decisions, platforms rely on experimentation. A primary method is A/B testing, where different user groups are shown distinct versions of a product to compare performance—for example, testing a blue button against a green one. A specific experimental technique involves a holdout group, where a set of users (“holdouts”) is intentionally excluded from new features to serve as a pure baseline. Long-term holdouts are experiments where the holdout user group does not receive changes for twelve months or more, a necessary practice for understanding the changes’ true effects on long-term value. 
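A common way to implement a stable long-term holdout is to hash user identifiers into buckets so that the same small slice of users is deterministically excluded from new features for the full observation window. The sketch below is a minimal illustration of that pattern; the salt, percentage, and function name are assumptions rather than any specific platform’s implementation.

```python
import hashlib

HOLDOUT_PERCENT = 2                          # e.g. keep 2% of users on the frozen experience
HOLDOUT_SALT = "2025-long-term-holdout"      # fixed salt so membership never shifts

def in_long_term_holdout(user_id: str) -> bool:
    """Deterministically place a user in (or out of) the long-term holdout.

    Hashing the user id with a fixed salt yields a stable bucket in [0, 100),
    so the same users stay in the holdout for the entire 12+ month window
    regardless of which new features ship to everyone else.
    """
    digest = hashlib.sha256(f"{HOLDOUT_SALT}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < HOLDOUT_PERCENT

# Feature code then branches on membership:
# if in_long_term_holdout(user_id): serve the frozen baseline feed
# else: serve the current production feed, including new experiments
print(in_long_term_holdout("user-12345"))
```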

The effects of platform changes on long-term satisfaction can often look like failures in the short run. A feature that reduces annoying notifications might cause daily active minutes to dip initially, leading a product team to discard it. However, over months, that same change could lead to higher overall user satisfaction and better retention. Long-term holdouts provide a stable baseline to measure these cumulative effects, preventing platforms from misinterpreting a short-term dip in engagement as a long-term loss of value.

Key Insight #5: Long-term holdouts are much less common than short-term holdouts. A single platform may run thousands of short-term A/B tests in a year, but most do not run holdouts that are longer than a business quarter, let alone a year.

Key Insight #6: Large holdouts can be challenging for small platforms, and there is a limit on the number of meaningful holdout experiments that can coexist on any platform at the same time. For a smaller service, dedicating a statistically significant portion of its limited user base to a holdout represents a high opportunity cost, slowing down its ability to test new features needed for growth.

Industry research reveals a sophisticated conversation happening around the challenge of measuring true, long-term product value. A core problem is that short-term engagement metrics, which are easy to measure in standard A/B tests, are often poor indicators of long-term user satisfaction and retention. Undesirable features can sometimes increase short-term engagement even as they degrade the user experience. 

To address this, platforms are developing surrogate or proxy metrics. These are short-term, measurable indicators chosen for their strong correlation with a desired long-term outcome. For example, Pinterest uses in-app user surveys to directly measure qualities like “inspiration” and “personal relevance,” creating a proxy for long-term value. However, proxy metrics can prove complex, requiring robust statistical modeling to validate the proxy and to account for biases, such as the tendency for the measured positive effects of “winning” experiments to be overestimated due to error margins.2 Methods like meta-analysis of many experiments and experiment splitting can be used for more accurate measures of impact. A meta-analysis statistically combines the results of numerous individual experiments to estimate an overall effect with greater confidence. Experiment splitting helps correct for bias by using one portion of an experiment’s data to select the “winning” version and a separate, second portion to measure its true performance.
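As a simplified illustration of experiment splitting, the sketch below selects the “winning” arm using one half of each arm’s data and then measures its effect on the untouched second half, so the reported lift is not inflated by the selection step. The arm names and per-user proxy-metric lifts are hypothetical.

```python
import random
import statistics

def split_select_and_measure(arms: dict[str, list[float]], seed: int = 0):
    """Experiment splitting: choose the 'winning' arm on one half of each arm's
    data, then report its effect on the held-back second half. Measuring on
    data not used for selection avoids the winner's-curse inflation described
    in citation 2."""
    rng = random.Random(seed)
    selection, measurement = {}, {}
    for arm, values in arms.items():
        shuffled = values[:]
        rng.shuffle(shuffled)
        mid = len(shuffled) // 2
        selection[arm] = statistics.fmean(shuffled[:mid])    # used only to pick a winner
        measurement[arm] = statistics.fmean(shuffled[mid:])  # used only to report the effect
    winner = max(selection, key=selection.get)
    return winner, measurement[winner]

# Hypothetical per-user proxy-metric lifts from three candidate rankers
arms = {
    "ranker_a": [0.02, 0.05, -0.01, 0.03, 0.04, 0.00, 0.06, 0.01],
    "ranker_b": [0.01, 0.00,  0.02, 0.03, -0.02, 0.01, 0.02, 0.00],
    "ranker_c": [0.04, 0.03,  0.05, 0.02, 0.06, 0.01, 0.03, 0.05],
}
winner, effect = split_select_and_measure(arms)
print(winner, round(effect, 3))
```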

Ultimately, even with better statistical tools, defining and measuring long-term value remains a challenge. It requires a commitment to running long-term holdouts to validate that decisions based on proxy metrics are actually delivering the intended benefit. It also requires a willingness to push back against the assumption that engagement (short-term or long-term) is a suitable proxy for user value. 

Conclusion

While engagement-driven feeds and limited user controls remain a dominant algorithm model across major online platforms, this is not the only path forward. Emerging approaches across algorithmic design, user choices and controls, business models, and experimentation demonstrate that it is possible to build algorithms that prioritize long-term user value over short-term engagement and attention maximization. 

From quality- and bridging-based feeds like Sill and Dailymotion, to custom feed creation tools from Graze Social and SkyFeed, subscription-aligned incentives on LinkedIn and Hinge, and user surveys on Pinterest measuring “inspiration” and “personal relevance” as proxies for long-term value, these innovations demonstrate that healthier, people-centered feeds are possible and that platforms can design systems that put users first.

Citations
  1. Research has shown that chronological feeds may decrease engagement and cause users to switch back to algorithm-based feeds, increase exposure to harmful content, reduce content from social networks, encourage “spammy” posting, and may not work for all platforms (e.g., Netflix, Spotify). See Moehring, Alex, Alissa Cooper, Arvind Narayanan, et al. “Better Feeds: Algorithms That Put People First.” Knight-Georgetown Institute, March 4, 2025. https://kgi.georgetown.edu/research-and-commentary/better-feeds/.
  2. This is known as the “winner’s curse.” By selecting the top-performing experiment from a large batch, you are likely selecting the one whose random statistical variation happened to be the most favorable, thus inflating its measured effect. See Milan Shen. “Selection Bias in Online Experimentation: Thinking Through a Method for the Winner’s Curse in A/B Testing.” The Airbnb Tech Blog, May 3, 2018. https://medium.com/airbnb-engineering/selection-bias-in-online-experimentation-c3d67795cceb.
