As newsrooms shift from "article factories" to "knowledge engines," understanding how different communities perceive a story is crucial for maintaining trust. Mirror is an editorial auditing tool that uses "digital twins"—AI-generated representations of real audience segments—to simulate how diverse groups might respond to journalism before it is published.
Building on concepts from Kaveh Waddell's Verso and recent research from Northwestern, Stanford, and Google DeepMind, this project explores the potential of agent-based modeling in news. In one recent study, generative agents built from two-hour qualitative interviews replicated participants' survey responses about 85% as accurately as the participants replicated their own answers two weeks later. Mirror leverages this capability not to replace human engagement but to scale it, allowing journalists to identify blind spots, tone issues, and potential biases when resources for real-world focus groups are limited.
Mirror: Synthetic Audience Auditing is accepting applications for Spring 2026.
This studio project will produce a first-of-its-kind Synthetic Audience Auditing Tool. Students will test whether agent-based modeling can effectively scale "listening" in local newsrooms.
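To make the idea concrete, here is a minimal sketch of what one synthetic-audience audit pass could look like. It is not the Mirror implementation: the persona profiles, prompt wording, file name, and the choice of the OpenAI chat API are all illustrative assumptions standing in for whatever interview-derived profiles and models a newsroom would actually use.

```python
"""Minimal sketch of a synthetic-audience audit pass (illustrative only).

Assumptions: persona profiles are hand-written dicts distilled from
qualitative interviews, and the OpenAI chat API stands in for the
newsroom's actual model. None of this reflects Mirror's real design.
"""
from dataclasses import dataclass

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()


@dataclass
class Persona:
    name: str        # label for the audience segment
    background: str  # distilled from a qualitative interview
    concerns: str    # issues this segment tends to weigh heavily


# Hypothetical audience segments, for illustration only.
PERSONAS = [
    Persona(
        "Longtime resident",
        "Retired teacher, 40 years in the neighborhood, reads the print "
        "edition and church bulletins.",
        "Property taxes, school closures, being talked about rather than to.",
    ),
    Persona(
        "Recent arrival",
        "28-year-old renter who moved for a tech job, reads news on a phone, "
        "skeptical of local institutions.",
        "Housing costs, transit, whether coverage reflects newcomers.",
    ),
]


def audit_draft(draft: str, persona: Persona, model: str = "gpt-4o-mini") -> str:
    """Ask the model to react to a draft story in the voice of one persona."""
    system = (
        f"You are a digital twin of an audience member. Background: "
        f"{persona.background} Top concerns: {persona.concerns} "
        "React honestly to the draft: note tone problems, missing context, "
        "and anything that would erode your trust. Be specific and brief."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = open("draft_story.txt").read()  # hypothetical draft article
    for persona in PERSONAS:
        print(f"--- {persona.name} ---")
        print(audit_draft(draft, persona))
```

A real tool would ground each persona in actual interview transcripts rather than hand-written summaries, and would validate the simulated reactions against feedback from the real audience segments instead of treating the model's output as ground truth.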