Mirror: Synthetic Audience Auditing

As newsrooms shift from "article factories" to "knowledge engines," understanding how different communities perceive a story is crucial for maintaining trust. Mirror is an editorial auditing tool that uses "digital twins"—AI-generated representations of real audience segments—to simulate how diverse groups might respond to journalism before it is published.

Building on concepts from Kaveh Waddell's Verso and recent research from Northwestern, Stanford, and Google DeepMind, this project explores the potential of agent-based modeling in news. That research suggests generative agents built from qualitative interviews can replicate participants' survey responses with roughly 85% accuracy. Mirror leverages this capability not to replace human engagement but to scale it, allowing journalists to identify blind spots, tone issues, and potential biases when resources for real-world focus groups are limited.
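To make the "digital twin" idea concrete, here is a minimal sketch of one synthetic persona grounded in verbatim interview excerpts and asked to react to a draft story. The project description does not specify Mirror's actual architecture, model, or prompt design, so everything here (the Persona dataclass, the audit_draft function, the OpenAI model choice) is an illustrative assumption rather than the tool's implementation.

```python
"""Minimal sketch of a "digital twin" audience persona (assumptions only)."""
from dataclasses import dataclass, field

from openai import OpenAI  # stand-in for whatever LLM backend Mirror actually uses

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


@dataclass
class Persona:
    name: str                      # e.g. "Evanston renter, 60s, longtime print subscriber"
    community: str                 # audience segment this twin represents
    interview_excerpts: list[str] = field(default_factory=list)  # verbatim quotes from real interviews

    def system_prompt(self) -> str:
        quotes = "\n".join(f"- {q}" for q in self.interview_excerpts)
        return (
            f"You are a synthetic stand-in for a real reader: {self.name}, "
            f"part of the {self.community} community.\n"
            "Ground every reaction in the interview excerpts below; do not "
            "invent biographical details that contradict them.\n"
            f"Interview excerpts:\n{quotes}"
        )


def audit_draft(persona: Persona, draft: str) -> str:
    """Ask one persona how it would respond to a draft story before publication."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice, not Mirror's
        messages=[
            {"role": "system", "content": persona.system_prompt()},
            {
                "role": "user",
                "content": (
                    "Read this draft and answer: What feels accurate? What feels "
                    "missing or unfair to people like you? What tone issues stand out?\n\n"
                    + draft
                ),
            },
        ],
    )
    return response.choices[0].message.content
```

In practice a newsroom would run a panel of such personas (one per audience segment) against the same draft and compare their reactions, which is where the auditing, rather than the simulation, happens.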

Mirror: Synthetic Audience Auditing is accepting applications for Spring 2026. See project details below.

Faculty and Staff Leads

Zach Wise

Professor, Journalism

Emmy-winning interactive producer & Associate Professor @NorthwesternU, @KnightLab. Formerly of The New York Times. Creator of TimelineJS & StoryMapJS.

Project Details

2026 Spring
Mirror: Synthetic Audience Auditing

Important Questions
  • How do synthetic audience insights compare to real-world engagement metrics? Can we replicate the '85% accuracy' findings in a local news context? (A scoring sketch follows this list.)
  • How do we ensure that synthetic personas represent genuine community diversity rather than just echoing the biases inherent in their training data?
  • How can we design a 'dialogue' interface that encourages journalists to move beyond simple bias checks to deep contextual interrogation of their reporting?
  • How do we clearly label and disclose the use of 'synthetic' feedback to ensure it is used as a guardrail, not a replacement for human listening?
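The first question hinges on a scoring detail: in the generative-agents research the 85% figure appears to be a normalized accuracy, i.e., agent-vs-human agreement divided by each participant's own test-retest consistency, not agreement against a perfect ground truth. A minimal sketch of that comparison, assuming simple categorical survey answers; the function and variable names are hypothetical, not Mirror's API.

```python
"""Sketch of scoring synthetic-vs-real agreement on categorical survey answers."""


def agreement(a: list[str], b: list[str]) -> float:
    """Fraction of questions where two response lists match exactly."""
    assert len(a) == len(b), "response lists must cover the same questions"
    return sum(x == y for x, y in zip(a, b)) / len(a)


def normalized_accuracy(
    agent_answers: list[str],
    human_answers_t1: list[str],   # reader's answers when first surveyed
    human_answers_t2: list[str],   # same reader's answers some weeks later
) -> float:
    """Agent-vs-human agreement, normalized by the reader's self-consistency."""
    raw = agreement(agent_answers, human_answers_t1)
    ceiling = agreement(human_answers_t1, human_answers_t2)
    return raw / ceiling if ceiling else 0.0


# Toy example: the twin matches 3 of 4 answers, and the reader matches 3 of 4
# of their own earlier answers, so normalized accuracy is 0.75 / 0.75 = 1.0.
agent = ["agree", "neutral", "disagree", "agree"]
t1 = ["agree", "agree", "disagree", "agree"]
t2 = ["agree", "agree", "disagree", "neutral"]
print(normalized_accuracy(agent, t1, t2))
```

Replicating the finding locally would mean surveying real readers twice, interviewing them once, and then checking whether their twins clear a comparable normalized bar.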
Outcome

This studio project will produce a first-of-its-kind Synthetic Audience Auditing Tool. Students will test whether agent-based modeling can effectively scale "listening" in local newsrooms.

Apply to Project