Bedrock: Human Ground Truth for Generative AI

In an era of "agentic journalism," the role of the reporter is shifting from content creator to truth architect. As audiences increasingly consume news through AI agents—like ChatGPT, Apple Intelligence, or specialized wearables—the traditional article is becoming a secondary format. The primary value of journalism now lies in the verified human reporting that prevents these agents from hallucinating.

Bedrock is a studio project focused on creating a High-Fidelity Reporting Package (HFRP). This is a machine-readable "ground truth artifact" that allows a journalist to package field-verified data (claims, media, and context) into a structured format. Once the human "mines" the truth, AI agents can then reliably refine and distribute that truth into podcasts, vertical videos, or interactive alerts without losing the original reporting's nuance.
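To make the idea concrete, here is a minimal sketch of what an HFRP artifact might look like as structured data. The field names (`claims`, `media`, `context`, `verified_by`) are illustrative assumptions, not the project's actual schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Claim:
    """A single field-verified statement with its supporting evidence."""
    statement: str
    verified_by: str                               # reporter who confirmed the claim
    evidence: list = field(default_factory=list)   # documents, URLs, file hashes

@dataclass
class HFRP:
    """Hypothetical High-Fidelity Reporting Package: claims, media, context."""
    headline: str
    claims: list
    media: list = field(default_factory=list)      # paths or URLs to photos/audio
    context: str = ""                              # nuance an AI agent must preserve

    def to_json(self) -> str:
        """Serialize the package so downstream AI agents can consume it."""
        return json.dumps(asdict(self), indent=2)

package = HFRP(
    headline="City council approves new transit budget",
    claims=[Claim("The budget passed 7-2 on March 4.",
                  verified_by="J. Doe",
                  evidence=["council_minutes_2026-03-04.pdf"])],
    context="Two members opposed on funding-source grounds.",
)
print(package.to_json())
```

Because the artifact is plain structured data rather than finished prose, an agent can reformat it into a podcast script or alert while every claim stays traceable to the reporter who verified it.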

Bedrock: Human Ground Truth for Generative AI is accepting applications for Spring 2026. Project details follow below.

Faculty and Staff Leads

Zach Wise

Professor, Journalism

Emmy-winning interactive producer & Associate Professor @NorthwesternU, @KnightLab. Formerly of The New York Times. Creator of TimelineJS & StoryMapJS.

Project Details

Spring 2026
Bedrock: Human Ground Truth for Generative AI

Description

In an era of "agentic journalism," the role of the reporter is shifting from content creator to truth architect. Bedrock is a studio project focused on creating a High-Fidelity Reporting Package (HFRP)—a machine-readable "ground truth artifact" that allows a journalist to package field-verified data into a structured format for AI agents to reliably distribute.

Important Questions
  • How do we ensure that when an AI transforms a 'Bedrock' artifact into a 30-second reel, the nuance and accuracy of the original human reporting are preserved?
  • What specific data schemas (JSON, Knowledge Graphs) are most effective for grounding LLMs in 'ground truth' to prevent hallucinations during summary generation?
  • How can we design transparency interfaces that allow the end user to 'click through' an AI-generated summary to view the original, cryptographically signed human source material?
  • What does 'editing' look like when the output is multi-modal? How do editors audit the structured artifact before the AI agents begin the formatting process?
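The click-through question above hinges on being able to verify that source material was not altered after the human signed off. A minimal sketch of that check follows; a real newsroom system would use an asymmetric signature scheme (e.g. Ed25519) with published public keys, but HMAC-SHA256 stands in here because it is available in the Python standard library, and the key and field names are placeholders:

```python
import hashlib
import hmac
import json

NEWSROOM_KEY = b"demo-secret"  # placeholder key, not a real credential

def sign_artifact(artifact: dict, key: bytes) -> str:
    """Return a hex signature over the canonical JSON form of the artifact."""
    canonical = json.dumps(artifact, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify_artifact(artifact: dict, signature: str, key: bytes) -> bool:
    """Check that the artifact has not been altered since it was signed."""
    return hmac.compare_digest(sign_artifact(artifact, key), signature)

artifact = {"claim": "The budget passed 7-2 on March 4.", "reporter": "J. Doe"}
sig = sign_artifact(artifact, NEWSROOM_KEY)

assert verify_artifact(artifact, sig, NEWSROOM_KEY)        # untampered: passes
tampered = {**artifact, "claim": "The budget passed 9-0."}
assert not verify_artifact(tampered, sig, NEWSROOM_KEY)    # altered: rejected
```

A transparency interface could surface exactly this check: the AI-generated summary links to the signed artifact, and the reader's client re-verifies the signature before displaying it as human ground truth.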
Outcome

This studio project will produce a first-of-its-kind Machine-Readable Journalism Standard. Students will develop a functional "Refinery Dashboard" that demonstrates how human-collected data can ground Generative AI.

Apply to Project