Over the past few years, we have seen increased attention to the problem of bias. AI systems built on a substrate of machine learning are increasingly seen as biased. Automated information delivery systems (e.g., Facebook, Twitter) use algorithms that, by their nature, are biased in the kinds of news they recommend. And we now have an entire class of language models, built from millions of documents, that are demonstrably biased.
One could argue that bias is impossible to avoid, but this project is an attempt to do exactly that.
Professor of Electrical Engineering and Computer Science
Prior to joining the faculty at Northwestern, Kris founded the University of Chicago’s Artificial Intelligence Laboratory. His research focuses primarily on artificial intelligence, machine-generated content, and context-driven information systems. Kris currently sits on a policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). He received his PhD from Yale.
Over the course of this quarter, we want to define, design, and develop a system that takes a news story as input and determines whether it is biased. To do so, we will first need to decide what we even mean when we say something is biased, and only then consider how to recognize it.
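As a starting point for discussion, here is a minimal sketch of what such a system's interface might look like. It assumes one deliberately naive operationalization of "bias": the density of emotionally loaded terms in the story. The lexicon, the threshold, and the `analyze` function are all illustrative placeholders, not a proposed definition of bias; part of the project is deciding what should replace them.

```python
import re
from dataclasses import dataclass

# Hypothetical lexicon of emotionally loaded words (placeholder values).
LOADED_TERMS = {"outrageous", "disastrous", "radical", "heroic", "corrupt"}


@dataclass
class BiasReport:
    """Summary of the evidence the toy detector found in a story."""
    loaded_count: int
    total_words: int

    @property
    def loaded_ratio(self) -> float:
        # Fraction of words drawn from the loaded-term lexicon.
        return self.loaded_count / self.total_words if self.total_words else 0.0


def analyze(story: str, threshold: float = 0.02) -> tuple[bool, BiasReport]:
    """Return (is_biased, report) for a news story under this toy definition."""
    words = re.findall(r"[a-z']+", story.lower())
    loaded = sum(1 for w in words if w in LOADED_TERMS)
    report = BiasReport(loaded_count=loaded, total_words=len(words))
    return report.loaded_ratio > threshold, report


flagged, report = analyze(
    "The senator's outrageous plan is a disastrous power grab."
)
# flagged is True here: 2 of the 9 words are in the loaded-term lexicon.
```

Even this crude sketch makes the central question concrete: everything interesting hides inside `analyze`, and the quarter's work is figuring out what a defensible version of it would compute.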