Over the past few years, we have seen increased attention to the problem of bias. AI systems built on a substrate of machine learning are increasingly being seen as biased. Automated information delivery systems (e.g., Facebook, Twitter) use algorithms that, by their nature, are biased in the type of news they recommend. And we now have an entire class of language models, constructed from millions of documents, that are demonstrably biased. One could argue that bias is impossible to avoid, but this project is an attempt to do exactly that.
Over the course of this quarter, we want to define, design, and develop a system that takes a news story as input and determines whether it is biased. To do so, we will first need to determine what we even mean when we say something is biased, and then begin to consider how to recognize it.
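To make the input/output contract concrete, here is a minimal sketch of what such a system's interface might look like. Everything here is an assumption for illustration: the function name `is_biased`, the loaded-term list, and the keyword-count threshold are all placeholders, not the project's actual method (which is precisely what the quarter's work must define).

```python
# Hypothetical interface sketch. The detection logic below is a trivial
# loaded-word counter used only to show the shape of the problem; a real
# system would need a defensible definition of bias first.
LOADED_TERMS = {"outrageous", "disgraceful", "radical", "shocking"}

def is_biased(story: str, threshold: int = 2) -> bool:
    """Return True if the story contains at least `threshold` loaded terms."""
    words = story.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?\"'") in LOADED_TERMS)
    return hits >= threshold

print(is_biased("The senator gave a speech on the budget."))   # False
print(is_biased("Outrageous! The radical plan is shocking."))  # True
```

Even this strawman makes the core difficulty visible: the word list itself encodes a judgment about what counts as biased language, so the definitional question comes before any implementation choice.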