What would it take to automatically create a brief documentary about someone’s life? For this project, we’ll try to do just that. Given a biographical document (for example, a person’s Wikipedia page), can we extract key facts of their life, search the internet for still images that might illustrate those facts, and compose a brief video clip that puts together text (converted to speech) and images? We’ll do as much of it as we can in ten weeks, and if the work is promising, continue it in a future Knight Lab Studio session.
Professor of Electrical Engineering and Computer Science
Prior to joining the faculty at Northwestern, Kris founded the University of Chicago’s Artificial Intelligence Laboratory. His research focuses primarily on artificial intelligence, machine-generated content, and context-driven information systems. Kris currently sits on a policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). He received his PhD from Yale.
How can we prioritize facts extracted from a biographical document?
How can we use those facts to locate images that serve as illustration?
What’s the best way to convert the facts to spoken narration?
How can we add visual interest to a generated video, for example, with pan-and-zoom effects?
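As a concrete illustration of the last question, the classic pan-and-zoom treatment of still photos (often called the Ken Burns effect) can be reduced to computing a sequence of crop rectangles that shrink over time; each crop is then rescaled to the output frame size by whatever video library the project ends up using. The sketch below is a minimal, hypothetical version that only computes centered zoom-in crop boxes — the function name, linear zoom schedule, and parameters are assumptions for illustration, not a decided design.

```python
def ken_burns_crops(width, height, n_frames, zoom_end=1.3):
    """Compute centered crop boxes for a simple zoom-in (Ken Burns) effect.

    Returns a list of (left, top, right, bottom) boxes, one per frame,
    starting at the full image and ending zoomed in by `zoom_end`.
    A pan could be added by shifting `left`/`top` over time as well.
    """
    boxes = []
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1)          # progress: 0.0 -> 1.0
        zoom = 1.0 + (zoom_end - 1.0) * t     # linear zoom schedule
        crop_w = int(width / zoom)            # crop shrinks as zoom grows
        crop_h = int(height / zoom)
        left = (width - crop_w) // 2          # keep the crop centered
        top = (height - crop_h) // 2
        boxes.append((left, top, left + crop_w, top + crop_h))
    return boxes

boxes = ken_burns_crops(640, 480, n_frames=24)
```

Each box would be cropped out of the source photo and resized back to the output resolution, so the viewer sees a slow zoom; varying the schedule (ease-in/ease-out) or drifting the crop center produces the pan.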
Weeks 1-2: Identify a handful of test case biographies; as humans, perform the tasks to be automated to better understand the details; begin researching software libraries that can help.
Weeks 2-5: Begin building out system components. Test, iterate.
Weeks 6-10: Integrate components into a generated video clip. More testing, more iterating.
At the end of this quarter, one or more subsystems of the ultimate project will be prototyped and put to the test. If enough progress is made on the various subsystems, an integrated pipeline will be developed; otherwise, students will document their work for transfer to a future Knight Lab Studio session.