I am an associate professor in the Departments of Computer Science and Data Science at the University of Chicago.
I direct the Chicago Human+AI lab (CHAI).
I am also a visiting scientist at Abridge.
You can learn more about my life trajectory and find my official biography here.
My research brings together social sciences and machine learning to develop the best AI for humans. The applications that I am most interested in are 1) scientific discoveries, 2) healthcare, and 3) governance and democratic processes (ordered by random coin flips).
The central question driving my current research is how we can build an effective communication protocol between humans and AI. Here are some example problems that I believe will contribute to answering this question:
- Specification: How can we help AI understand human goals and help humans specify their goals? Prompting and reinforcement learning from human feedback are the main paradigms right now, but what about goals that cannot be easily described or elicited through human preferences? Examples include data-driven hypothesis generation and decision-focused summarization.
- Complementary AI: How can we develop AI that accounts for human intuitions and biases, complements human intelligence, and enables humans to supervise powerful AI that outperforms them ("scalable oversight")? This requires us to model human intuitions and biases and address them constructively. We have done prior work on the important role of human similarity judgments in case-based explanations and on machine explanations more generally, but we will need fundamental breakthroughs in the behavioral sciences, which in turn inform AI modeling and inference.
- Interpretability: How can we make sense of (powerful) AI that is much better than humans? I am most interested in interpretability for expanding human knowledge (e.g., in scientific discoveries) and for improving the controllability of AI. It is important to explore opportunities in the absence of identifiability.
In general, goals matter more to me than specific problems or methods. For instance, while hallucination is not among the problems above, I spend a lot of time thinking about it at Abridge, as non-factual statements are deeply problematic when generating clinical notes in healthcare.
If you are interested in seeing how my research description has evolved, you can check this page.
Selected recent work:
- Haokun Liu*, Yangqiaoyu Zhou*, Mingxuan Li*, Chenfei Yuan, and Chenhao Tan. Literature Meets Data: A Synergistic Approach to Hypothesis Generation. [Website]
- Yangqiaoyu Zhou, Haokun Liu, Tejes Srivastava, Hongyuan Mei, and Chenhao Tan. Hypothesis Generation with Large Language Models. NLP4Science workshop at EMNLP 2024.
- Chacha Chen, Shi Feng, Amit Sharma, and Chenhao Tan. Machine Explanations and Human Understanding. Accepted at Transactions on Machine Learning Research (TMLR); presented at FAccT 2023; Best paper award at the ICML 2022 Workshop on Human-Machine Collaboration and Teaming (workshop version).
- Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, and Chenhao Tan. Learning Human-Compatible Representations for Case-Based Decision Support. In Proceedings of ICLR 2023.
Openings:
As of fall 2024, you can apply to work with me through the PhD programs in Computer Science and Data Science.
I also have a postdoc opening in human-centered AI that can start at any time. Please email me your CV and the names of your references. I am particularly open to candidates with a background in medicine or policy (e.g., MD/PhDs and PhD students in economics).
I am always looking for motivated master's and undergraduate students who are interested in human-centered AI, NLP, and computational social science.
Please read this FAQ before contacting me.
If you want to learn more about my research, check out my papers. And kudos to all my collaborators!