I moved to the University of Chicago after 3.5 wonderful years at Boulder. Before Boulder, I was a postdoctoral researcher at the University of Washington, working with Noah Smith for a year. Prior to that, I spent wonderful years in Ithaca working with Lillian Lee and many others, and obtained a PhD degree in Computer Science at Cornell University. I am originally from Jingdezhen, China, and lived in Beijing for four years while studying Computer Science and Economics at Tsinghua University.

You are probably here because you are interested in learning about life trajectories. I have a paper on that subject, and the main takeaway is "People, unlike trees, thrive by relocation".

If you are looking for a formal bio, here is the most recent one:

Chenhao Tan is an associate professor of computer science and data science at the University of Chicago, and is also a visiting scientist at Abridge. He obtained his PhD degree in the Department of Computer Science at Cornell University and bachelor's degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He also won a Sloan research fellowship, an NSF CAREER award, an NSF CRII award, a Google research scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook fellowship, and a Yahoo! Key Scientific Challenges award.


History of research description on the homepage:

01/01/2025 ---

My research brings together social sciences and machine learning to develop the best AI for humans. The applications that I am most interested in are 1) scientific discoveries, 2) healthcare, and 3) governance and democratic processes (ordered by random coin flips).

The central question driving my current research is how we can build an effective communication protocol between humans and AI. Here are some example problems that I think will contribute to answering this question:

  • Specification: How can we help AI understand human goals and help humans specify their goals? Prompting and reinforcement learning from human feedback are the main paradigms right now, but what about goals that cannot be easily described or elicited through human preferences? Examples include data-driven hypothesis generation and decision-focused summarization.
  • Complementary AI: How can we develop AI that accounts for human intuitions/biases and complements human intelligence? This requires us to model human intuitions/biases and address them constructively. We have done some prior work on the important role of human similarity judgments in case-based explanations, and in machine explanations more generally. But we will need fundamental breakthroughs in behavioral sciences, which in turn inform AI modeling/inference.
  • Interpretability: How can we make sense of (powerful) AI that is much better than humans? I am most interested in interpretability for expanding human knowledge (e.g., in scientific discoveries) and for improving the controllability of AI. It is important to explore opportunities in the absence of identifiability.

In general, goals matter more to me than the specific problem or method. For instance, while hallucination is not among the above problems, I spend a lot of time thinking about hallucination at Abridge, as non-factual statements are very problematic for generating clinical notes in healthcare.

Unknown beginning of time --- 12/31/2024

My research brings together social sciences and machine learning to develop the best AI for humans. Specifically, my work aims to enable effective human-AI interaction by

  1. Understanding human decision making through language. We analyze large amounts of textual data to uncover the connection between language and decisions in two directions. In one direction, we leverage natural experiments to understand how language shapes human decisions (e.g., what makes persuasion effective and the language of bargaining). In the other direction, we examine explanations of human decisions to identify their biases.

  2. Generating human-centered explanations. Our work shows that current methods of generating explanations for AI predictions fail to improve human-AI decision making. We develop a novel theoretical framework to show that the missing link is to model human interpretation of AI explanations. We thus build algorithms to align AI explanations with human intuitions and demonstrate substantial improvements in human performance.

  3. Developing novel paradigms of human-AI interaction. We explore additional ways in which humans and AI can complement each other in three directions: 1) appropriate and effective delegation to AI; 2) decision-focused summarization, a novel formulation of the classic NLP task that identifies the most relevant information to support decision making; and 3) few-shot learning from human explanations, so that humans can effectively improve large language models (LLMs).