NCSA staff who would like to submit an item for the calendar can email newsdesk@ncsa.illinois.edu.
Abstract: In this talk, I will describe prior work and current research in human-AI collaboration, focusing on the role theoretical modeling can play in better understanding when socially desirable outcomes are feasible. In particular, I will discuss how different factors, such as human cognitive biases, the way the human and AI interact, and the accuracy of the AI tool, influence these outcomes.
First, I will present a stylized model of a strategic decision-maker using an algorithmic tool: a firm using an algorithm to select candidates to hire. Here, we consider a setting where the human decision-maker has access to side information, specifically the employment status of candidates, which gives a noisy signal of both candidate quality and candidate availability. The human decision-maker combines that information with the algorithm’s ranking of candidates in order to decide which candidate to ultimately select. In this setting, we show that counter-intuitive results can occur, such as increased accuracy of the AI tool leading to worse social outcomes, including worse outcomes for the company deploying the AI tool itself.
Finally, I will discuss further topics in human-AI collaboration, including settings where the human’s preferences may be misaligned with the AI tool, or there may be misalignment within a population of humans.
This talk is based on work with Bhaskar Ray Chaudhury, Nicole Immorlica, Brendan Lucier, Jiaxin Song, and Parnian Shahkar.
Bio: Kate Donahue is a METEOR postdoc at MIT, working with Manish Raghavan, and starting in summer ’26 will be an assistant professor of computer science at the University of Illinois Urbana-Champaign (UIUC). She works on algorithmic problems relating to the societal impact of AI, such as fairness, human-AI collaboration, and game-theoretic models of federated/collaborative learning.