
Gagan Bansal
Drawing on insights from both Artificial Intelligence and Human-Computer Interaction, I am passionate about enabling human-AI interactions that augment human performance and align with human values.
At MSR, I am part of the Human-AI eXperiences Team (HAX). Prior to joining MSR in 2022, I was a Ph.D. student at the University of Washington, Seattle where I was advised by Dan Weld and was a part of the UW Lab for Human-AI Interaction. There, I studied the important problems of helping users decide when to trust AI recommendations [1], preserving AI's trustworthiness [2], and training AI to optimize joint human-AI performance [3].
- Email: {firstname}{lastname}@microsoft.com
- Google Scholar
Select Publications
Click here for publications after 2021
Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA
Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer
ACL 2021
TLDR: First user study to show that explanations can lead to higher appropriate reliance on AI than simply communicating AI's calibrated confidence. However, the best explanation approach can change with the modality.
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Gagan Bansal*, Tongshuang Wu*, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld
CHI 2021
TLDR: Many prior works argue that explanations improve decision-making. But those studies observed improvements only when the AI was significantly more accurate than both the unassisted people and the human-AI team; removing the human entirely would have performed even better. XAI should instead focus on appropriate reliance and complementary performance.
Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork
Gagan Bansal, Besmira Nushi, Ece Kamar, Eric Horvitz, Daniel S. Weld
AAAI 2021
TLDR: For a simplified human-AI team, we formally show that the most accurate AI may not be the optimal teammate: there exists a lower-accuracy predictor that leads to higher team performance (expected utility).
Data Staining: A Method for Comparing Faithfulness of Explainers
Jacob Sippy, Gagan Bansal, Daniel S. Weld
ICML-WHI 2020
TLDR: A new method for creating unit tests that assess the faithfulness of post hoc explainers to black-box models. It applies to multiple domains (text, images) and is model-agnostic.
Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff
Gagan Bansal, Besmira Nushi, Ece Kamar, Walter Lasecki, Daniel S. Weld, Eric Horvitz
AAAI 2019
TLDR: In AI-assisted decision making, updates that increase AI's accuracy (e.g., from the availability of more training data) can actually decrease human-AI team performance by introducing AI behavior that violates existing user expectations.
Beyond Accuracy: The Role of Mental Models in Human-AI Teams
Gagan Bansal, Besmira Nushi, Ece Kamar, Walter Lasecki, Daniel S. Weld, Eric Horvitz
HCOMP 2019
TLDR: In AI-assisted decision making settings, the complexity of an AI's error regions (parsimony and stochasticity) and task dimensionality affect users' ability to form a mental model of the AI's competence.
The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal
CACM 2018
TLDR: Argues that intelligibility is essential and highlights key challenges and research directions that call for interdisciplinary research spanning AI and HCI, including supporting interactive explanations, drill-down, actionability, and control.
A Coverage-Based Utility Model for Identifying Unknown Unknowns
Gagan Bansal, Daniel S. Weld.
AAAI 2018
TLDR: A new utility function for discovering high-confidence AI errors that optimizes for both the salience and the diversity of the errors found.
Mentoring
At UW, I had the opportunity to work with and advise many excellent undergraduate, Masters, and high-school students:
- Cindy Su (Co-advised with Ben Lee, Summer 2021-Present)
- Prithvi Tarale (Autumn 2020-Summer 2021)
- Joyce Zhou (Co-advised with Dan Weld, Autumn 2019-Summer 2020)
- Jacob Sippy (Co-advised with Dan Weld, Autumn 2018-Summer 2020)
- Lynsey Liu (Co-advised with Jonathan Bragg, Autumn 2017-2018)
- Ziyao Huang (Co-advised with Jonathan Bragg, Autumn 2017-2018)
- Diana Iftimie (Co-advised with Dan Weld, Winter 2017)