
Ziwei Gu




Updates

03/2024 - Our eye-tracking paper was accepted at CHI'24 Late Breaking Work.

01/2024 - Two papers conditionally accepted at CHI'24. Congratulations to my co-authors!

Hi, I'm Ziwei.

I am a second-year Ph.D. student in Computer Science at Harvard University, advised by Dr. Elena Glassman. My research focuses on augmenting human cognition and efficiency by leveraging large language models (LLMs) and interactive techniques, with the aim of ensuring that potential AI errors can be easily noticed, judged, and recovered from, a concept I formalize as AI-resilience.

Before coming to Harvard, I earned a bachelor's degree in Mathematics and Computer Science (December 2020) and a master's degree in Computer Science (May 2021) from Cornell University.

Publications
Why Do Skimmers Perform Better with Grammar-Preserving Text Saliency Modulation (GP-TSM)? Evidence from an Eye Tracking Study
Ziwei Gu, Owen Raymond, Naser Al Madi, Elena L. Glassman
CHI 2024 Late Breaking Work
May 11-16, 2024, Honolulu, HI, USA
paper / video

How can we get a better understanding of the mechanism through which an LLM-based reading assistance tool (GP-TSM) supports reading? We conducted an eye-tracking user study with 24 participants, followed by an analysis of the unique gaze patterns associated with GP-TSM.

An AI-Resilient Text Rendering Technique for Reading and Skimming Documents
Ziwei Gu, Ian Arawjo, Kenneth Li, Jonathan K. Kummerfeld, Elena L. Glassman
CHI 2024
May 11-16, 2024, Honolulu, HI, USA
paper / video

We propose the idea of "AI-resilience" and an LLM-powered technique that supports reading through recursive summarization while allowing readers to easily notice and recover from LLM summaries they disagree with.

Supporting Sensemaking of Large Language Model Outputs at Scale
Katy Ilonka Gero, Chelse Swoopes, Ziwei Gu, Jonathan K. Kummerfeld, Elena L. Glassman
CHI 2024
May 11-16, 2024, Honolulu, HI, USA
paper

Large language models (LLMs) are capable of generating multiple responses to a single prompt, yet little effort has been expended to help people make use of this capability. In this paper, we explore how to present many LLM responses at once.

Tessera: Discretizing Data Analysis Workflows on a Task Level
Jing Nathan Yan, Ziwei Gu, Jeffrey M Rzeszotarski
CHI 2021
May 8-13, 2021, Yokohama, Japan
paper / video / slides

Interaction logs can be extremely complex yet useful. Breaking down event logs into goal-directed segments can make it easier to understand user workflows.

Understanding User Sensemaking in Machine Learning Fairness Assessment Systems
Ziwei Gu, Jing Nathan Yan, Jeffrey M Rzeszotarski
WWW 2021
April 19-23, 2021, Ljubljana, Slovenia
paper / video / slides

We ask a fundamental research question: How do core design elements of debiasing systems shape how people reason about biases? We present distinctive sensemaking patterns and surprising findings from think-aloud studies.

Silva: Interactively Assessing Machine Learning Fairness Using Causality
Jing Nathan Yan, Ziwei Gu, Hubert Lin, Jeffrey M Rzeszotarski
CHI 2020
April 25-30, 2020, Honolulu, HI, USA
paper / short video / long video / slides

We present Silva, an interactive tool that utilizes a causal graph linked with quantitative metrics to help people find and reason about sources of biases in datasets and machine learning models.

Technical Reports
Neural Open Information Extraction with Transformers
Wes Gurnee, Ziwei Gu

paper / code / slides

We trained a deep learning model (Transformer) for open information extraction, modeled as a sequence-to-sequence transduction task. We showed that our model was competitive with state-of-the-art systems, but without depending on other NLP tools.


Layout inspired by this template