Ziwei Gu

Updates

05/2021 - I graduated from Cornell!

01/2021 - My first first-author paper on user sensemaking was accepted at WWW'21.

01/2021 - Our paper on mining interaction logs was accepted at CHI'21.

06/2020 - I started as a data science intern at Lyft.

01/2020 - Our paper on Silva was accepted at CHI'20. Congratulations to my co-authors!

Hi, I'm Ziwei.

I am a first-year Ph.D. student in Computer Science at Harvard University, advised by Professor Elena Glassman. My research interests lie at the intersection of human-computer interaction and machine learning/natural language processing.

Before coming to Harvard, I earned a bachelor's degree in Mathematics and Computer Science (December 2020) and a master's degree in Computer Science (May 2021), both from Cornell University.

Publications
Tessera: Discretizing Data Analysis Workflows on a Task Level
Jing Nathan Yan, Ziwei Gu, Jeffrey M Rzeszotarski
CHI, 2021
May 8-13, 2021, Yokohama, Japan
paper / video / slides

Interaction logs are useful but can be extremely complex. Breaking event logs into goal-directed, task-level segments makes user workflows easier to understand.

Understanding User Sensemaking in Machine Learning Fairness Assessment Systems
Ziwei Gu, Jing Nathan Yan, Jeffrey M Rzeszotarski
WWW, 2021
April 19-23, 2021, Ljubljana, Slovenia
paper / video / slides

We ask a fundamental research question: how do the core design elements of debiasing systems shape the way people reason about bias? We present distinctive sensemaking patterns and surprising findings from our think-aloud studies.

Silva: Interactively Assessing Machine Learning Fairness Using Causality
Jing Nathan Yan, Ziwei Gu, Hubert Lin, Jeffrey M Rzeszotarski
CHI, 2020
April 25-30, 2020, Honolulu, HI, USA
paper / short video / long video / slides

We present Silva, an interactive tool that utilizes a causal graph linked with quantitative metrics to help people find and reason about sources of biases in datasets and machine learning models.

Technical Reports
Neural Open Information Extraction with Transformers
Wes Gurnee, Ziwei Gu

paper / code / slides

We trained a Transformer model for open information extraction, framed as a sequence-to-sequence transduction task, and showed that it was competitive with state-of-the-art systems without depending on other NLP tools.


Layout inspired by this template