Chenglei Si
[email]
[scholar]
[twitter]
[github]
1st Year PhD Student at Stanford NLP
About
At Stanford, I'm rotating with Diyi Yang and Michael Bernstein. Before coming to Stanford, I did my undergrad at the University of Maryland, where I was advised by Jordan Boyd-Graber, while also working closely with Hal Daumé III, He He, Danqi Chen, and Sherry Wu. In summer 2022, I did a research internship at Microsoft hosted by Zhe Gan. Before that, I got into NLP research by working with Min-Yen Kan and Zhiyuan Liu.
Nowadays, I'm fascinated by Human-Muppet Interaction, and I'm particularly concerned about the following questions:
- How can humans verify what Muppets say, especially in tasks where humans lack domain expertise?
- How can we enable human-Muppet collaboration to complete tasks that neither humans nor Muppets can solve alone?
- What is the long-term impact of humans relying on Muppets?
- How can we measure and improve the safety of Muppets?
In pursuing these research questions, I tend to move away from existing benchmarks and put human needs at the center of my research. I also aim to craft an interdisciplinary research agenda that connects insights from HCI, NLP, ML, Psychology, and Linguistics.
Back in the old days, I also worked on Question Answering, Tokenization, and Prompting.
Recent Papers
-
Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong
Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, Jordan Boyd-Graber
preprint
[paper]
[tweet]
-
Mixture of Prompt Experts for Generalizable and Interpretable Question Answering
Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, Jordan Boyd-Graber
EMNLP 2023 Findings
[paper]
[code]
-
Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
Sander Schulhoff*, Jeremy Pinto*, Anaum Khan, Louis-François Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, Jordan Boyd-Graber
EMNLP 2023
[webpage]
-
Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations
Chenglei Si*, Dan Friedman*, Nitish Joshi, Shi Feng, Danqi Chen, He He
ACL 2023
[paper]
[code]
[tweet]
[OpenReview]
-
Prompting GPT-3 To Be Reliable
Chenglei Si, Zhe Gan, Zhengyuan Yang, Shuohang Wang, Jianfeng Wang, Jordan Boyd-Graber, Lijuan Wang
ICLR 2023
[paper]
[code]
[tweet]
[video]
-
Sub-Character Tokenization for Chinese Pretrained Language Models
Chenglei Si*, Zhengyan Zhang*, Yingfa Chen*, Fanchao Qi, Xiaozhi Wang, Zhiyuan Liu, Yasheng Wang, Qun Liu, Maosong Sun
TACL 2023
[paper]
[code]
-
Re-Examining Calibration: The Case of Question Answering
Chenglei Si, Chen Zhao, Sewon Min, Jordan Boyd-Graber
EMNLP 2022 Findings
[paper]
[code]
[video]
Travels
- Oct 2023, UIST @ San Francisco
- July 2023, ACL @ Toronto
- May 2023, ICLR @ Kigali
- March 2023, Visit Day @ Stanford
- March 2023, Visit Day @ NYU
- Dec 2022, EMNLP @ Abu Dhabi
- Summer 2022, Internship + NAACL @ Seattle