Hi!

I’m currently a second-year PhD student at Berkeley AI Research, advised by Anca Dragan and Dan Klein.

I’m interested in building agents that collaborate and interact with humans, using language as a medium to do so. I spend part of my week at Lilt, where I explore these questions in the context of human-in-the-loop machine translation; based on my experiences there, I wrote an essay on interesting directions for human-in-the-loop ML in industry. Currently, I’m excited about interactive code generation models, dialog, and language-guided RL.

Previously, I graduated from MIT with a double major in computer science/electrical engineering and philosophy. There, I did research on human-inspired AI in the Computational Cognitive Science Group, advised by Kelsey Allen and Josh Tenenbaum, and on adversarial examples and machine learning security as a founding member of labsix. I also spent a great summer with the Natural Language Understanding group at Google Research NY, advised by David Weiss.

I read stuff, take photos, and write things. If it sounds like we might get along, let me know! 😄

Publications

InCoder: A Generative Model for Code Infilling and Synthesis

Daniel Fried*, Armen Aghajanyan*, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis

Preprint, 2022.

We open-source a new large language model for code that can both generate code and fill in blanks, enabling tasks like docstring generation, code rewriting, type hint inference, and more.
[Paper] [Twitter] [Demo] [Site] [Code]

Automatic Correction of Human Translations

Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero

NAACL 2022. Best Task Paper, Best Resource Paper, Best Theme Paper Honorable Mention.

We introduce the task of translation error correction and show how models can augment professional translators to produce higher-quality translations.
[Paper] [Twitter] [Data]

Towards Flexible Inference in Sequential Decision Problems via Bidirectional Transformers

Micah Carroll, Jessy Lin, Orr Paradise, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, Sam Devlin

Generalizable Policy Learning in the Physical World Workshop, ICLR 2022.

We show how a single model trained with a BERT-like masked prediction objective can unify inference tasks in sequential decision-making settings (e.g., RL): behavior cloning, future prediction, and more.
[Paper] [Twitter]

Inferring Rewards from Language in Context

Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan

ACL 2022.

Rather than treating language as instructions to execute literally, we treat it as evidence about what a person wants, and show how to infer human preferences (reward functions) from language in context.
[Paper] [Twitter] [Code]

Black-box Adversarial Attacks with Limited Queries and Information

Andrew Ilyas*, Logan Engstrom*, Anish Athalye*, Jessy Lin*

ICML 2018.

We generate adversarial examples for real-world ML systems like the Google Cloud Vision API using only access to predicted labels.
[Paper] [Blog] [Code]