Hi!
I’m a fifth-year PhD student at Berkeley AI Research, advised by Anca Dragan and Dan Klein. My research is supported by the Apple Scholars in AI Fellowship. I’m also a visiting researcher at Meta AI.
I work on building AI agents that can collaborate with humans.
Previously, I worked on research and product at Lilt, building human-in-the-loop machine translation (a copilot for expert translators). I graduated with a double major in computer science and philosophy from MIT, where I did research on human-inspired AI with the Computational Cognitive Science Group, advised by Kelsey Allen and Josh Tenenbaum, and on machine learning security as a founding member of labsix. I also spent a great summer with the Natural Language Understanding group at Google Research NY, advised by David Weiss.
Email / GitHub / Twitter/X / Scholar
Research
Learning to Model the World with Language
Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan
ICML 2024 (oral, top 1.5%)
We introduce Dynalang, an agent that solves tasks by using diverse types of language to predict the future in a multimodal world model.
Decision-Oriented Dialogue for Human-AI Collaboration
Jessy Lin*, Nicholas Tomlin*, Jacob Andreas, Jason Eisner
TACL/ACL 2024, LLM Agents @ ICLR 2024
We introduce a new task and suite of environments to evaluate how agents like LLMs can assist humans with everyday decision-making.
InCoder: A Generative Model for Code Infilling and Synthesis
Daniel Fried*, Armen Aghajanyan*, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis
ICLR 2023 (spotlight, top 6%)
We open-source a large language model for code that can both generate and infill code, enabling tasks like docstring generation, code rewriting, type hint inference, and more.
Automatic Correction of Human Translations
Jessy Lin, Geza Kovacs, Aditya Shastry, Joern Wuebker, John DeNero
NAACL 2022 (Best Task Paper, Best Resource Paper, Best Theme Paper honorable mention)
We introduce the task of translation error correction and show how models can augment professional translators in the loop.
UniMASK: Unified Inference in Sequential Decision Problems
Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, Sam Devlin
NeurIPS 2022 (oral, top 1.8%)
We show how a single model trained with a BERT-like masked prediction objective can unify inference in sequential decision-making settings (e.g., for RL): behavior cloning, future prediction, and more.
Inferring Rewards from Language in Context
Jessy Lin, Daniel Fried, Dan Klein, Anca Dragan
ACL 2022
We infer human preferences (reward functions) from language.
Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas*, Logan Engstrom*, Anish Athalye*, Jessy Lin*
ICML 2018
We generate adversarial examples for real-world ML systems like the Google Cloud Vision API using only access to predicted labels.