I design reinforcement learning (RL) algorithms: AI methods that learn how to make intelligent decisions from trial and error. I am especially interested in self-supervised methods, which enable agents to learn intelligent behaviors without labels or human supervision. Our group has developed some of the foremost algorithms and analysis for such self-supervised RL methods; here are a few examples. I run the Princeton Reinforcement Learning Lab.
Bio: Before joining Princeton, I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD. My undergraduate studies were in math at MIT.
Join us! (Fall 2024) I am recruiting 1–2 PhD students and 1–2 MS students to start in Fall 2025. I am also recruiting 1 postdoc and 1 predoc researcher to start in Spring 2025. Please read this page for more information and details on how to apply.
news
Sep 13, 2024
JaxGCRL: A new benchmark for goal-conditioned RL that is blazing fast, letting you train at 1 million steps per minute on a single GPU. Experiments run so fast that the algorithm design process becomes interactive. Tools like this not only make research much more accessible (e.g., you can now run a bunch of interesting experiments in a free Colab notebook before the 90 min timeout), but will also change how RL is taught (less fighting with dependencies, more experiments on complex tasks, less waiting for experiments to queue and finish); stay tuned for COS 435 this Spring!
Sep 12, 2024
Upcoming talks:
NYU GRAIL (Oct 2, 2024).
Facebook – Reasoning and Planning (Oct 8, 2024).
Colloquium at Queens College (Oct 21, 2024).
European Workshop on RL: Keynote (Oct 28, 2024).
Aug 13, 2024
Skills and directed exploration seem to emerge from contrastive RL! Check out the website for videos, code, and the full paper! Led by Grace Liu with Michael Tang.
Aug 9, 2024
To help change perceptions about who does RL, we’ve put together a poster of Notable Women in RL!
Jul 1, 2024
Excited to share work that will be presented at ICML 2024!
The aim is to highlight a small subset of the work done in the group and to give a sense of the sorts of problems we're working on. Please see Google Scholar for a complete and up-to-date list of publications.
2024
Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making
Vivek Myers, Chongyi Zheng, Anca Dragan, Sergey Levine, and Benjamin Eysenbach
In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024