Benjamin Eysenbach

Assistant Professor of Computer Science at Princeton University.
Associated Faculty with the Princeton Program in Cognitive Science, the Princeton Language Initiative, and Natural and Artificial Minds.


Room 416

35 Olden St

Princeton NJ 08544

eysenbach@princeton.edu

I design reinforcement learning (RL) algorithms: AI methods that learn how to make intelligent decisions from trial and error. I am especially interested in self-supervised methods, which enable agents to learn intelligent behaviors without labels or human supervision. Our group has developed some of the foremost algorithms and analysis for such self-supervised RL methods. Here are a few example papers; here and here are some tutorials to learn more about our research. My work has been recognized by an NSF CAREER Award, a Hertz Fellowship, an NSF GRFP Fellowship, and the Alfred Rheinstein Faculty Award. I run the Princeton Reinforcement Learning Lab.

Before joining Princeton, I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD. My undergraduate studies were in math at MIT.

Join us! Please read this page before emailing me about joining the lab.

news

Dec 1, 2025 :brain: We’re organizing a NeurIPS 2025 workshop, Data on the Brain & Mind! The deadline for paper and tutorial submissions is Aug 22.
Jul 15, 2025 :star: Awarded the NSF CAREER award for Unsupervised and Autonomous Reinforcement Learning of Skills
Jul 14, 2025 :mountain: I gave an ICML tutorial on generative AI and reinforcement learning, together with Amy Zhang. Recording and slides are available on the tutorial website.
Jun 11, 2025 :shamrock: I gave a tutorial on intrinsic motivation and self-supervised RL at RLDM! Recording and slides are available on the tutorial website.
Apr 24, 2025 :airplane: Princeton RL @ ICLR 2025! Come say hi in Singapore!
Mar 21, 2025 :rocket: Check out our new preprint, 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities, led by Kevin Wang, Ishaan Javali, and Michał Bortkiewicz!
Feb 1, 2025 :cyclone: Check out our new preprint, Horizon Generalization in Reinforcement Learning, led by Cathy Ji and Vivek Myers!
Jan 7, 2025 :star: Awarded a grant from the Princeton AI Lab to study "Do brains perceive, act, and plan using temporal contrast?" together with Nathaniel Daw.
Jan 2, 2025 :palm_tree: We’re launching an undergraduate research program (REU) together with state and community colleges in NJ. This is a paid program, and no research experience is required. Apply by Feb. 1.
Jan 2, 2025 :apple: I’m teaching Introduction to Reinforcement Learning this Spring, together with a fantastic team of TAs. I created this course to give students a strong foundation in RL and to highlight its unifying themes (RL isn’t just a bag of tricks). All course notes and assignments will be posted publicly, so you can follow along!

selected publications

The aim here is to highlight a small subset of the group's work and to give a sense of the sorts of problems we're working on. Please see Google Scholar for a complete and up-to-date list of publications.

2025

  1. Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning
    Chongyi Zheng, Jens Tuyls, Joanne Peng, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  2. The "Law" of the Unconscious Contrastive Learner: Probabilistic Alignment of Unpaired Modalities
    Yongwei Che and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  3. Invariance to Planning in Goal-Conditioned RL
    Catherine Ji, Vivek Myers, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  4. Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research
    Michał Bortkiewicz, Władek Pałucki, Vivek Myers, Tadeusz Dziarmaga, Tomasz Arczewski, Łukasz Kuciński, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  5. A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals
    Grace Liu, Michael Tang, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025

2024

  1. Learning to Assist Humans without Inferring Rewards
    Vivek Myers, Evan Ellis, Sergey Levine, Benjamin Eysenbach, and Anca Dragan
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024
  2. Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference
    Benjamin Eysenbach, Vivek Myers, Ruslan Salakhutdinov, and Sergey Levine
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024
  3. Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making
    Vivek Myers, Chongyi Zheng, Anca Dragan, Sergey Levine, and Benjamin Eysenbach
    In Forty-first International Conference on Machine Learning, 2024
  4. Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View
    Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  5. Contrastive Difference Predictive Coding
    Chongyi Zheng, Ruslan Salakhutdinov, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  6. Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
    Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, and Sergey Levine
    In The Twelfth International Conference on Learning Representations, 2024

2023

  1. Contrastive Value Learning: Implicit Models for Simple Offline RL
    Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, and Jonathan Tompson
    In Conference on Robot Learning, 2023
  2. Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective
    Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov
    In International Conference on Learning Representations, 2023
  3. A Connection between One-Step RL and Critic Regularization in Reinforcement Learning
    Benjamin Eysenbach, Matthieu Geist, Sergey Levine, and Ruslan Salakhutdinov
    In International Conference on Machine Learning, 2023
  4. Contrastive Example-Based Control
    Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, and Chelsea Finn
    In Learning for Dynamics and Control Conference, 2023
  5. Probabilistic Reinforcement Learning: Using Data to Define Desired Outcomes, and Inferring How to Get There
    Benjamin Eysenbach
    PhD Thesis, Carnegie Mellon University, 2023

2022

  1. Mismatched No More: Joint Model-Policy Optimization for Model-Based RL
    Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2022
  2. Contrastive Learning as Goal-Conditioned Reinforcement Learning
    Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  3. Imitating Past Successes can be Very Suboptimal
    Benjamin Eysenbach, Soumith Udatha, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  4. The Information Geometry of Unsupervised Reinforcement Learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2022
  5. Maximum Entropy RL (Provably) Solves Some Robust RL Problems
    Benjamin Eysenbach and Sergey Levine
    In International Conference on Learning Representations, 2022

2021

  1. Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
    Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2021
  2. C-Learning: Learning to Achieve Goals via Recursive Classification
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2021
  3. Robust Predictable Control
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2021

2019

  1. Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2019
  2. Diversity is All You Need: Learning Skills without a Reward Function
    Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine
    In International Conference on Learning Representations, 2019