Benjamin Eysenbach

Assistant Professor of Computer Science at Princeton University.
Affiliated/Associated Faculty with the Princeton Program in Cognitive Science, the Princeton Language Initiative, and Natural and Artificial Minds.


Room 416

35 Olden St

Princeton NJ 08544

eysenbach@princeton.edu

I design reinforcement learning (RL) algorithms: AI methods that learn how to make intelligent decisions from trial and error. I am especially interested in self-supervised methods, which enable agents to learn intelligent behaviors without labels or human supervision. Our group has developed some of the foremost algorithms and analysis for such self-supervised RL methods. Here are a few example papers; here and here are some tutorials to learn more about our research. My work has been recognized by an NSF CAREER Award, a Hertz Fellowship, an NSF GRFP Fellowship, and the Alfred Rheinstein Faculty Award. I run the Princeton Reinforcement Learning Lab.

Before joining Princeton, I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD. My undergraduate studies were in math at MIT.

Join us! I am not hiring PhD students in Fall 2025. I am hiring a postdoc. Please read this page before emailing me about joining the lab.

news

Dec 1, 2025 :palm_tree: Princeton RL @ NeurIPS 2025! I’m excited to share the progress we’ve made in RL algorithms (and the many problems still unsolved). We’re also presenting preliminary work at several of the workshops. :brain: Finally, on December 7th I’m organizing a NeurIPS 2025 workshop, Data on the Brain & Mind, together with Eva Yi Xie, Catherine Ji, Vivek Myers, Archer Wang, Mahsa Bastankhah, Jenelle Feather, Erin Grant, and Richard Gao.
Oct 2, 2025 :rocket: Honored and excited to receive a GPU grant from NVIDIA for research into Interactive Generative Models that Learn via Discovery, not Data.
Jul 15, 2025 :star: Awarded the NSF CAREER Award for Unsupervised and Autonomous Reinforcement Learning of Skills.
Jul 14, 2025 :mountain: I gave an ICML tutorial on generative AI and reinforcement learning, together with Amy Zhang. Recording and slides are available on the tutorial website.
Jun 11, 2025 :shamrock: I gave a tutorial on intrinsic motivation and self-supervised RL at RLDM! Recording and slides are available on the tutorial website.
Apr 24, 2025 :airplane: Princeton RL @ ICLR 2025! Come say hi in Singapore!
Jan 7, 2025 :star: Awarded a grant from the Princeton AI Lab to study “Do brains perceive, act, and plan using temporal contrast?” together with Nathaniel Daw.
Aug 9, 2024 In an attempt to change perceptions about who does RL, we’ve put together a poster of Notable Women in RL!

selected publications

The aim is to highlight a small subset of the work done in the group, and to give a sense for the sorts of problems that we're working on. Please see Google Scholar for a complete and up-to-date list of publications.

2025

  1. 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities
    Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Benjamin Eysenbach, and others
    arXiv preprint arXiv:2503.14858, 2025
  2. Intention-Conditioned Flow Occupancy Models
    Chongyi Zheng, Seohong Park, Sergey Levine, and Benjamin Eysenbach
    arXiv preprint arXiv:2506.08902, 2025
  3. Identifying nonequilibrium degrees of freedom in high-dimensional stochastic systems
    Catherine Ji, Ravin Raj, Benjamin Eysenbach, and Gautam Reddy
    arXiv preprint arXiv:2508.08247, 2025
  4. Contrastive Representations for Temporal Reasoning
    Alicja Ziarko, Michał Bortkiewicz, Michał Zawalski, Benjamin Eysenbach, and Piotr Miłoś
    arXiv preprint arXiv:2508.13113, 2025
  5. Invariance to Planning in Goal-Conditioned RL
    Catherine Ji, Vivek Myers, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  6. Accelerating Goal-Conditioned Reinforcement Learning Algorithms and Research (led by Michał Bortkiewicz)
    Michał Bortkiewicz, Władek Pałucki, Vivek Myers, Tadeusz Dziarmaga, Tomasz Arczewski, Łukasz Kuciński, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025
  7. A Single Goal is All You Need: Skills and Exploration Emerge from Contrastive RL without Rewards, Demonstrations, or Subgoals
    Grace Liu, Michael Tang, and Benjamin Eysenbach
    In The Thirteenth International Conference on Learning Representations, 2025

2024

  1. Learning to Assist Humans without Inferring Rewards
    Vivek Myers, Evan Ellis, Sergey Levine, Benjamin Eysenbach, and Anca Dragan
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024
  2. Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference
    Benjamin Eysenbach, Vivek Myers, Russ Salakhutdinov, and Sergey Levine
    In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024
  3. Learning Temporal Distances: Contrastive Successor Features Can Provide a Metric Structure for Decision-Making
    Vivek Myers, Chongyi Zheng, Anca Dragan, Sergey Levine, and Benjamin Eysenbach
    In Forty-first International Conference on Machine Learning, 2024
  4. Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View
    Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  5. Contrastive Difference Predictive Coding
    Chongyi Zheng, Ruslan Salakhutdinov, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  6. Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
    Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, and Sergey Levine
    In The Twelfth International Conference on Learning Representations, 2024

2023

  1. Contrastive Value Learning: Implicit Models for Simple Offline RL
    Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, and Jonathan Tompson
    In Conference on Robot Learning, 2023
  2. Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective
    Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Russ Salakhutdinov
    In International Conference on Learning Representations, 2023
  3. A Connection between One-Step RL and Critic Regularization in Reinforcement Learning
    Benjamin Eysenbach, Matthieu Geist, Sergey Levine, and Ruslan Salakhutdinov
    In International Conference on Machine Learning, 2023
  4. Contrastive Example-Based Control
    Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, and Chelsea Finn
    In Learning for Dynamics and Control Conference, 2023
  5. Probabilistic Reinforcement Learning: Using Data to Define Desired Outcomes, and Inferring How to Get There
    Benjamin Eysenbach
    PhD Thesis, Carnegie Mellon University, 2023

2022

  1. Mismatched No More: Joint Model-Policy Optimization for Model-Based RL
    Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2022
  2. Contrastive Learning as Goal-Conditioned Reinforcement Learning
    Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  3. Imitating Past Successes can be Very Suboptimal
    Benjamin Eysenbach, Soumith Udatha, Russ R Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  4. The Information Geometry of Unsupervised Reinforcement Learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2022
  5. Maximum Entropy RL (Provably) Solves Some Robust RL Problems
    Benjamin Eysenbach and Sergey Levine
    In International Conference on Learning Representations, 2022

2021

  1. Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
    Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2021
  2. C-Learning: Learning to Achieve Goals via Recursive Classification
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2021
  3. Robust Predictable Control
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2021

2019

  1. Search on the Replay Buffer: Bridging Planning and Reinforcement Learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2019
  2. Diversity is All You Need: Learning Skills without a Reward Function
    Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine
    In International Conference on Learning Representations, 2019