Benjamin Eysenbach

Assistant Professor of Computer Science at Princeton University.

Room 416

35 Olden St

Princeton NJ 08544

eysenbach@princeton.edu

My research aims to develop principled reinforcement learning (RL) algorithms that obtain state-of-the-art performance with a higher degree of simplicity, scalability, and robustness than current methods. Much of my work uses ideas from probabilistic inference to make progress on important problems in RL (e.g., long-horizon and high-dimensional reasoning, robustness, exploration).

Bio: I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD.

Join us! I am hiring new students at all levels, a postdoc, and a grant manager. Read this before emailing me.

news

Mar 18, 2024 Welcome to Princeton PhD Visit Days! If you’re visiting and want to learn more about the labs doing reinforcement learning (RL), shoot me an email or drop by the meeting with AI/ML faculty.
Mar 11, 2024 New paper on planning by interpolation! With Vivek Meyers.
Nov 1, 2023 Excited to share work that will be presented at ICLR 2024!
Sep 1, 2023 In Spring 2024 I'll be teaching the new undergraduate Introduction to RL course (COS 435 / ECE 433), together with Mengdi Wang!

selected publications

This list highlights a small subset of the group's work, to give a sense of the sorts of problems we're working on. Please see Google Scholar for a complete and up-to-date list of publications.

2024

  1. raj_augmentation.png
    Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View
Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  2. td_cpc.png
    Contrastive Difference Predictive Coding
    Chongyi Zheng, Ruslan Salakhutdinov, and Benjamin Eysenbach
    In The Twelfth International Conference on Learning Representations, 2024
  3. stabilizing.gif
    Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
    Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, and Sergey Levine
    In The Twelfth International Conference on Learning Representations, 2024

2023

  1. cvl.png
Contrastive Value Learning: Implicit Models for Simple Offline RL
    Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, and Jonathan Tompson
    In Conference on Robot Learning, 2023
  2. alm.png
    Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective
    Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov
    In International Conference on Learning Representations, 2023
  3. ac_connection.png
    A Connection between One-Step RL and Critic Regularization in Reinforcement Learning
    Benjamin Eysenbach, Matthieu Geist, Sergey Levine, and Ruslan Salakhutdinov
    In International Conference on Machine Learning, 2023
  4. laeo.png
    Contrastive Example-Based Control
    Kyle Beltran Hatch, Benjamin Eysenbach, Rafael Rafailov, Tianhe Yu, Ruslan Salakhutdinov, Sergey Levine, and Chelsea Finn
    In Learning for Dynamics and Control Conference, 2023
  5. thesis.png
    Probabilistic Reinforcement Learning: Using Data to Define Desired Outcomes, and Inferring How to Get There
    Benjamin Eysenbach
    PhD Thesis, Carnegie Mellon University, 2023

2022

  1. mnm_lower_bound.gif
    Mismatched No More: Joint Model-Policy Optimization for Model-Based RL
    Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2022
  2. contrastive.gif
    Contrastive Learning as Goal-Conditioned Reinforcement Learning
    Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  3. ocbc.gif
    Imitating Past Successes can be Very Suboptimal
    Benjamin Eysenbach, Soumith Udatha, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2022
  4. info_geometry.gif
    The Information Geometry of Unsupervised Reinforcement Learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2022
  5. maxent_robust.gif
    Maximum Entropy RL (Provably) Solves Some Robust RL Problems
    Benjamin Eysenbach, and Sergey Levine
    In International Conference on Learning Representations, 2022

2021

  1. rce.gif
    Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
    Benjamin Eysenbach, Sergey Levine, and Ruslan Salakhutdinov
    In Advances in Neural Information Processing Systems, 2021
  2. c_learning_sawyer.gif
    C-Learning: Learning to Achieve Goals via Recursive Classification
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In International Conference on Learning Representations, 2021
  3. rpc_teaser.gif
    Robust Predictable Control
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2021

2019

  1. sorb.png
    Search on the replay buffer: Bridging planning and reinforcement learning
    Benjamin Eysenbach, Ruslan Salakhutdinov, and Sergey Levine
    In Advances in Neural Information Processing Systems, 2019