My research aims to develop principled reinforcement learning (RL) algorithms that obtain state-of-the-art performance with a higher degree of simplicity, scalability, and robustness than current methods. Much of my work uses ideas from probabilistic inference to make progress on important problems in RL (e.g., long-horizon and high-dimensional reasoning, robustness, exploration).
Bio: I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD.
Join us! I am hiring new students at all levels, a postdoc, and a grant manager. Read this before emailing me.
news
Mar 18, 2024
Welcome to Princeton PhD Visit Days! If you’re visiting and want to learn more about the labs doing reinforcement learning (RL), shoot me an email or drop by the meeting with AI/ML faculty.
The aim is to highlight a small subset of the work done in the group, and to give a sense for the sorts of problems that we're working on. Please see Google Scholar for a complete and up-to-date list of publications.
2024
Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View
Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach
In The Twelfth International Conference on Learning Representations, 2024