My research aims to develop principled reinforcement learning (RL) algorithms that obtain state-of-the-art performance with a higher degree of simplicity, scalability, and robustness than current methods. Much of my work uses ideas from probabilistic inference to make progress on important problems in RL (e.g., long-horizon and high-dimensional reasoning, robustness, exploration).
Bio: I did my PhD in machine learning at CMU, advised by Ruslan Salakhutdinov and Sergey Levine and supported by the NSF GRFP and the Hertz Fellowship. I spent a number of years at Google Brain/Research before and during my PhD.
Join us! I am hiring new students at all levels, a postdoc, and a grant manager. Read this before emailing me.
news
May 1, 2024
In Fall 2024, I’ll be teaching an independent work seminar on unsupervised RL (COS 397-S06 for juniors, COS 497-S06 for seniors). If interested, please join the waitlist.
The aim is to highlight a small subset of the work done in the group, and to give a sense of the sorts of problems we're working on. Please see Google Scholar for a complete and up-to-date list of publications.
2024
Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View
Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach
In The Twelfth International Conference on Learning Representations, 2024