Dec 1, 2025 | Princeton RL @ NeurIPS 2025! I'm excited to share the progress we've made in RL algorithms (and the many problems still unsolved). We're presenting preliminary work at several of the workshops:
- Low-Rank Successor Representations Capture Human-Like Generalization. Led by Eva Yi Xie, with Nathaniel D. Daw. (Unifying Representations in Neural Models)
- Structured Response Diversity with Mutual Information. Led by Devan Shah, Owen Yang, and Daniel Yang, with Chongyi Zheng. (Scaling Environments for Agents)
- Demystifying Emergent Exploration in Goal-Conditioned RL. Led by Mahsa Bastankhah and Grace Liu, with Dilip Arumugam and Thomas L. Griffiths. (Aligning Reinforcement Learning Experimentalists and Theorists; Interpreting Cognition in Deep Learning Models)
- Training LLM Agents to Empower Humans. Evan Ellis, Vivek Myers, Jens Tuyls, Sergey Levine, Anca Dragan. (Deep Learning for Code)
- Unsupervised Contrastive Goal Reaching. Led by Ahmed Turkman, with Raj Ghugare. (Aligning Reinforcement Learning Experimentalists and Theorists)
- Combinatorial Representations for Temporal Reasoning. Led by Alicja Ziarko, with Michał Bortkiewicz, Michał Zawalski, Piotr Miłoś. (Differentiable Learning of Combinatorial Algorithms; Unifying Representations in Neural Models; WiML)
- Horizon Reduction Makes Offline RL Scalable. Led by Seohong Park, with Kevin Frans, Deepinder Mann, Aviral Kumar, Sergey Levine. (Aligning Reinforcement Learning Experimentalists and Theorists)
Finally, I'm organizing a NeurIPS 2025 workshop, Data on the Brain & Mind, together with Eva Yi Xie, Catherine Ji, Vivek Myers, Archer Wang, Mahsa Bastankhah, Jenelle Feather, Erin Grant, and Richard Gao, on December 7th.