:fire: JaxGCRL: A new benchmark for goal-conditioned RL is blazing fast, letting you train at 1 million steps per minute on a single GPU. Experiments run so fast that the algorithm design process becomes interactive. Tools like this not only make research much more accessible (e.g., you can now run a bunch of interesting experiments in a free Colab notebook before the 90-minute timeout), but will also change how RL is taught (less fighting with dependencies, more experiments on complex tasks, less waiting for experiments to queue and finish); stay tuned for COS 435 this Spring!
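A rough sketch of the pattern behind these speeds (not JaxGCRL's actual API; `env_step` and `NUM_ENVS` are made-up names for illustration): JIT-compile the environment step and vmap it over thousands of parallel environments, so entire rollouts stay on the GPU with no Python loop overhead.

```python
# Minimal sketch of JIT + vmap over parallel environments.
# Toy dynamics only; this is not JaxGCRL's API.
import jax
import jax.numpy as jnp

NUM_ENVS = 4096  # hypothetical batch size; real throughput depends on the task


def env_step(state, action):
    # Toy point-mass dynamics standing in for a real goal-conditioned task.
    next_state = state + 0.01 * action
    reward = -jnp.linalg.norm(next_state)  # distance-to-goal-style reward
    return next_state, reward


# Vectorize the step over the environment batch, then JIT the whole thing.
batched_step = jax.jit(jax.vmap(env_step))

key = jax.random.PRNGKey(0)
states = jnp.zeros((NUM_ENVS, 2))
actions = jax.random.normal(key, (NUM_ENVS, 2))
states, rewards = batched_step(states, actions)  # one call = 4096 env steps
```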