Neuro-Nav: A Library for Neurally-Plausible Reinforcement Learning
Arthur Juliani, Ida Momennejad, Microsoft Research, United States; Samuel Barnett, Princeton University, United States; Brandon Davis, Massachusetts Institute of Technology, United States; Margaret Sereno, University of Oregon, United States
Poster
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
In this work we propose Neuro-Nav, an open-source library for neurally plausible reinforcement learning (RL). RL is among the most common modeling frameworks for studying decision making, learning, and navigation in biological organisms. When using RL, cognitive scientists often handcraft environments and agents to meet the needs of their particular studies. Artificial intelligence researchers, on the other hand, often struggle to find benchmarks for neurally and biologically plausible representation and behavior (e.g., in decision making or navigation). To streamline this process across both fields with transparency and reproducibility, Neuro-Nav offers a set of standardized environments and RL algorithms drawn from canonical behavioral and neural studies in rodents and humans. We demonstrate that the toolkit replicates relevant findings from a number of studies across both the cognitive science and RL literatures, and can furthermore be extended with novel algorithms and environments to address future research needs of the field.
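To give a flavor of the kind of canonical task and algorithm such a toolkit standardizes, the sketch below implements tabular TD(0) value learning on a simple linear track, a staple of rodent navigation studies. This is an illustrative sketch only: the function and variable names are ours, not Neuro-Nav's actual API.

```python
import numpy as np

# Illustrative sketch (not Neuro-Nav's API): tabular TD(0) value learning
# on a 5-state linear track where only entering the final state is rewarded.

def td0_linear_track(n_states=5, alpha=0.1, gamma=0.9, episodes=200):
    """Learn state values under a fixed rightward policy via TD(0)."""
    V = np.zeros(n_states)
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            s_next = s + 1                           # deterministic rightward move
            r = 1.0 if s_next == n_states - 1 else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
            s = s_next
    return V

values = td0_linear_track()
# Learned values rise monotonically toward the rewarded end of the track,
# mirroring the ramping value signals reported in navigation studies.
```

The monotone value ramp toward the goal is exactly the sort of behavioral/neural signature that a standardized benchmark makes easy to reproduce and compare across agents.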