Scaling up the Evaluation of Recurrent Neural Network Models for Cognitive Neuroscience
Nathan Cloos, Guangyu Robert Yang, Christopher J. Cueva, Massachusetts Institute of Technology, United States; Moufan Li, Tsinghua University, China
Session: Posters 2 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Neural networks are now widely used for modeling neural activity in the brain. They have been particularly successful in modeling the visual system, using mostly feedforward networks and leveraging community-wide efforts centered around benchmarks to both improve model architectures and evaluate model fits to data. Now that recurrent neural networks (RNNs) are also used to model a wider variety of brain functions, there is a similar need to develop appropriate metrics for comparing these models. Towards this goal, we have built a high-throughput pipeline for training different RNN models on a wide range of tasks and comparing them to experimental datasets through a variety of analysis methods. We find quantitative similarity scores that agree with qualitative similarity, and then use these quantitative scores to identify more general insights that are common to our best RNN models across tasks and brain regions. Our results also suggest limitations in existing models and ways they can be improved. For example, we find that models trained with more realistic outputs are more similar to the neural data. We think this kind of systematic model evaluation pipeline, which includes multiple types of comparison methods, constitutes an important step towards multi-system models of the brain.
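
The abstract describes computing quantitative similarity scores between RNN models and experimental datasets but does not name the specific comparison methods used. As an illustration only, the sketch below scores similarity with linear centered kernel alignment (CKA), one commonly used metric in this literature; the function name, array names, and data shapes are hypothetical placeholders, not taken from the paper.

```python
# Minimal sketch of one step in a model-to-data comparison pipeline:
# scoring similarity between RNN hidden states and recorded neural activity
# with linear CKA. Data here are random placeholders; the paper's actual
# metrics, datasets, and shapes may differ.
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two (samples x units) matrices."""
    X = X - X.mean(axis=0, keepdims=True)  # center each unit over samples
    Y = Y - Y.mean(axis=0, keepdims=True)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Illustrative data: 500 time points, 128 RNN units, 64 recorded neurons.
rng = np.random.default_rng(0)
rnn_states = rng.standard_normal((500, 128))
neural_data = rng.standard_normal((500, 64))
print(f"CKA similarity: {linear_cka(rnn_states, neural_data):.3f}")
```

In a high-throughput setting like the one described, a score of this kind would typically be computed for every combination of trained model, task, and recorded brain region, so that models can be ranked and common properties of the best-fitting models identified.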