Accurate implementation of computational neuroscience models through neural ODEs
Sabine Muzellec, CerCO CNRS; Brown University, France; Mathieu Chalvidal, CerCO CNRS; Brown University; ANITI, France; Thomas Serre, Brown University; ANITI, United States; Rufin VanRullen, CerCO CNRS; ANITI, France
Contributed Talks 4 Lecture
Grand Ballroom A-C
Sun, 28 Aug, 12:05 - 12:25 Pacific Time (UTC -7)
Computational neuroscience models are usually defined by differential equations representing their dynamics. One recently popular way to implement such models at large scale is to use deep learning tools and their associated optimization methods. However, to train models with these approaches, the differential equations must be discretized and sometimes adapted, which can affect performance and decrease dynamical precision. Here, we show that neural Ordinary Differential Equations (neural ODEs), a framework recently introduced in the machine learning community to allow end-to-end training of ODEs, are better suited for large-scale implementation of computational neuroscience models. Neural ODEs are compatible with a wide choice of strategies for solving the differential equations. For instance, the prevailing deep learning approach essentially amounts to using a solver based on Euler discretization, combined with back-propagation training; but more precise solvers and training approaches can be employed. We compare the performance of different neural ODE implementations of two computational neuroscience models on vision tasks: (i) the original (Rao-Ballard) predictive coding model, which characterizes cortico-cortical feedback in the visual system, and (ii) the hGRU (horizontal gated recurrent unit), a model of classical and extra-classical receptive fields in the visual cortex. In both cases, our results show that the standard deep learning method is sub-optimal, and that neural ODEs with higher-order adaptive solvers can improve the performance and stability of the models.
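The contrast the abstract draws between Euler discretization and higher-order solvers can be sketched with a toy example (not the paper's models): on the simple test dynamics dy/dt = -y, a classical 4th-order Runge-Kutta solver is far more accurate than forward Euler at the same number of steps.

```python
import math

def f(t, y):
    # Toy dynamics dy/dt = -y, with exact solution y(t) = y0 * exp(-t).
    # Stands in for a model's differential equation; not the paper's models.
    return -y

def euler(f, y0, t0, t1, steps):
    # Forward Euler: the discretization implicitly used by standard
    # deep-learning recurrent implementations of dynamical models.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, y0, t0, t1, steps):
    # Classical 4th-order Runge-Kutta: a higher-order solver of the kind
    # neural ODE frameworks make available.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 0.0, 1.0, 20) - exact)
err_rk4 = abs(rk4(f, 1.0, 0.0, 1.0, 20) - exact)
# With 20 steps, RK4's error is orders of magnitude below Euler's.
```

In practice, neural ODE libraries additionally offer adaptive step-size control, so precision is maintained without hand-tuning the number of steps.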