Modelling inter-animal variability
Javier Sagastuy-Brena, Imran Thobani, Aran Nayebi, Rosa Cao, Dan Yamins, Stanford University, United States
Posters 1 (Poster)
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Accurately measuring similarity between different animals’ neural responses is a crucial step towards evaluating deep neural network (DNN) models of the brain. Under what transform class are animals likely to be similar to each other, and how much neural data must be collected to obtain an accurate similarity estimate? Using model variability as a proxy for inter-animal variability, we find that the stage at which similarity is measured has critical implications for the appropriate transform class. Specifically, we observe high linear mappability between pre-ReLU activations, whereas post-ReLU activations require a simple non-linear mapping class that combines logistic regression with linear regression. With our approach, we estimate that measuring inter-animal variability requires collecting neural data for at least 500 stimuli and 300 neurons from the same hypercolumn, providing a prescription for future experimental data that can adjudicate between models.