Latent dimensionality scales with the performance of deep learning models of visual cortex
Eric Elmoznino, Michael Bonner, Johns Hopkins University, United States
Session: Posters 3 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC-8)
Abstract:
Deep learning models of brain systems are often characterized by their training paradigms and architectural constraints. An alternative perspective explains neural networks in terms of the geometry of their latent representational manifolds, treating training procedures and architectures as indirect causal factors. Here, we examined deep neural network models of visual cortex in terms of their core geometric properties by quantifying the latent dimensionality of their responses to natural images. We hypothesized that latent dimensionality governs expected model performance when predicting brain activity. We assessed how accurately these networks predict image-evoked activity patterns in visual cortex using both monkey electrophysiology and human fMRI. Our findings reveal a striking dimensionality effect, whereby higher-dimensional models produce higher-fidelity predictions of cortical responses to held-out stimuli. This phenomenon runs counter to the prevailing view that hierarchical visual computations compress dimensionality, and it suggests that latent dimensionality is a governing principle of deep learning in visual neuroscience.
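The abstract does not specify how latent dimensionality or prediction accuracy were computed. As a minimal sketch under assumed conventions, the snippet below estimates effective dimensionality with the participation ratio of the activation eigenspectrum and scores brain prediction with a cross-validated ridge-regression encoding model; the function names, the choice of participation ratio, and the ridge/correlation scoring are illustrative assumptions, not the authors' stated pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def effective_dimensionality(activations):
    """Participation ratio of the activation covariance eigenspectrum.

    activations: (n_images, n_units) array of model responses to natural images.
    Returns a value between 1 and n_units. (Assumed metric; the abstract does
    not name the dimensionality estimator used.)
    """
    # PCA explained variances are the eigenvalues of the response covariance.
    lam = PCA().fit(activations).explained_variance_
    return lam.sum() ** 2 / np.sum(lam ** 2)

def encoding_accuracy(activations, brain_responses, alpha=1.0, seed=0):
    """Predict held-out brain responses from model activations (assumed setup).

    brain_responses: (n_images, n_channels) array of fMRI voxel or
    electrophysiology responses to the same images.
    Returns the mean Pearson correlation across channels on held-out stimuli.
    """
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        activations, brain_responses, test_size=0.2, random_state=seed
    )
    model = Ridge(alpha=alpha).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    # Correlate predicted and observed responses per channel, then average.
    r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y_te.shape[1])]
    return float(np.nanmean(r))
```

Under this sketch, the reported dimensionality effect corresponds to a positive relationship between `effective_dimensionality` and `encoding_accuracy` across candidate models.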