Opportunistic Experiments on a Large-Scale Survey of Diverse Artificial Vision Models in Prediction of 7T Human fMRI Data
Colin Conwell, Jacob Prince, George Alvarez, Talia Konkle, Harvard University, United States; Kendrick Kay, University of Minnesota, United States
Posters 1 (Poster)
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
What can we learn from large-scale comparisons between deep neural network models and brain responses? Model-to-brain benchmarking approaches (e.g. BrainScore) typically seek the single most predictive model of a biological system. Here, we take a different approach, performing opportunistic experiments over pretrained models to examine whether controlled variation in learning pressures from architecture, task, and input yields better or worse correspondence to brain data. We survey the accuracy of 197 models in predicting the responses of 29,842 ventral stream voxels from the 7T fMRI Natural Scenes Dataset, performing targeted comparisons in architecture (e.g. CNNs versus Transformers), task (e.g. SimCLR versus CLIP versus SLIP), and input (e.g. ImageNet versus VGGFace), with both weighted and unweighted representational similarity analysis. Counterintuitively, we find that brain predictivity is broadly unaffected by changes in inductive biases (e.g. architecture or training objective), and instead depends strongly on the brain-to-model mapping method employed, as well as on the apparent diversity of the input data used for training.
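The unweighted representational similarity analysis mentioned above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for the model features and another for the voxel responses, then correlate their upper triangles. This is a minimal illustrative sketch, not the study's actual pipeline; the function names and toy data are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(responses):
    """RDM: 1 - Pearson correlation between the response patterns
    (rows) for each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(model_features, voxel_responses):
    """Unweighted RSA: Spearman correlation between the upper
    triangles of the model RDM and the brain RDM."""
    m, b = rdm(model_features), rdm(voxel_responses)
    iu = np.triu_indices_from(m, k=1)  # exclude the zero diagonal
    return spearmanr(m[iu], b[iu]).correlation

# Toy example: 20 stimuli, 50 model units, 30 "voxels" whose
# responses are a noisy copy of part of the model's feature space.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))
Y = X[:, :30] + 0.1 * rng.standard_normal((20, 30))
print(rsa_score(X, Y))  # high score: the representations are aligned
```

The weighted variant referenced in the abstract would additionally fit per-feature (or per-voxel) weights before computing the RDM correlation; the unweighted version above compares the raw representational geometries directly.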