Brain-optimized models reveal increase in few-shot concept learning accuracy across human visual cortex
Ghislain St-Yves, Kendrick Kay, Thomas Naselaris, University of Minnesota, Minneapolis, United States
Poster Session 2
Pacific Ballroom H-O
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Humans, unlike machines, learn to distinguish and generalize to new visual concepts from very few observations, often without supervision. It is unclear what properties of representations in the human brain support such few-shot learning. One recent theory shows that few-shot generalization ability follows directly from the geometry of representations in brain activity space. Here, we characterize the representational geometry that arises from a DNN model optimized to predict brain activity in multiple visual areas across human visual cortex. Importantly, this brain-optimized DNN was not trained to solve a computer-vision task, so it does not bias our estimate of representational geometry in favor of any particular objective. We show that the outputs of the model, and by extension the brain, form a sequence of representations that support increasing few-shot learning accuracy as one ascends the visual hierarchy, and we identify specific geometric properties of the representations that support this ability.
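To make the evaluation concrete, the kind of few-shot accuracy measure described above can be sketched with a simple nearest-prototype classifier. The sketch below is an illustration under strong simplifying assumptions (each concept is modeled as an isotropic Gaussian cloud in representation space; it does not use the authors' stimuli, brain-optimized DNN, or the geometric theory's exact error formula): a prototype is averaged from a few example representations per concept, and held-out representations are classified by nearest prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

def few_shot_accuracy(mu_a, mu_b, n_shots=5, n_test=200, noise=1.0):
    """Nearest-prototype few-shot classification accuracy for two concepts.

    Each concept is modeled as an isotropic Gaussian cloud of
    representations around a mean vector -- a toy stand-in for
    stimulus-evoked response manifolds, not the paper's actual data.
    """
    d = mu_a.shape[0]
    # Prototype = mean of a few example representations per concept.
    proto_a = mu_a + noise * rng.standard_normal((n_shots, d)).mean(axis=0)
    proto_b = mu_b + noise * rng.standard_normal((n_shots, d)).mean(axis=0)
    # Held-out samples from each concept.
    test_a = mu_a + noise * rng.standard_normal((n_test, d))
    test_b = mu_b + noise * rng.standard_normal((n_test, d))
    correct = 0
    for samples, label in [(test_a, 0), (test_b, 1)]:
        dist_a = np.linalg.norm(samples - proto_a, axis=1)
        dist_b = np.linalg.norm(samples - proto_b, axis=1)
        pred = (dist_b < dist_a).astype(int)  # 0 -> concept A, 1 -> concept B
        correct += np.sum(pred == label)
    return correct / (2 * n_test)

# Two hypothetical concept means in a 50-dim representation space;
# larger separation relative to noise yields higher few-shot accuracy.
d = 50
mu_a = np.zeros(d)
mu_b = np.zeros(d)
mu_b[0] = 5.0
acc = few_shot_accuracy(mu_a, mu_b)
print(f"few-shot accuracy: {acc:.3f}")
```

In this toy setting, few-shot accuracy depends only on the geometry of the two concept clouds (separation, noise scale, dimensionality), which is the intuition behind relating representational geometry across visual areas to few-shot learning performance.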