Linking human and artificial neural representations underlying face recognition: Insights from MEG and CNNs
Hamza Abdelhedi, Karim Jerbi, University of Montreal, Canada
Posters 1 Poster
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Mounting evidence suggests that biological and artificial neural networks trained on similar tasks can exhibit remarkable functional similarities. In particular, Convolutional Neural Networks (CNNs) trained on object recognition have been shown to learn representations that model the processing hierarchy of the human visual system. But what about the specific case of face recognition? Would CNNs trained on face recognition learn representations that capture the dynamics of the neural circuits that mediate face recognition? Here, we investigated the putative similarities between the representations learned by three CNN architectures trained on face recognition (FaceNet, ResNet50, and CORnet-S) and brain patterns assessed with magnetoencephalography (MEG) recorded during a face recognition task. We found that the neuromagnetic brain signals (especially in visual areas) were correlated with activations in multiple CNN layers. However, these correlations were not as strong as those observed in the more general task of object recognition. This may suggest that CNNs trained on face recognition capture only a small portion of the complexity of the brain patterns associated with face recognition. Our results contribute to an emerging stream of research that seeks to probe face recognition mechanisms through joint exploration of the associated neural representations in brains and machines.
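The abstract does not specify the exact analysis pipeline, but a standard way to relate MEG activity to CNN layer activations is a representational-similarity-style comparison: build a representational dissimilarity matrix (RDM) from a CNN layer's activations to the face stimuli, build a neural RDM from the MEG sensor patterns at each time point, and correlate the two over time. The sketch below illustrates this with random stand-in data; the array shapes, the correlation-distance metric, and the sensor-space analysis are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def rdm(patterns):
    """Upper triangle of a correlation-distance RDM for (items, features) data."""
    c = np.corrcoef(patterns)                    # item-by-item similarity
    iu = np.triu_indices(len(patterns), k=1)     # unique pairs only
    return 1.0 - c[iu]

rng = np.random.default_rng(0)
n_images, n_sensors, n_times, n_units = 20, 30, 50, 128

# Stand-in data: a real analysis would use recorded MEG epochs and
# activations extracted from a layer of FaceNet / ResNet50 / CORnet-S.
meg = rng.standard_normal((n_images, n_sensors, n_times))
layer_act = rng.standard_normal((n_images, n_units))

# Model RDM: pairwise dissimilarity between the CNN layer's responses.
model_rdm = rdm(layer_act)

# Neural RDM at each time point, correlated with the model RDM,
# yielding a time course of brain-model representational similarity.
time_course = np.array([
    np.corrcoef(model_rdm, rdm(meg[:, :, t]))[0, 1]
    for t in range(n_times)
])
print(time_course.shape)
```

Repeating this for each layer of each network gives the layer-wise brain-model correlations the abstract refers to; with random inputs the time course hovers near zero, whereas structured data would show stimulus-locked peaks.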