Modeling naturalistic face processing in humans with deep convolutional neural networks
Guo Jiahui, Ma Feilong, James V. Haxby, Dartmouth College, United States; Matteo Visconti di Oleggio Castello, University of California, Berkeley, United States; Samuel A. Nastase, Princeton University, United States; M. Ida Gobbini, Università di Bologna, Italy
Posters 2 (Poster)
Pacific Ballroom H-O
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Deep convolutional neural networks (DCNNs) trained for face identification can rival and even exceed human-level performance. However, the relationships between the internal representations learned by DCNNs and those of the primate face processing system are not well understood, especially in naturalistic settings. To investigate this problem, we developed the largest naturalistic dynamic face stimulus set in human neuroimaging research (700+ video clips of unfamiliar faces). DCNN representational geometries were weakly but significantly correlated with neural response geometries across the human face processing system. Intermediate layers matched visual and face-selective cortices, as well as behavioral similarity judgments, better than the final fully-connected layers did. Our results showed that DCNNs captured only a small portion of the rich information present in the neural representations during naturalistic face viewing. Future artificial neural networks trained with more ecological objective functions may help advance artificial intelligence toward the ultimate goal of mimicking human intelligence in naturalistic, real-world scenarios.
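Comparing a DCNN layer's representational geometry with a brain region's response geometry is typically done via representational similarity analysis: build a pairwise dissimilarity structure for each representation over the same stimuli, then rank-correlate the two. A minimal sketch with random placeholder data (the array names, sizes, and distance metric are illustrative assumptions, not the authors' actual pipeline):

```python
# Sketch of representational similarity analysis (RSA) for comparing a DCNN
# layer with a brain region. All data here is random and purely illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 20, 128, 50  # hypothetical sizes

# One row per face stimulus (e.g., per video clip)
dcnn_features = rng.standard_normal((n_stimuli, n_units))     # one DCNN layer
neural_patterns = rng.standard_normal((n_stimuli, n_voxels))  # one brain region

# Representational "geometries": condensed vectors of pairwise
# correlation distances between stimulus response patterns
dcnn_rdm = pdist(dcnn_features, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# Second-order comparison: rank-correlate the two geometries
rho, p = spearmanr(dcnn_rdm, neural_rdm)
print(f"RSA correlation: rho = {rho:.3f}")
```

Repeating this for each DCNN layer and each cortical region yields the layer-by-region pattern of correspondence summarized in the abstract (e.g., intermediate layers matching face-selective cortices better than fully-connected layers).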