Common Encoding Axes for Face Selectivity and Non-face Objects in Macaque Face Cells
Kasper Vinken, Margaret Livingstone, Harvard Medical School, United States; Talia Konkle, Harvard University, United States
Posters 1 (Poster)
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Face cells are neurons that generally respond more to faces than to non-faces, leading to the widespread belief that they are specifically and uniquely involved in face processing. Most face-cell studies have used only faces to characterize and model face-cell tuning, with no regard for non-face response variability (e.g., Freiwald, Tsao, & Livingstone, 2009; Chang et al., 2021). Here we ask whether non-face responses in macaque inferotemporal (IT) cortex contain information about face-cell tuning that cannot be characterized with faces alone. We found that the response structure for non-faces predicts a neural site’s face versus non-face selectivity better than the response structure for faces does. The link between non-face responses and face selectivity was explained not by color or intuitive shape features, but by complex image statistics encoded in higher layers of ImageNet-trained DNNs. We further show that face cells do not encode the degree to which an object looks like a face. Overall, our work contradicts the assumption that face cells owe their face selectivity to face-specific information, and instead supports the notion that category-selective neurons are best understood as tuning directions in a domain-general object space.