Do Convolutional Neural Networks Model Inferior Temporal Cortex Because of Perceptual or Semantic Features?
Anna Truzzi, Rhodri Cusack, Trinity College Dublin, Ireland
Session: Posters 3 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Convolutional neural networks (CNNs) have proven a valuable model of the inferior temporal cortex (IT) of the human brain. One seductive explanation is that CNNs and IT have learned similar category-specific image features (henceforth, semantic features). However, a recent study found that an untrained neural network with random weights modelled IT just as well as trained networks. This might be because the architecture of CNNs alone causes them to effectively extract basic perceptual features, which are also known to be represented in IT. Alternatively, untrained and trained networks may capture different aspects of the IT representation: perceptual and semantic features, respectively. Here we test this latter hypothesis using mediation models with perceptual and semantic mediators. We find that different networks, and different layers within the same network, capture distinct features of the representation in IT, highlighting the risk of relying solely on a superficial similarity metric when the aim is a deeper understanding of representations in the brain and in CNNs.
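To make the mediation logic concrete, below is a minimal illustrative sketch (not the authors' pipeline): it asks whether the representational similarity between a CNN layer and IT survives after controlling for a perceptual or a semantic mediator. The representational dissimilarity matrices (RDMs), the synthetic data, and the partial-correlation formulation are all assumptions made for illustration.

# Illustrative sketch only: does the CNN-IT similarity survive controlling
# for a perceptual or semantic mediator? All data here are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_stimuli = 50

def make_rdm(features):
    # Correlation-distance RDM from a (stimuli x features) matrix.
    return 1.0 - np.corrcoef(features)

def upper_triangle(rdm):
    # Vectorise the upper triangle of an RDM for comparison across models.
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

# Synthetic stand-ins for IT responses, a CNN layer, and the two mediators.
it_rdm = upper_triangle(make_rdm(rng.normal(size=(n_stimuli, 100))))
cnn_rdm = upper_triangle(make_rdm(rng.normal(size=(n_stimuli, 512))))
perceptual_rdm = upper_triangle(make_rdm(rng.normal(size=(n_stimuli, 20))))
semantic_rdm = upper_triangle(make_rdm(rng.normal(size=(n_stimuli, 20))))

def partial_corr(x, y, covariate):
    # Correlation between x and y after regressing the covariate out of both.
    design = np.column_stack([np.ones_like(covariate), covariate])
    def residual(v):
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return pearsonr(residual(x), residual(y))[0]

total = pearsonr(cnn_rdm, it_rdm)[0]
after_perceptual = partial_corr(cnn_rdm, it_rdm, perceptual_rdm)
after_semantic = partial_corr(cnn_rdm, it_rdm, semantic_rdm)

# A drop in similarity after controlling for a mediator suggests the CNN-IT
# correspondence is carried by that kind of feature.
print(f"total r = {total:.3f}")
print(f"controlling perceptual RDM: r = {after_perceptual:.3f}")
print(f"controlling semantic RDM:   r = {after_semantic:.3f}")

In this toy setup, comparing how much the CNN-IT correlation falls when the perceptual versus the semantic RDM is partialled out gives the kind of dissociation the abstract describes for untrained versus trained networks.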