Using Massive Individual fMRI Movie Data to Align Artificial and Brain Representations in an Auditory Network
Maëlle Freteault, Université de Montréal, Canada, and IMT Atlantique, France; Basile Pinsard, Julie Boyle, Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Canada; Pierre Bellec, Université de Montréal, Canada; Nicolas Farrugia, IMT Atlantique, France
Session:
Posters 3 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Abstract:
Artificial neural networks have opened new ways to investigate the brain activity evoked by rich stimuli such as movies. The traditional approach is to compare brain activity with the activations of an artificial neural network pre-trained on a vision or language task, e.g., image classification on ImageNet. However, a recent study in the language domain suggests that brain encoding can be substantially improved by fine-tuning a pre-trained network to directly predict brain activity. In this study, we develop such an end-to-end fine-tuning of a pre-trained auditory network (SoundNet) using a massive individual fMRI dataset of movie watching from the Courtois NeuroMod project. We found that fine-tuning led to consistent improvements in brain encoding, most prominently in the auditory cortex but also in visual cortices and other distributed brain areas. This work establishes a new approach to building computational models of human auditory processing.
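To make the end-to-end fine-tuning concrete, below is a minimal PyTorch sketch of the general technique the abstract describes: a pre-trained SoundNet-style 1-D convolutional audio backbone topped with a linear head that regresses parcellated fMRI time series, with gradients flowing through the whole network. All module names, layer sizes, the parcel count, and the dataset interface are illustrative assumptions, not details taken from the study.

```python
# Hedged sketch of end-to-end brain encoding: a pretrained 1-D conv audio
# backbone (stand-in for SoundNet) plus a linear head predicting fMRI
# parcel activity. Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    """Stand-in for a pretrained SoundNet-style 1-D conv backbone."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=64, stride=2, padding=32),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=32, stride=2, padding=16),
            nn.ReLU(),
            nn.Conv1d(128, feat_dim, kernel_size=16, stride=2, padding=8),
            nn.ReLU(),
        )

    def forward(self, wav):        # wav: (batch, 1, samples)
        h = self.conv(wav)         # (batch, feat_dim, time)
        return h.mean(dim=-1)      # pool over time -> one vector per window

class BrainEncoder(nn.Module):
    """Backbone + linear head mapping audio to one parcellated fMRI volume."""
    def __init__(self, n_parcels=210, feat_dim=512):
        super().__init__()
        self.backbone = AudioEncoder(feat_dim)  # pretrained weights would be loaded here
        self.head = nn.Linear(feat_dim, n_parcels)

    def forward(self, wav):
        return self.head(self.backbone(wav))

model = BrainEncoder()
# Optimizing ALL parameters (not just the head) is what makes this
# "end-to-end" fine-tuning rather than a fixed-feature encoding model.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One illustrative training step on fake (audio window, fMRI volume) pairs.
wav = torch.randn(8, 1, 22050)   # 8 audio windows, ~1 s at 22.05 kHz (assumed)
bold = torch.randn(8, 210)       # matching parcel-averaged BOLD targets
opt.zero_grad()
loss = loss_fn(model(wav), bold)
loss.backward()                  # gradients reach the pretrained backbone too
opt.step()
```

In this kind of setup, encoding quality is typically evaluated by computing the Pearson correlation between predicted and measured BOLD signals per parcel on held-out movie runs; the per-region correlation map is what would reveal improvements in auditory and visual cortices.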