The Cortical Representation of Linguistic Structures at Different Timescales is Shared between Reading and Listening
Catherine Chen, Tom Dupré la Tour, Jack Gallant, Daniel Klein, Fatma Deniz, UC Berkeley, United States
Session: Posters 1
Location: Pacific Ballroom H-O
Presentation Time: Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Evidence from functional neuroimaging experiments suggests that many cortical regions are activated in response to both written and spoken language. In these cortical regions, language is integrated across different timescales. However, it is unclear whether different linguistic timescales are represented in the same way for written and spoken language. To address this question, we compared cortical representations of different linguistic timescales during reading and listening. We re-analyzed fMRI data recorded while participants read and listened to the same set of natural language narratives (Huth et al., 2016; Deniz et al., 2019). For each modality, we estimated voxelwise encoding models to determine the linguistic timescale selectivity of each voxel. We then compared the linguistic timescale selectivity of individual voxels between reading and listening. We found that the cortical organization of linguistic timescale selectivity is highly similar between the two modalities. These results suggest that the human cortex contains a hierarchy of areas that each represent particular linguistic timescales, and that this hierarchy is largely independent of stimulus modality.
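The voxelwise encoding approach described above can be illustrated with a minimal sketch: fit one regression model per timescale feature space, then label each voxel with the timescale whose model predicts its responses best. This is a simplified stand-in, not the authors' actual pipeline (the study uses fMRI data and more sophisticated regularized regression); all variable names and the synthetic data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, Y, alpha=1.0):
    """Ridge regression weights: solve (X'X + alpha*I) W = X'Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Hypothetical stimulus features at two timescales and synthetic voxel data.
n_time, n_feat, n_vox = 200, 10, 50
X = {"short": rng.standard_normal((n_time, n_feat)),
     "long": rng.standard_normal((n_time, n_feat))}
true_selectivity = rng.integers(0, 2, n_vox)  # 0 = short, 1 = long
W_true = {k: rng.standard_normal((n_feat, n_vox)) for k in X}
# Each synthetic voxel responds to exactly one timescale's features.
Y = sum(X[k] @ W_true[k] * (true_selectivity == i)
        for i, k in enumerate(["short", "long"]))

# Fit one encoding model per timescale; a voxel's selectivity is the
# timescale whose model best predicts its response.
scores = []
for k in ["short", "long"]:
    pred = X[k] @ fit_ridge(X[k], Y)
    r = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1]
                  for v in range(n_vox)])  # per-voxel prediction accuracy
    scores.append(r)
selectivity = np.argmax(np.stack(scores), axis=0)
accuracy = (selectivity == true_selectivity).mean()
```

Comparing the resulting `selectivity` maps estimated separately from reading and listening data (e.g., by correlating them across voxels) is the kind of cross-modality comparison the abstract describes.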