A contextual encoding model for human ECoG responses to a spoken narrative
Kristijan Armeni, Christopher Honey, Johns Hopkins University, United States; Tal Linzen, New York University, United States
Poster Session 1
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Language understanding depends on context at multiple levels of linguistic granularity. How does this context-dependence differ across the cortical regions supporting language processing? Recently, it has been shown that electrophysiological responses to narrative stimuli can be predicted by contextualized word vector representations extracted from next-word prediction models. Here, we set out to apply and extend this approach within an electrocorticography (ECoG) dataset of 9 participants listening to a 7-minute narrative. For each word in the story, we predicted the neural response based on: (i) sensory features; (ii) non-contextualized word vectors; and (iii) contextualized word vectors, with the preceding context scrambled at the word, sentence, and paragraph levels. We show that contextualized embeddings, on average, are better predictors of broadband high-frequency (70+ Hz) power responses than non-contextualized embeddings. Moreover, the improved encoding performance of contextualized embeddings specifically depended on the preceding context being provided intact to the model. These initial results provide the basis for mapping the timescale of context-dependence (word, sentence, and paragraph level) for each intracranial site across the cortical surface.
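The core comparison in the abstract — whether contextualized embeddings predict neural power responses better than non-contextualized ones — is typically carried out with a regularized linear encoding model. The sketch below illustrates that logic on synthetic data using closed-form ridge regression; the feature matrices, the simulated "power" signal, and the helper `ridge_r` are all illustrative assumptions, not the authors' actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, d_embed = 200, 16

# Hypothetical stand-ins for the two feature spaces: static (non-contextualized)
# embeddings, and contextualized embeddings that additionally carry context signal.
static_emb = rng.standard_normal((n_words, d_embed))
context_signal = rng.standard_normal((n_words, d_embed))
contextual_emb = static_emb + context_signal

# Simulated per-word broadband high-frequency power: driven by the full
# contextualized representation, plus noise. (Purely synthetic, for illustration.)
w_true = rng.standard_normal(d_embed)
power = contextual_emb @ w_true + 0.5 * rng.standard_normal(n_words)

def ridge_r(X, y, alpha=1.0, split=150):
    """Fit ridge regression on a training split; return Pearson r on the held-out split."""
    Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]
    beta = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ beta, yte)[0, 1]

r_static = ridge_r(static_emb, power)
r_contextual = ridge_r(contextual_emb, power)
print(f"static r = {r_static:.2f}, contextual r = {r_contextual:.2f}")
```

In this toy setup the contextualized features recover more of the signal because the simulated response depends on context by construction; in the study itself, the analogous gap is what is measured per electrode, and the context-scrambling controls test how far back the useful context extends.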