A New Computational Framework for Estimating Spatio-temporal Population Receptive Fields in Human Visual Cortex
Insub Kim, Eline Kupers, Kalanit Grill-Spector, Stanford University, United States; Garikoitz Lerma-Usabiaga, Basque Center on Cognition, Brain and Language, Spain; Won Mok Shim, Sungkyunkwan University, Korea (South)
Session:
Posters 3 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
The visual cortex processes stimulus information over space and time. Conventional population receptive field (pRF) methods (Dumoulin & Wandell, 2008; Kay, Winawer, Mezer, & Wandell, 2013) successfully estimate spatial receptive fields in visual cortex. However, the characteristics of temporal receptive fields at the voxel level have not been determined. Here, we developed a compressive spatio-temporal (CST) pRF model that simultaneously estimates spatial (visual degrees) and temporal (milliseconds) receptive fields in each voxel of human visual cortex. To test computational validity and reproducibility, we developed simulation software that generates BOLD time series and systematically tests the performance of spatio-temporal pRF models. Simulations revealed that ground-truth spatio-temporal pRF parameters are accurately recovered from simulated BOLD time series. Using fMRI in 10 participants, we found that (i) the CST model predicts BOLD responses better than the conventional pRF model in all tested visual areas, (ii) temporal windows are larger at central than at peripheral eccentricities in all areas, and (iii) ascending the hierarchy of the ventral stream, temporal receptive field sizes and the contribution of transient responses progressively increase. Together, these results open an exciting new computational and empirical framework for modeling fine-grained spatio-temporal dynamics of neural responses in the human brain using fMRI.
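To make the forward model described above concrete, below is a minimal Python sketch of how a compressive spatio-temporal pRF could generate a simulated BOLD time series: a Gaussian spatial receptive field pools the stimulus, sustained and transient temporal channels filter the pooled drive, a compressive nonlinearity is applied, and the result is convolved with an HRF. This is an illustrative sketch only, not the authors' implementation; all parameter names and values (`x0`, `sigma`, `tau`, the exponent `n`, equal channel weights, the gamma-shaped IRF) are assumptions introduced here for demonstration.

```python
import numpy as np
from scipy.signal import fftconvolve

# --- Hypothetical parameters (illustration only, not values from the study) ---
dt = 0.01                        # stimulus frame duration in seconds (10 ms)
tr = 1.0                         # fMRI repetition time in seconds
x0, y0, sigma = 1.0, 0.5, 1.5    # spatial pRF center and size (deg of visual angle)
tau = 0.05                       # temporal window parameter (s), sets IRF width
n = 0.3                          # compressive exponent on channel responses

# Spatial grid covering the stimulus aperture (deg of visual angle)
xs = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(xs, xs)
spatial_rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

# Sustained temporal impulse response (gamma-like) and its time derivative
# as a simple stand-in for a transient channel.
t = np.arange(0, 1.0, dt)
irf_sustained = (t / tau) * np.exp(-t / tau)
irf_sustained /= irf_sustained.sum()
irf_transient = np.gradient(irf_sustained, dt)

def cst_prf_response(stimulus, hrf):
    """Predict a single voxel's BOLD time series.

    stimulus : array (n_frames, ny, nx), binary aperture per 10 ms frame
    hrf      : hemodynamic impulse response sampled at dt
    """
    # 1) Spatial pooling: project each stimulus frame onto the Gaussian pRF.
    drive = (stimulus * spatial_rf).sum(axis=(1, 2))

    # 2) Temporal filtering into sustained and transient channels.
    sustained = fftconvolve(drive, irf_sustained)[: len(drive)]
    transient = fftconvolve(drive, irf_transient)[: len(drive)]

    # 3) Static compressive nonlinearity per channel, then a weighted sum
    #    (equal weights assumed here for simplicity).
    neural = np.abs(sustained) ** n + np.abs(transient) ** n

    # 4) Convolve with the HRF and resample to the TR grid.
    bold = fftconvolve(neural, hrf)[: len(neural)]
    return bold[:: int(tr / dt)]
```

Under this kind of forward model, the parameter-recovery simulations described in the abstract would amount to generating synthetic BOLD time series with known (`x0`, `y0`, `sigma`, `tau`, `n`), adding noise, and checking that fitting the model recovers those ground-truth values.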