Gaze-centered spatial representations in human hippocampus
Zitong Lu, Julie Golomb, The Ohio State University, United States; Anna Shafer-Skelton, University of Pennsylvania, United States
Session:
Posters 3
Location:
Pacific Ballroom H-O
Presentation Time:
Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
As we move our eyes around the world, we are able to integrate visual input and achieve a stable visual percept across eye movements. However, previous studies have found that our visual system, from primary visual cortex to higher-level visual regions, represents object locations natively in retinotopic (gaze-centered) rather than spatiotopic (gaze-independent) coordinates. Is spatiotopic information represented elsewhere in the brain, or might we achieve gaze-independent behavior via other means? Two key properties of the hippocampus make it an ideal candidate area in which to search for spatiotopic information: its responsiveness to visual information and its role in other types of complex spatial processing. In this study, we manipulated fixation and stimulus locations in an object perception task and used functional MRI (fMRI) to record participants' brain activity. We then used correlation-based multi-voxel pattern analysis (MVPA) and representational similarity analysis (RSA) to explore the representation of object location and investigate a potential role for the human hippocampus in visual stability. We found significant retinotopic, but not spatiotopic, information not only in LOC and PPA (consistent with prior findings), but also in the hippocampus. These results reveal that the hippocampus also encodes gaze-centered spatial information, extending findings that the native coordinate system of vision might be retinotopic throughout the brain, with other mechanisms responsible for achieving gaze-independent behavior.
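To make the analysis logic concrete, below is a minimal sketch of correlation-based MVPA and model-based RSA of the kind the abstract describes. All specifics here are illustrative assumptions, not the authors' pipeline: synthetic voxel patterns stand in for real fMRI data, the four conditions are a hypothetical 2 (fixation position) x 2 (stimulus position) design, and the retinotopic/spatiotopic model RDMs encode which condition pairs share retinal versus screen position under that assumed design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: fixation F in {-1, +1} x stimulus S in {-1, +1},
# giving 4 conditions ordered (F-1,S-1), (F-1,S+1), (F+1,S-1), (F+1,S+1).
# Retinal position = S - F, so conditions 0 and 3 share retinal position;
# conditions (0,2) and (1,3) share screen (spatiotopic) position.
# Patterns are (n_conditions, n_voxels), split into two independent data
# halves (e.g., odd/even runs). Synthetic data for illustration only.
n_vox = 50
base = rng.normal(size=(4, n_vox))
half1 = base + 0.3 * rng.normal(size=(4, n_vox))
half2 = base + 0.3 * rng.normal(size=(4, n_vox))

def correlation_mvpa(a, b):
    """Correlation-based MVPA (Haxby-style): mean within-condition minus
    mean between-condition pattern correlation across independent halves."""
    n = len(a)
    c = np.corrcoef(a, b)[:n, n:]          # cross-half correlation matrix
    within = np.mean(np.diag(c))           # same condition across halves
    between = np.mean(c[~np.eye(n, dtype=bool)])  # different conditions
    return within - between

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(patterns)

score = correlation_mvpa(half1, half2)

# RSA: correlate the neural RDM's upper triangle with each model RDM.
# Model RDMs follow from the assumed design above: 0 = similar, 1 = dissimilar.
neural_rdm = rdm((half1 + half2) / 2)
retino_model = np.array([[0, 1, 1, 0],
                         [1, 0, 1, 1],
                         [1, 1, 0, 1],
                         [0, 1, 1, 0]], float)
spatio_model = np.array([[0, 1, 0, 1],
                         [1, 0, 1, 0],
                         [0, 1, 0, 1],
                         [1, 0, 1, 0]], float)
iu = np.triu_indices(4, k=1)
r_retino = np.corrcoef(neural_rdm[iu], retino_model[iu])[0, 1]
r_spatio = np.corrcoef(neural_rdm[iu], spatio_model[iu])[0, 1]
print(f"MVPA score: {score:.3f}, RSA retino r: {r_retino:.3f}, spatio r: {r_spatio:.3f}")
```

In a real analysis, the MVPA score and RSA model correlations would be computed per region (e.g., LOC, PPA, hippocampus) and tested against zero across participants; retinotopic coding predicts higher correlation with the retinotopic model RDM than with the spatiotopic one.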