Contextual Influences on the Perception of Motion and Depth
Zhe-Xin Xu, Greg DeAngelis, University of Rochester, United States
Poster Session 1
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
One of the fundamental tasks of the visual system is to infer the motion and depth of objects from their 2D retinal images. This inference is often complicated by the observer’s body and eye movements. Depending on the viewing geometry, the combination of retinal image motion and eye movements can support different computations: summation, to compute object motion in the world (“coordinate transformation”, CT), or division, to infer the depth of the object (“depth from motion parallax”, MP). We investigated how the same signals, retinal motion and eye movements, mediate the perception of motion and depth under different viewing contexts in both humans and recurrent neural networks (RNNs). We asked human subjects to estimate the motion and depth of an object while we simulated different viewing contexts with optic flow, and we found distinct patterns of bias between the CT and MP contexts. Furthermore, an RNN trained on the same tasks represents task-relevant variables in a manner consistent with our previous findings on neural responses in area MT. Our study demonstrates that the interaction between retinal and eye velocities can lead to very different percepts depending on the interpretation of the viewing context, and our RNN model provides novel predictions for neural representations.
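The contrast between the summation (CT) and division (MP) computations described in the abstract can be sketched minimally as follows. The function names, units, and sign conventions below are illustrative assumptions, not details taken from the study; the MP computation is shown in its simplest form, as a ratio of retinal to eye velocity yielding a relative (dimensionless) depth quantity.

```python
def world_velocity_ct(retinal_velocity, eye_velocity):
    """Coordinate transformation (CT): recover object motion in the
    world by summing retinal image velocity and eye velocity.
    Velocities are assumed to share units (e.g., deg/s) and sign."""
    return retinal_velocity + eye_velocity


def relative_depth_mp(retinal_velocity, eye_velocity):
    """Depth from motion parallax (MP): relative depth is taken to be
    proportional to the ratio of retinal image velocity to eye
    (pursuit) velocity; undefined when the eye is stationary."""
    if eye_velocity == 0:
        raise ValueError("MP depth is undefined without an eye movement")
    return retinal_velocity / eye_velocity


# The same pair of signals yields different quantities under the two
# interpretations of the viewing context:
retinal, eye = 2.0, 4.0          # deg/s (illustrative values)
print(world_velocity_ct(retinal, eye))   # summation -> 6.0
print(relative_depth_mp(retinal, eye))   # division  -> 0.5
```

This makes the abstract's central point concrete: identical retinal and eye-velocity inputs are combined additively in one context and divisively in the other, so the resulting percept depends entirely on which interpretation of the viewing geometry the system adopts.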