Learning Invariant Object Representations through Local Prediction Error Minimization in a Model of Generative Vision
Matthias Brucklacher, Sander M. Bohte, Jorge F. Mejias, Cyriel M. A. Pennartz, University of Amsterdam, Netherlands
Poster Session 1
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
The visual processing stream is capable both of inferring object identity under changes in viewing conditions and of using object knowledge to fill in missing sensory information. Current models of invariant recognition process information in a feedforward manner, leaving open the question of how the top-down pathway is trained. Here, we show that predictive coding networks, as a model of generative perception, acquire object representations invariant to rotation angle, scale, and lateral position when trained on sequences of continuously moving objects. The network reconstructs whole images from partially occluded inputs, akin to the observed decodability of occluded scene information in human early visual areas, a capacity that can only be addressed by models with information-carrying recurrent connections. Furthermore, the resulting dynamics of error-encoding neurons in the model provide a novel angle for experimental research on the neural encoding of prediction errors.
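The core mechanism named in the title, local prediction error minimization, can be illustrated with a minimal Rao-and-Ballard-style predictive coding loop. The sketch below is not the authors' network: the linear generative model, the dimensions, the learning rates, and the toy data are all assumptions made here for illustration. It shows the two local computations the abstract relies on: error neurons encoding the difference between the input and a top-down prediction, and representation/weight updates driven only by that error.

```python
import numpy as np

# Illustrative sketch only, NOT the model from the poster: a single-layer linear
# predictive coding network. A top-down weight matrix W generates a prediction
# W @ r of the input x from a latent representation r; error neurons carry
# e = x - W @ r. Inference descends the local squared error w.r.t. r, and
# learning applies a local Hebbian update to W (error x representation).
rng = np.random.default_rng(0)
n_input, n_latent = 16, 4  # assumed toy dimensions
W = rng.normal(scale=0.1, size=(n_input, n_latent))  # top-down (generative) weights

def infer(x, W, n_steps=100):
    """Settle the latent representation r by minimizing ||x - W r||^2 locally."""
    r = np.zeros(W.shape[1])
    # step size chosen from the spectral norm of W so the descent stays stable
    lr = 1.0 / (np.linalg.norm(W, 2) ** 2 + 1e-6)
    for _ in range(n_steps):
        e = x - W @ r          # error neurons: input minus top-down prediction
        r += lr * (W.T @ e)    # representation update driven by fed-back error
    return r

def recon_error(X, W):
    """Mean squared reconstruction error over a batch of inputs."""
    return float(np.mean([np.sum((x - W @ infer(x, W)) ** 2) for x in X]))

def learn(X, W, n_epochs=30, lr_w=0.02):
    """Hebbian weight update: each synapse changes by (local error x local activity)."""
    for _ in range(n_epochs):
        for x in X:
            r = infer(x, W)
            e = x - W @ r
            W = W + lr_w * np.outer(e, r)
    return W

# toy data: inputs generated from 4 hidden causes (stand-in for object sequences)
A = rng.normal(size=(n_latent, n_input))
X = rng.normal(size=(40, n_latent)) @ A

err_before = recon_error(X, W)
W = learn(X, W)
err_after = recon_error(X, W)  # reconstruction error drops after learning
```

After training, the same inference loop run on a partially occluded input (with the error restricted to the visible pixels) yields a latent r whose top-down prediction fills in the occluded region — the toy analogue of the reconstruction-from-occlusion behavior described in the abstract.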