Optimizing fidelity of uncertainty representation in distributional codes
Mehrdad Salmasi, Maneesh Sahani, University College London, United Kingdom
Poster Session 1
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
The brain perceives the world through noisy sensory signals. Experimental findings suggest that the brain performs probabilistic inference over variables of interest, taking into account both the aleatoric and epistemic uncertainty of the sensory signals. Different hypotheses have been proposed for how distributional beliefs are encoded in the brain. Distributed distributional coding (DDC) posits that a distribution is represented by the expected values of a set of encoding functions. How the brain learns efficient encoding functions for representing beliefs is not yet understood. Here we propose an information-theoretic approach to learning an optimal representation of uncertainty. We assume that the distributional beliefs have a sparse representation in some basis, and employ a generative model with hierarchical priors to model noisy DDC values of the belief. Given the noisy DDC measurements, we compute a posterior distribution over the distributional beliefs. The DDC encoding functions are then modified to minimize the brain's uncertainty about the belief; this is achieved by minimizing the entropy of the distribution over beliefs. We show that the learned encoding functions capture some of the properties of neuronal tuning functions.
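To make the DDC representation concrete, the following is a minimal illustrative sketch (not the authors' implementation): a belief p(x) is summarized by Monte Carlo estimates of the expectations r_k = E_p[phi_k(x)] for a fixed set of encoding functions phi_k. The choice of Gaussian tuning curves as encoders, their centers and width, and the example belief are all assumptions made for illustration.

```python
import numpy as np

def gaussian_encoders(centers, width):
    """Hypothetical encoding functions: Gaussian tuning curves phi_k(x)."""
    return [lambda x, c=c: np.exp(-0.5 * ((x - c) / width) ** 2) for c in centers]

def ddc_encode(samples, encoders):
    """Monte Carlo estimate of the DDC vector r_k = E_p[phi_k(x)],
    given samples drawn from the belief p(x)."""
    return np.array([phi(samples).mean() for phi in encoders])

# Example belief: a Gaussian N(0.5, 0.2^2) over a scalar latent variable.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=0.2, size=10_000)

# Nine encoding functions tiling the interval [-1, 1] (assumed layout).
centers = np.linspace(-1.0, 1.0, 9)
encoders = gaussian_encoders(centers, width=0.3)

r = ddc_encode(samples, encoders)  # the DDC representation of the belief
```

The resulting vector r is largest for encoders whose centers lie near the mass of p(x), which is why learned encoding functions can come to resemble neuronal tuning curves; the abstract's contribution is to adapt the phi_k themselves by minimizing posterior entropy over beliefs, which this sketch does not attempt.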