Factorized convolution models for interpreting neuron-guided image synthesis
Binxu Wang, Carlos Ponce, Harvard Medical School, United States
Posters 2 (Poster)
Pacific Ballroom H-O
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -7)
Convolutional neural networks have been used extensively to model neurons in the visual systems of primates and rodents. However, this is an ill-posed regression problem, because the number of image-response pairs is often far smaller than the number of feature regressors; as a result, many combinations of weights can fit the training set equally well. Previous neuron-modeling methods used unsupervised feature reduction and penalized regression to tackle this problem, but this solution discards the spatial structure of the feature units for the sake of computational efficiency. As a consequence, these approaches usually yield non-smooth or non-local weight structures, which are hard to interpret and hinder further investigation of visual selectivity. Here, we propose a "supervised" feature reduction method: it computes the covariance of the image features with the neuronal response, then uses tensor factorization to find the feature and spatial factors characterizing the neuron's feature and spatial selectivity. This method can be combined with standard penalized regression. It is as efficient and accurate as previous penalized regression methods at predicting neuronal responses, and faster than previous factorized models. Moreover, it localizes receptive fields more accurately, which aids interpretation of the neuron's preferred features. In this manner, we transform a dense "black-box" model of a visual neuron into a low-rank, part-based model that is easier to describe and investigate, advancing not only our ability to model neurons but also to explain their tuning.
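The core computation described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' released code: it assumes synthetic CNN feature maps and responses, computes the response-weighted covariance tensor over feature units, and approximates the tensor factorization step with a rank-1 SVD of the unfolded covariance, separating a channel (feature) factor from a spatial (receptive-field) factor. All array sizes and variable names here are hypothetical.

```python
import numpy as np

# Hypothetical sizes: N images, C channels, an HxW spatial grid of a CNN layer
N, C, H, W = 500, 64, 14, 14
rng = np.random.default_rng(0)
feats = rng.standard_normal((N, C, H, W))  # CNN feature maps per image (synthetic)
resp = rng.standard_normal(N)              # neuronal responses (synthetic)

# "Supervised" feature reduction: covariance of each feature unit with the response
f = feats - feats.mean(axis=0)
r = resp - resp.mean()
cov = np.tensordot(r, f, axes=(0, 0)) / (N - 1)  # shape (C, H, W)

# Rank-1 factorization of the covariance tensor via SVD on the unfolded matrix,
# yielding a feature factor (channel weights) and a spatial factor (RF map)
U, S, Vt = np.linalg.svd(cov.reshape(C, H * W), full_matrices=False)
feat_factor = U[:, 0] * np.sqrt(S[0])                 # (C,)  feature selectivity
spat_factor = (Vt[0] * np.sqrt(S[0])).reshape(H, W)   # (H, W) receptive-field map
```

A rank-1 factorization is the simplest case; higher ranks would keep more singular components, and the resulting low-dimensional factors could then feed into the penalized regression the abstract mentions.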