How many non-linear computations are required for CNNs to account for the response properties of V1?
Hui-Yuan Miao, Hojin Jang, Frank Tong, Vanderbilt University, United States
Session:
Posters 2 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Computational models of the primary visual cortex (V1) have suggested that V1 neurons behave like Gabor filters followed by simple non-linearities. However, a recent study that used convolutional neural networks (CNNs) to predict V1 activity concluded that V1 relies on far more non-linear computations than previously thought. Specifically, neuronal responses in monkey V1 to thousands of natural and synthesized images were best predicted by an intermediate layer of VGG-19, reached only after several non-linear operations. However, the lower layers of VGG-19 might have performed poorly due to other factors, such as their small receptive field sizes. Here, we re-evaluated this issue by testing the performance of AlexNet, which has much larger receptive fields in its lower layers. In contrast to VGG-19, the first convolutional layer of AlexNet best predicted V1 responses. Furthermore, a control analysis revealed that the best-performing layer of VGG-19 shifted to lower layers after the input images were rescaled to be smaller. We further showed that a modified version of AlexNet could match the performance of VGG-19 after just a few non-linear computations. Taken together, our findings demonstrate that the response properties of V1 neurons can be fully explained by incorporating only a few non-linear computations.
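The layer-wise comparisons described above follow the general CNN encoding-model approach: extract activations from a chosen layer for each stimulus and fit a regularized linear readout to the recorded neural responses. The sketch below illustrates this for AlexNet's first convolutional layer; it is not the authors' pipeline, and the images, neural responses, ridge settings, and feature handling are placeholder assumptions.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Load a pretrained AlexNet and probe its first convolutional layer (features[0]).
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

activations = {}
def hook(module, inputs, output):
    activations["conv1"] = output.detach()

alexnet.features[0].register_forward_hook(hook)

# Placeholder stimuli and responses: random images and random firing rates for one
# hypothetical neuron. The actual study used thousands of natural and synthesized
# images with real preprocessing (e.g., the weights' recommended transforms).
n_images = 200
images = torch.rand(n_images, 3, 224, 224)
neural_responses = np.random.rand(n_images)

# Run each image through the network and collect flattened conv1 activations.
features = []
with torch.no_grad():
    for img in images:
        alexnet(img.unsqueeze(0))
        features.append(activations["conv1"].flatten().numpy())
features = np.stack(features)

# Fit a cross-validated ridge regression mapping conv1 features to the neuron's
# responses, then report predictive accuracy on held-out images.
X_train, X_test, y_train, y_test = train_test_split(
    features, neural_responses, test_size=0.2, random_state=0)
readout = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_train, y_train)
print("Held-out R^2 for conv1 features:", readout.score(X_test, y_test))
```

Repeating this fit for each layer and comparing held-out prediction accuracy across layers is one way to ask which stage of non-linear processing best accounts for V1 responses; in practice, analyses of this kind typically restrict features to units whose receptive fields overlap the recorded neuron's and correct for measurement noise.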