P-2.64

VOneCAE: Interpreting through the eyes of V1

Subhrasankar Chatterjee, Debasis Samanta, IIT Kharagpur, India

Session:
Posters 2 (Poster)

Track:
Cognitive science

Location:
Pacific Ballroom H-O

Presentation Time:
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)

Abstract:
Tremendous progress has been made in proposing models that try to explain image understanding in the human brain. However, the available models either lack high prediction accuracy across all visual areas or are difficult to interpret with respect to the human visual system. To address this problem, this paper introduces the VOneCAE architecture. The VOneCAE model consists of two components: the VOne block, which is designed to improve biological interpretability, and the Convolutional AutoEncoder (CAE) block, which constructs a compressed feature space through unsupervised learning. Experiments reveal that the VOne block accurately predicts the early visual areas, such as V1 and V2, while the CAE block performs well for the late visual areas, such as V4 and IT. The combination of the two blocks performs well across all visual areas.
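The two-stage idea described in the abstract can be sketched in a few lines of PyTorch: a fixed, V1-inspired front end (approximated here by a Gabor filter bank with rectification) feeding a convolutional autoencoder trained on a reconstruction objective, whose bottleneck provides the compressed feature space. The filter counts, layer sizes, and training loop below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a VOne-like front end plus convolutional autoencoder.
# All hyperparameters here are assumptions for illustration only.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def gabor_kernel(size, theta, sigma=2.0, lam=4.0, psi=0.0, gamma=0.5):
    """Return a (size, size) Gabor kernel with orientation theta (radians)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    x_t = xs * math.cos(theta) + ys * math.sin(theta)
    y_t = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * torch.cos(2 * math.pi * x_t / lam + psi)


class V1FrontEnd(nn.Module):
    """Fixed (non-learned) Gabor filter bank with rectification, standing in
    for the biologically inspired V1-like processing of the VOne block."""

    def __init__(self, n_orientations=8, kernel_size=9):
        super().__init__()
        kernels = torch.stack(
            [gabor_kernel(kernel_size, k * math.pi / n_orientations)
             for k in range(n_orientations)]
        ).unsqueeze(1)  # shape (n_orientations, 1, k, k) for grayscale input
        self.register_buffer("weight", kernels)
        self.padding = kernel_size // 2

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale images
        return F.relu(F.conv2d(x, self.weight, padding=self.padding))


class ConvAutoEncoder(nn.Module):
    """Convolutional autoencoder that compresses the V1-like feature maps;
    the bottleneck activations serve as the compressed feature space."""

    def __init__(self, in_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)           # compressed features (late visual areas)
        return self.decoder(z), z     # reconstruction (unsupervised objective)


# One unsupervised training step on a placeholder batch, to show the loop shape.
v1 = V1FrontEnd()
cae = ConvAutoEncoder(in_channels=8)
optimizer = torch.optim.Adam(cae.parameters(), lr=1e-3)

images = torch.rand(4, 1, 64, 64)     # placeholder grayscale batch
v1_features = v1(images).detach()     # the V1-like front end stays fixed
reconstruction, latent = cae(v1_features)
loss = F.mse_loss(reconstruction, v1_features)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch, early-visual-area predictions would be read out from the fixed Gabor responses and late-visual-area predictions from the autoencoder bottleneck, mirroring the division of labor the abstract reports.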

License:
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
DOI:
10.32470/CCN.2022.1173-0
Publication:
2022 Conference on Cognitive Computational Neuroscience
Session P-2
P-2.1: A Unified Account of Adaptive Learning in Different Statistical Environments
Niloufar Razmi, Matthew Nassar, Brown University, United States
P-2.2: Grid representations for efficient generalization
Linda Yu, Matthew Nassar, Brown University, United States
P-2.3: Role of pupil-linked uncertainties and rewards in value-based decision making
Zoe He, Dalin Guo, Angela Yu, University of California San Diego, San Diego, United States; Maëva L’Hôtellier, Alexander Paunov, Florent Meyniel, CEA Paris-Saclay, Gif-sur-Yvette, France, and Université de Paris, France
P-2.4: ConvNets Develop Characteristics of Visual Cortex when Receiving Retinal Input
Danny da Costa, Lukas Kornemann, Rainer Goebel, Mario Senden, Maastricht University, Netherlands
P-2.5: Continual Reinforcement Learning with Multi-Timescale Successor Features
Raymond Chua, Blake Richards, Doina Precup, McGill University, Canada; Christos Kaplanis, DeepMind, United Kingdom
P-2.6: Role of Visual Stimuli in Final Seconds of Decision-making
Tanya Upadhyay, Karthika Kamath, Kirtana Sunil Phatnani, Jieya Rawal, Biju Dominic, Fractal Analytics, India
P-2.7: On the role of feedback in visual processing: a predictive coding perspective
Andrea Alamia, Milad Mozafari, Bhavin Choksi, Rufin VanRullen, CerCo, France
P-2.8: Predictive Coding Dynamics Improve Noise Robustness in A Deep Neural Network of the Human Auditory System
Ching Fang, Erica Shook, Justin Buck, Guillermo Horga, Columbia University, United States
P-2.9: How much do we know about visual representations? Quantifying the dimensionality gap between DNNs and visual cortex
Raj Magesh Gauthaman, Michael Bonner, Johns Hopkins University, United States
P-2.10: Using object reconstruction as top-down attentional feedback yields a shape bias and robustness in object recognition
Seoyoung Ahn, Hossein Adeli, Gregory Zelinsky, Stony Brook University, United States
P-2.11: More Than Meets the fMRI: Representational Similarities between Real and Artificially Generated fMRI Data
Pabitra Sharma, Sveekruth Sheshagiri Pai, Indian Institute of Science, Bangalore, India
P-2.12: How many non-linear computations are required for CNNs to account for the response properties of V1?
Hui-Yuan Miao, Hojin Jang, Frank Tong, Vanderbilt University, United States
P-2.13: Compositionally generalizing task structures through hierarchical clustering
Rex Liu, Michael Frank, Brown University, United States
P-2.14: Locally Euclidean Cognitive Maps for a Spherical Surface
Misun Kim, Christian F Doeller, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
P-2.15: 20 and 40-Hz Flickering-Light Stimulation Induces Changes in Functional Connectivity of Memory-Related Areas
Jeongwon Lee, Donggon Jang, Kassymzhomart Kunanbayev, Dae-Shik Kim, KAIST, Korea (South)
P-2.16: Predicting proprioceptive cortical anatomy and neural coding with topographic autoencoders
Max Grogan, A. Aldo Faisal, Imperial College London, United Kingdom; Lee Miller, Kyle Blum, Northwestern University, United States
P-2.17: Primate Orbitofrontal Learning of Environmental States
David Barack, University of Pennsylvania, United States; C Daniel Salzman, Columbia University, United States
P-2.18: CoGraph: Mapping the Structure of the Cognitive Sciences, Neurosciences, & AI
Andrew Hansen, Joachim Vandekerckhove, Megan Peters, University of California - Irvine, United States; Arjun Pradesh, Indian Institute of Technology - Palakkad, India
P-2.19: Learning Cortical Magnification with Brain-Optimized Convolutional Neural Networks
Florian Mahner, Katja Seeliger, Martin Hebart, Max Planck Institute for Human Cognitive and Brain Sciences, Germany; Umut Güçlü, Donders Institute for Brain, Cognition and Behaviour, Netherlands
P-2.20: Many but not All Deep Neural Network Audio Models Predict Auditory Cortex Responses and Exhibit Hierarchical Layer-Region Correspondence
Greta Tuckute, Jenelle Feather, Dana Boebinger, Josh H. McDermott, Massachusetts Institute of Technology, United States
P-2.21: Neural replay as context-driven memory reactivation
Zhenglong Zhou, Michael Kahana, Anna Schapiro, University of Pennsylvania, United States
P-2.22: Peripheral visual information halves attentional choice biases
Brenden Eum, Antonio Rangel, California Institute of Technology, United States; Stephanie Dolbier, University of California, Los Angeles, United States
P-2.23: Correcting the Hebbian Mistake: Toward a Fully Error-Driven Hippocampus
Yicong Zheng, Xiaonan Liu, Charan Ranganath, Randall O'Reilly, University of California, Davis, United States; Satoru Nishiyama, Kyoto University, Japan
P-2.24: Aligning human subjects with short acquisition-time fMRI training data
Alexis Thual, Stanislas Dehaene, CEA, France; Bertrand Thirion, Inria, France
P-2.25: Modeling Rhythm in Speech as in Music: Towards a Unified Cognitive Representation
Ruolan Li, Naomi Feldman, University of Maryland, United States; Thomas Schatz, Aix Marseille University & CNRS, France
P-2.26: Impact of XAI dose suggestions on the prescriptions of ICU doctors
Myura Nagendran, Anthony Gordon, Aldo Faisal, Imperial College London, United Kingdom
P-2.27: Models of processing complex spoken words: the naïve, the passive, and the predictive
Suhail Matar, Alec Marantz, New York University, United States
P-2.28: The aperiodic activity of LFPs from the human basal ganglia and thalamus show no knee and lower exponent compared to neocortex
Alan Bush, Vasileios Kokkinos, Mark Richardson, Massachusetts General Hospital, Harvard Medical School, United States; Jasmine Zou, Massachusetts Institute of Technology, United States; Witold Lipski, University of Pittsburgh, United States
P-2.29: A multi-level account of the hippocampus from behavior to neurons
Robert Mok, University of Cambridge, United Kingdom; Bradley Love, University College London, United Kingdom
P-2.30: Mesocortical Projections Support Encoding of Behaviorally Relevant Content in the Prefrontal Cortex
Sergei Bugrov, Rensselaer Polytechnic Institute, United States
P-2.31: Different Spectral Representations in Optimized Artificial Neural Networks and Brains
Richard Gerum, Joel Zylberberg, York University, Canada; Cassidy Pirlot, Alona Fyshe, University of Alberta, Canada
P-2.32: Reconstructing the cascade of language processing in the brain using the internal computations of transformer language models
Sreejan Kumar, Theodore Sumers, Ariel Goldstein, Uri Hasson, Kenneth Norman, Thomas Griffiths, Robert Hawkins, Samuel Nastase, Princeton University, United States; Takateru Yamakoshi, University of Tokyo, Japan
P-2.33: Modeling naturalistic face processing in humans with deep convolutional neural networks
Guo Jiahui, Ma Feilong, James V. Haxby, Dartmouth College, United States; Matteo Visconti di Oleggio Castello, University of California, Berkeley, United States; Samuel A. Nastase, Princeton University, United States; M. Ida Gobbini, Università di Bologna, Italy
P-2.34: The neurobiology of strategic competition
Yaoguang Jiang, Michael Platt, University of Pennsylvania, United States
P-2.35: Deep neural networks face a fundamental trade-off to explain human vision
Ivan Felipe Rodriguez Rodriguez, Drew Linsley, Thomas Fel, Thomas Serre, Brown University, United States
P-2.36: Do We Need Deep Learning? Towards High-Performance Encoding Models of Visual Cortex Using Modules of Canonical Computations
Atlas Kazemian, Eric Elmoznino, Michael F. Bonner, Johns Hopkins University, United States
P-2.37: Training BigGAN on an ecologically motivated image dataset
Weronika Kłos, Katja Seeliger, Martin N. Hebart, Max Planck Institute for Human Cognitive and Brain Sciences, Germany; Piero Coronica, Max Planck Computing and Data Facility, Germany
P-2.38: Dynamical Models of Decision Confidence in Visual Perception: Implementation and Comparison
Sebastian Hellmann, Michael Zehetleitner, Manuel Rausch, Catholic University of Eichstätt-Ingolstadt, Germany
P-2.39: The Role of Episodic Memory in Stimulus-Action Association Learning
Soobin Hong, Aspen Yoo, Anne Collins, University of California, Berkeley, United States
P-2.40: Factorized convolution models for interpreting neuron-guided images synthesis
Binxu Wang, Carlos Ponce, Harvard Medical School, United States
P-2.41: Constrained representations of numerical magnitudes
Arthur Prat-Carrabin, Michael Woodford, Columbia University, United States
P-2.42: A Highly Selective Neural Response to Food in Human Visual Cortex Revealed by Hypothesis-Free Voxel Decomposition
Meenakshi Khosla, Apurva Ratan Murty, Elizabeth Mieczkowski, Nancy Kanwisher, Massachusetts Institute of Technology, United States
P-2.43: Deep Learning for Parameter Recovery from a Neural Mass Model of Perceptual Decision-Making
Emanuele Sicurella, Jiaxiang Zhang, Cardiff University, United Kingdom
P-2.44: A neural code for probabilities
Cédric Foucault, Tiffany Bounmy, Sébastien Demortain, Evelyn Eger, Meyniel Florent, NeuroSpin (Cognitive Neuroimaging Unit), France; Bertrand Thirion, NeuroSpin, France
P-2.45: Objects or Context? Learning From Temporal Regularities in Continuous Visual Experience With an Infant-inspired DNN
Cliona O'Doherty, Rhodri Cusack, Trinity College Dublin, Ireland
P-2.46: Optimizing deep learning for Magnetoencephalography (MEG): From sensory perception to sex prediction and brain fingerprinting
Arthur Dehgan, Karim Jerbi, Université de Montréal, MILA, Canada; Irina Rish, Mila, Canada
P-2.47: Similarity in evoked responses does not imply similarity in macroscopic network states across tasks
Javier Rasero, Amy Sentis, Timothy Verstynen, Carnegie Mellon University, United States; Richard Betzel, Indiana University Bloomington, United States; Thomas Kraynak, Peter Gianaros, University of Pittsburgh, United States
P-2.48: Taxonomizing the Computational Demands of Videos Games for Deep Reinforcement Learning Agents
Lakshmi Narasimhan Govindarajan, Rex Liu, Alekh Ashok, Max Reuter, Michael Frank, Drew Linsley, Thomas Serre, Brown University, United States
P-2.49: The Representational Manifold
Manolo Martínez, Universitat de Barcelona, Spain
P-2.50: Heterogeneity in strategy use during arbitration between observational and experiential learning
Caroline Charpentier, Seokyoung Min, John O'Doherty, California Institute of Technology, United States
P-2.51: Spatially-embedded Recurrent Neural Networks: Bridging common structural and functional findings in neuroscience, including small-worldness, functional clustering in space and mixed selectivity
Jascha Achterberg, Danyal Akarca, Duncan Astle, John Duncan, University of Cambridge, United Kingdom; Daniel Strouse, Matthew Botvinick, DeepMind, United Kingdom
P-2.52: How Composite Prior and Noise Shape Multisensory Integration
Xiangyu Ma, He Wang, K. Y. Michael Wong, Hong Kong University of Science and Technology, China; Wen-Hao Zhang, UT Southwestern Medical Center, United States
P-2.53: Model metamers illuminate divergences between biological and artificial neural networks
Jenelle Feather, Guillaume Leclerc, Aleksander Mądry, Josh H McDermott, Massachusetts Institute of Technology, United States
P-2.54: A Connectome-based Predictive Model of Affective Experience During Naturalistic Viewing
Jin Ke, Yuan Chang Leong, The University of Chicago, United States
P-2.55: A Cellular-Level Account of Classical Conditioning
Pantelis Vafidis, Antonio Rangel, California Institute of Technology, United States
P-2.56: Mapping the representation of social information across cortex
Christine Tseng, Storm Slivkoff, Jack Gallant, UC Berkeley, United States
P-2.57: Computational Parametric Mapping: A Method For Mapping Cognitive Models Onto Neuroimaging Data
Simon Steinkamp, David Meder, Oliver Hulme, Copenhagen University Hospital - Amager and Hvidovre, Denmark; Iyadh Chaker, Carthage University, National Institute of Applied Science and Technology, Tunisia; Félix Hubert, University of Geneva, Switzerland
P-2.58: Subtractive prediction error is encoded in the human auditory midbrain
Alejandro Tabas, Sandeep Kaur, Heike Sönnichsen, Katharina von Kriegstein, Technische Universität Dresden, Germany
P-2.59: Simulated voxels from the tuned inhibition model of perceptual metacognition to drive model validation via fMRI
Shaida Abachi, Brian Maniscalco, Megan Peters, University of California, Irvine, United States
P-2.60: Attention based neural networks display human-like one-shot perceptual learning effects
Xujin Liu, Yao Jiang, Mustafa Nasir-Moin, Ayaka Hachisuka, Jonathan Shor, Yao Wang, Biyu He, Eric Oermann, New York University, United States
P-2.61: Interpretable neural network models of visual cortex - A scattering transform approach
Donald Shi Pui Li, Michael F. Bonner, Johns Hopkins University, United States
P-2.62: Deriving Loss Functions for Regression and Classification from Humans
Hansol Ryu, University of Calgary, Canada; Manoj Srinivasan, The Ohio State University, United States
P-2.63: Neural Correlates of Model-Based Generalization
Lukas Neugebauer, Christian Büchel, University Medical Center Hamburg-Eppendorf, Germany
P-2.64: VOneCAE: Interpreting through the eyes of V1
Subhrasankar Chatterjee, Debasis Samanta, IIT Kharagpur, India
P-2.65: Scaling up the Evaluation of Recurrent Neural Network Models for Cognitive Neuroscience
Nathan Cloos, Guangyu Robert Yang, Christopher J. Cueva, Massachusetts Institute of Technology, Belgium; Moufan Li, Tsinghua University, China
P-2.66: Representation learning facilitates different levels of generalization
Fabian M. Renz, Shany Grossman, Nicolas W. Schuck, Max Planck Research Group NeuroCode, Germany; Peter Dayan, Max Planck Institute for Biological Cybernetics, Germany; Christian Doeller, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
P-2.67: Disentangled Face Representations in Deep Generative Models and the Human Brain
Paul Soulos, Leyla Isik, Johns Hopkins University, United States
P-2.68: Modeling Human Eye Movements with Neural Networks in a Maze-Solving Task
Jason Li, Nicholas Watters, Hansem Sohn, Mehrdad Jazayeri, Massachusetts Institute of Technology, United States
P-2.69: Extracting task-relevant low dimensional representations under data sparsity
Seyedmehdi Orouji, Megan Peters, University of California Irvine, United States
P-2.70: Flying Objects: Challenging humans and machines in dynamic object vision
Benjamin Peters, Matthew Retchin, Nikolaus Kriegeskorte, Columbia University, United States
P-2.71: Information coding in frontoparietal regions reflects individual differences in uncertainty-driven choices
Alexander Paunov, Maëva L’Hôtellier, Florent Meyniel, NeuroSpin Center, CEA Paris-Saclay, France; Dalin Guo, Zoe He, Angela Yu, University of California San Diego, United States
P-2.72: Brain-optimized models reveal increase in few-shot concept learning accuracy across human visual cortex
Ghislain St-Yves, Kendrick Kay, Thomas Naselaris, University of Minnesota, United States
P-2.73: An fMRI account of non-optic sight in blindness
Jesse Breedlove, Logan Dowdle, Cheryl Olman, Thomas Naselaris, University of Minnesota, United States; Tom Jhou, Medical University of South Carolina, United States
P-2.74: Quantitative comparison of imagery and perception
Tiasha Saha Roy, Jesse Breedlove, Ghislain St-Yves, Kendrick Kay, Thomas Naselaris, University of Minnesota, United States