Different Spectral Representations in Optimized Artificial Neural Networks and Brains
Richard Gerum, Joel Zylberberg, York University, Canada; Cassidy Pirlot, Alona Fyshe, University of Alberta, Canada
Session:
Posters 2 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Recent studies suggest that artificial neural networks (ANNs) that match the spectral properties of the mammalian visual cortex (a $1/n$ eigenspectrum of the covariance matrix) achieve higher robustness than those that do not. However, no previous work has systematically explored how modifying an ANN's spectral properties affects its performance. A systematic search over spectral regularizers, which force the ANN's eigenspectrum to follow $1/n^\alpha$ power laws with different exponents $\alpha$, shows that larger powers (around 2--3) lead to better validation accuracy and adversarial robustness in dense networks. This surprising finding overturns the notion that the brain-like spectrum (corresponding to $\alpha \sim 1$) always optimizes ANN performance and/or robustness. For convolutional networks, the best $\alpha$ values depend on task complexity and on the evaluation metric: lower $\alpha$ values are optimal for a simple object recognition task (categorizing MNIST handwritten digits), whereas for a more complex task (categorizing CIFAR-10 natural images) lower $\alpha$ values optimize accuracy and higher $\alpha$ values optimize robustness. These results cast doubt on the notion that brain-like spectral properties ($\alpha \sim 1$) \emph{always} optimize ANN performance, and they demonstrate the potential of fine-tuned spectral regularizers to optimize a chosen design metric, i.e., accuracy and/or robustness.
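The abstract does not spell out the form of the spectral regularizer. As a rough illustration only, the sketch below assumes a penalty on the mismatch between the eigenspectrum of a hidden layer's activation covariance and a $1/n^\alpha$ target; the function name, the log-scale loss, and the variance-matched normalization are assumptions for this sketch, not the authors' stated method.

```python
import torch

def spectral_regularizer(activations: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Penalize deviation of the activation eigenspectrum from a 1/n^alpha power law.

    activations: (batch, units) hidden-layer responses.
    alpha: target power-law exponent.
    """
    # Center activations and form the covariance matrix across units.
    centered = activations - activations.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (activations.shape[0] - 1)

    # Eigenvalues of the (symmetric) covariance matrix, sorted in descending order.
    eigvals = torch.linalg.eigvalsh(cov).flip(0).clamp_min(1e-12)

    # Target spectrum: lambda_n proportional to 1/n^alpha, rescaled to the same total variance.
    n = torch.arange(1, eigvals.numel() + 1, dtype=eigvals.dtype, device=eigvals.device)
    target = n.pow(-alpha)
    target = target * (eigvals.sum() / target.sum())

    # Mean squared error between log-spectra (the log scale keeps the spectral tail relevant).
    return torch.mean((eigvals.log() - target.log()) ** 2)
```

Under these assumptions, the penalty would be added to the task loss with a weighting coefficient, e.g. `loss = task_loss + lam * spectral_regularizer(hidden, alpha=2.0)`, where `lam` and `hidden` are placeholders for a regularization strength and a layer's activations.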