Manipulating and Measuring Variation in DNN Representations
Jason Chow, Thomas Palmeri, Vanderbilt University, United States
Posters 1 Poster
Pacific Ballroom H-O
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
We explore how variation in DNN representations can be used to model individual differences in visual cognition. We manipulated how a large number of DNNs were created and measured the resulting variation in network representations. Because it is unclear which similarity metric is best, we compared variants of RSA, CCA, and CKA on key benchmarks. We then manipulated DNN training along a continuum, from randomization of initial weights and of training-image order to variation in the distribution of training images. As a baseline, we measured the variation in representations caused by image augmentation. For early layers of DNNs, all sources of variation produced representational variation smaller than that caused by image augmentation. For later layers, variation in training-image frequency produced variation in network representations comparable to that produced by variation in initial weights or training-image order, while variation in category frequency produced substantially more variation. These results suggest that network-level differences at the level of training-category distributions are a fruitful starting point for modeling individual differences in high-level visual cognition arising from representational variability.
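The abstract does not specify how representational similarity was computed; as one minimal sketch (not the authors' code), linear CKA between two networks' activation matrices over the same stimulus set can be computed with NumPy as follows. The matrix shapes and random inputs here are illustrative assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices (n_stimuli x n_units).

    Returns a similarity in [0, 1]; identical representations (up to
    orthogonal transform and isotropic scaling) score 1.
    """
    # Center each unit's activations across stimuli
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based formula specialized to linear kernels
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Illustrative check: a layer compared with itself scores 1;
# two independent random "layers" score well below 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 64))  # 100 stimuli x 64 units (assumed sizes)
B = rng.standard_normal((100, 64))
print(round(linear_cka(A, A), 3))  # 1.0
print(linear_cka(A, B) < linear_cka(A, A))  # True
```

Metrics of this family differ in their invariances (e.g., CKA is invariant to orthogonal transforms, CCA to any invertible linear map), which is why benchmarking them against each other, as the abstract describes, matters before measuring cross-network variation.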