How Well Do Contrastive Learning Algorithms Model Human Real-time and Life-long Learning?
Chengxu Zhuang, Violet Xiang, Daniel Yamins, Stanford University, United States; Yoon Bai, James DiCarlo, MIT, United States; Xiaoxuan Jia, Allen Institute, United States
Session: Posters 3 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Sat, 27 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Recent progress in unsupervised learning using contrastive objectives has enabled unsupervised deep neural networks to achieve state-of-the-art performance on both visual tasks and adult neural predictivity, in turn opening the possibility that contrastive training processes themselves might model real visual learning. In this work, we evaluate how effectively these algorithms model human learning at two complementary timescales -- real-time learning over short periods, and life-long learning over the longer term. The real-time learning evaluation compares how the visual categorization behaviors of humans and models change, and the life-long learning benchmark compares the performance of these models when trained on a human-like curriculum. Testing multiple high-performing algorithms, we observe that at both timescales, algorithms that explicitly leverage negative samples -- a way of actively comparing the current example to memorized examples -- significantly outperform algorithms that eschew negative samples, a striking contrast to the relative performance of these algorithms on standard offline ImageNet benchmarks. Through further analysis, we show that using negative samples significantly aids learning in low-diversity environments, which is naturally the case for both benchmarks. Our proposed benchmarks and analyses quantitatively expose an open problem space for improved unsupervised learning algorithms that are more human-like.
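To make the distinction at the heart of the abstract concrete, the sketch below contrasts a loss that uses negative samples (an InfoNCE/SimCLR-style objective, where every other image in the batch acts as a negative) with a negative-free loss (a BYOL/SimSiam-style similarity objective that only pulls positive pairs together). This is a minimal PyTorch illustration, not the authors' implementation; the function names and the temperature value are our own choices.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss WITH negative samples (InfoNCE/SimCLR-style).

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Each anchor's positive is its counterpart view; all other images in
    the batch serve as negatives, i.e. the current example is actively
    compared against other stored examples.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)

def negative_free_loss(p1, z2):
    """Similarity loss WITHOUT negatives (BYOL/SimSiam-style).

    p1: (N, D) prediction from one view; z2: (N, D) target embedding of
    the other view, detached as in stop-gradient schemes. Only positive
    pairs are pulled together; no comparison against other examples.
    """
    p1 = F.normalize(p1, dim=1)
    z2 = F.normalize(z2.detach(), dim=1)
    return -(p1 * z2).sum(dim=1).mean()

# Toy usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2).item(), negative_free_loss(z1, z2).item())
```

Note that in a low-diversity stream (few distinct objects per batch, as in both benchmarks described above), only the first objective retains an explicit pressure to separate the current example from others, which is consistent with the abstract's finding that negative-sample algorithms fare better there.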