Attention-based neural networks display human-like one-shot perceptual learning effects
Xujin Liu, Yao Jiang, Mustafa Nasir-Moin, Ayaka Hachisuka, Jonathan Shor, Yao Wang, Biyu He, Eric Oermann, New York University, United States
Session: Posters 2 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Learning from single events and integrating prior knowledge into current decision-making, so-called one-shot learning, is a hallmark of human intelligence. One-shot learning is also a major area of investigation in machine learning, with state-of-the-art algorithms struggling to achieve human-level results across a multitude of tasks. We present the Mooney Image Task, a visual one-shot learning task adapted for performance by both human and computer subjects. Using the Mooney Image Task, we assess the one-shot learning capabilities of modern machine learning algorithms and human subjects via classification performance and psychophysics. We show that spatial convolutional neural networks (CNNs) augmented with temporal sequence models display variable one-shot learning capabilities, with attention-based models achieving the highest performance and displaying human-like psychophysics. These results constitute one of the few assessments of both human and computer one-shot learning on a single task using both task performance and psychophysics, and they suggest that attention mechanisms have more human-like properties than other forms of temporal recurrence.
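The abstract does not specify the attention mechanism used atop the CNN features. As an illustration only, assuming standard scaled dot-product self-attention applied over a sequence of per-frame CNN feature vectors (all function and variable names here are hypothetical, not from the paper), a minimal sketch might look like:

```python
import numpy as np

def self_attention(feats):
    """Scaled dot-product self-attention over a (T, d) sequence of
    per-frame feature vectors (e.g., CNN embeddings of image frames).
    Returns the attended features and the (T, T) attention weights.
    This is an illustrative sketch, not the paper's implementation."""
    d = feats.shape[-1]
    # Pairwise similarity between time steps, scaled by sqrt(d)
    scores = feats @ feats.T / np.sqrt(d)
    # Softmax over the time axis (shifted for numerical stability)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mixture of all time steps' features
    return weights @ feats, weights

# Hypothetical usage: T=3 time steps of d=4-dimensional features
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4))
attended, weights = self_attention(feats)
```

Here `weights` rows sum to one, so each attended output is a convex combination of the sequence's feature vectors; this content-based mixing over time is what distinguishes attention from the fixed recurrence of, e.g., an LSTM.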