Exploring the Plasticity-Stability Trade-Off in Spiking Neural Networks
Nicholas Soures, Dhireesha Kudithipudi, University of Texas at San Antonio, United States
Session: Posters 1 (Poster)
Location: Pacific Ballroom H-O
Presentation Time: Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
The success of backpropagation in deep neural networks has inspired the development of analogous learning rules for spiking neural networks with comparable performance. However, an emerging challenge in machine learning is sequential training on a family of tasks. In this scenario, known as continual learning, traditional models fail to learn new tasks without forgetting information pertinent to previous ones, a phenomenon called catastrophic forgetting. We previously demonstrated that bio-inspired mechanisms mimicking metaplasticity and consolidation can achieve near state-of-the-art performance among regularization-based models in continual learning. Importantly, our models did not require task information, learned from streaming data (each sample is seen only once), and their memory requirements did not grow with the number of tasks. However, the approach suffered from several limitations: metaplasticity substantially hindered future learning, the model relied on high firing activity, and performance depended on the number of samples per task. To prevent the network from saturating at high metaplastic states, which significantly reduces downstream learning, we study two variations of metaplastic learning rules: i) bidirectional metaplastic updates whose consolidation dynamics are tied to the metaplastic state, and ii) larger changes to the metaplastic variable when its value is small.
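To make the two rule variations concrete, below is a minimal NumPy sketch of a metaplasticity-modulated weight update. The function name, constants, and the exact functional forms (the exp(-m) gating, the tanh-based state dynamics) are illustrative assumptions for exposition, not the authors' formulation.

```python
import numpy as np

def metaplastic_update(w, m, grad, lr=0.01, decay=0.1):
    """Illustrative metaplastic weight update (not the authors' exact rule).

    w    : synaptic weights
    m    : non-negative metaplastic (consolidation) states, one per weight
    grad : task-driven learning signal (e.g., from an eligibility-trace
           or surrogate-gradient rule)
    """
    # Plasticity is attenuated as the metaplastic state grows, protecting
    # consolidated weights (stability) at the cost of plasticity.
    dw = lr * np.exp(-m) * grad
    w = w + dw

    # Variant i) bidirectional metaplastic updates: m increases when the
    # update reinforces the weight's current sign (consolidation) and
    # decreases when it conflicts, at a rate tied to the state m itself.
    agreement = np.sign(dw) * np.sign(w)
    dm = np.where(agreement > 0, 1.0 - np.tanh(m), -decay * np.tanh(m))

    # Variant ii) larger changes to m when its value is small: the
    # (1 - tanh(m)) term above shrinks as m grows, so the state saturates
    # softly instead of locking the synapse at a high metaplastic state.
    m = np.clip(m + np.abs(dw) * dm, 0.0, None)
    return w, m

# Example usage with random stand-in learning signals.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=100)
m = np.zeros(100)
for _ in range(10):
    grad = rng.normal(size=100)  # stand-in for a real task-driven signal
    w, m = metaplastic_update(w, m, grad)
```

Under these assumed dynamics, both variations target the saturation problem the abstract describes: the bidirectional term lets consolidated synapses become plastic again when tasks change, while the state-dependent step size keeps m from racing to values that block all downstream learning.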