Contextual Representation Ensembling
Tyler Tomita, Johns Hopkins University, United States
Session:
Posters 1 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Real-world agents must be able to efficiently acquire new skills over a lifetime, a process called "continual learning." Current continual machine learning models fall short because they do not selectively and flexibly transfer prior knowledge to novel contexts. We propose a cognitively inspired model called Contextual Representation Ensembling (CRE), which fills this gap. We compared CRE to state-of-the-art continual machine learning models as well as simpler baselines in a simulated continual learning experiment. CRE demonstrated superior transfer to novel contexts and superior retention when old contexts are re-encountered. Our results suggest that, to achieve efficient continual learning in the real world, an agent must have two abilities: (i) it must be able to recognize context cues in the environment in order to infer which prior knowledge is relevant to the current context, and (ii) it must be able to flexibly recombine that prior knowledge.
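To make the two proposed abilities concrete, the Python sketch below implements a hypothetical context-gated ensemble: context cues are matched against stored context centroids (ability i), and predictions from per-context modules are softly recombined according to that match (ability ii). The class name, gating scheme, and linear per-context predictors are illustrative assumptions for exposition only, not the published CRE architecture.

    # Hypothetical sketch of a context-gated ensemble. All names and the
    # gating scheme are illustrative assumptions, not the authors' method.
    import numpy as np

    class ContextGatedEnsemble:
        """Ensemble of per-context linear predictors, softly combined by
        similarity between the current context cue and stored centroids."""

        def __init__(self, temperature=1.0):
            self.centroids = []   # one context-cue centroid per learned context
            self.weights = []     # one linear predictor per learned context
            self.temperature = temperature

        def add_context(self, cues, X, y):
            """Learn a new context: store its cue centroid and fit a
            least-squares predictor on that context's data."""
            self.centroids.append(cues.mean(axis=0))
            w, *_ = np.linalg.lstsq(X, y, rcond=None)
            self.weights.append(w)

        def _gate(self, cue):
            """Ability (i): infer relevance of prior contexts from cue
            similarity, via a softmax over negative centroid distances."""
            dists = np.array([np.linalg.norm(cue - c) for c in self.centroids])
            logits = -dists / self.temperature
            e = np.exp(logits - logits.max())
            return e / e.sum()

        def predict(self, cue, x):
            """Ability (ii): recombine prior knowledge as a gate-weighted
            sum of per-context predictions."""
            g = self._gate(cue)
            preds = np.array([x @ w for w in self.weights])
            return g @ preds

    # Usage: learn three simulated contexts, then query with a new cue.
    rng = np.random.default_rng(0)
    model = ContextGatedEnsemble()
    for ctx in range(3):
        cues = rng.normal(ctx, 0.1, size=(20, 2))  # context cue per sample
        X = rng.normal(size=(20, 5))
        y = X @ rng.normal(size=5)                 # context-specific task
        model.add_context(cues, X, y)
    print(model.predict(np.array([1.0, 1.0]), rng.normal(size=5)))

Under this reading, remembering an old context corresponds to the gate concentrating on that context's module when its cues reappear, while transfer to a novel context comes from blending the modules of similar prior contexts.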