A Unified Account of Adaptive Learning in Different Statistical Environments
Niloufar Razmi, Matthew Nassar, Brown University, United States
Posters 2 (Poster)
Pacific Ballroom H-O
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Humans adjust their learning rate rationally according to local environmental statistics and calibrate it based on the broader statistical context. To capture the broad range of dynamic learning behaviors that humans show in response to changes in local environmental statistics, we introduce a generalized framework of adaptive learning based on dynamic state representation. We will first present our biologically plausible neural network model, which shifts its internal context upon receiving supervised signals that mismatch its output, thereby changing the “state” to which feedback is associated. By introducing these state transitions, we show how learning can either increase or decrease depending on the duration over which the new state is maintained. We extend our previously published model by tackling the larger question of how the structure governing state transitions can be inferred from data. To do so, we develop a Bayesian model in which changepoint, oddball, and reversal contexts can be inferred using a sticky variant of the Chinese restaurant process. We show that this model can learn to behave appropriately in different environments by learning the underlying structure, i.e., how states transition. Finally, we describe ongoing attempts to incorporate structure learning into our neural network model, thereby allowing it to develop appropriate adaptive learning strategies de novo.
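For readers unfamiliar with the sticky Chinese restaurant process, the following is a minimal illustrative sketch of sampling a state sequence from such a prior; it is not the authors' implementation, and the parameter names `alpha` (concentration) and `kappa` (stickiness) are assumptions. Higher stickiness favors long runs of the same state (changepoint-like dynamics), while a low stickiness with high concentration favors frequent, short-lived states (oddball-like dynamics).

```python
import random

def sample_sticky_crp(n_obs, alpha=1.0, kappa=5.0, seed=0):
    """Sample a state sequence from a sticky Chinese restaurant process.

    alpha: concentration parameter (propensity to open a new state).
    kappa: stickiness parameter (extra probability mass on repeating
           the previous state). Both names are illustrative choices.
    """
    rng = random.Random(seed)
    states = [0]   # the first observation starts state 0
    counts = [1]   # occupancy count for each state so far
    for _ in range(1, n_obs):
        prev = states[-1]
        # Existing states are weighted by their occupancy counts;
        # the previous state gets an extra "sticky" boost of kappa.
        weights = [c + (kappa if k == prev else 0.0)
                   for k, c in enumerate(counts)]
        weights.append(alpha)  # weight for opening a brand-new state
        # Draw a state index proportional to the weights.
        r = rng.random() * sum(weights)
        for k, w in enumerate(weights):
            r -= w
            if r <= 0:
                break
        if k == len(counts):   # a new state was opened
            counts.append(1)
        else:
            counts[k] += 1
        states.append(k)
    return states

# With strong stickiness, the sampled sequence tends to dwell in each
# state for many observations before transitioning.
seq = sample_sticky_crp(50, alpha=1.0, kappa=10.0)
```

In a full inference model, a prior of this kind would be combined with a likelihood over observations to infer, from data, whether feedback reflects a lasting change (changepoint), a transient outlier (oddball), or a return to a previous state (reversal).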