Using Deep Learning Tools for Fitting Reinforcement Learning Models
Milena Rmus, Jimmy Xia, Jasmine Collins, Anne Collins, UC Berkeley, United States
Session:
Posters 1 (Poster)
Location:
Pacific Ballroom H-O
Presentation Time:
Thu, 25 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Abstract:
Computational cognitive modeling has advanced our understanding of learning and decision-making. However, the set of models we use is often limited by technical constraints, such as the feasibility of model fitting. Most modeling methods require computing the likelihood of the data under the model (e.g., to find the parameters that maximize it). However, many computational models have intractable likelihoods, and workarounds designed for this problem apply only to a small subset of models with specific assumptions. To address this issue, we tested a method that uses deep learning tools to estimate model parameters without computing intractable likelihoods. Our results show that we can adequately recover parameters with this end-to-end approach. Our work contributes an important new tool to the ongoing development of computational techniques that will enable researchers to consider a broader set of models and develop better theories of complex human cognition.
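The abstract does not describe the architecture, but the core idea — train an inverse model on data simulated from agents with known parameters, then apply it to held-out data, bypassing the likelihood entirely — can be sketched with a deliberately simple stand-in. This sketch substitutes linear regression on behavioral summary statistics for the authors' deep network; the Q-learning simulator, bandit settings, and all names are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_agent(alpha, beta, n_trials=200, p_reward=(0.8, 0.2)):
    """Simulate a softmax Q-learning agent on a 2-armed bandit."""
    Q = np.zeros(2)
    choices = np.empty(n_trials, dtype=int)
    rewards = np.empty(n_trials)
    for t in range(n_trials):
        # Softmax choice probability for arm 1, then a sampled reward.
        p1 = 1.0 / (1.0 + np.exp(-beta * (Q[1] - Q[0])))
        c = int(rng.random() < p1)
        r = float(rng.random() < p_reward[c])
        Q[c] += alpha * (r - Q[c])  # delta-rule update
        choices[t], rewards[t] = c, r
    return choices, rewards

def summary_features(choices, rewards):
    """Behavioral statistics used as the estimator's input."""
    stay = choices[1:] == choices[:-1]
    rewarded = rewards[:-1] == 1
    return np.array([
        stay[rewarded].mean() if rewarded.any() else 0.5,      # win-stay rate
        stay[~rewarded].mean() if (~rewarded).any() else 0.5,  # lose-stay rate
        choices.mean(),                                        # choice bias
    ])

def make_dataset(n_agents):
    """Simulate agents with known learning rates; return features + targets."""
    alphas = rng.uniform(0.1, 0.9, n_agents)
    X = np.stack([summary_features(*simulate_agent(a, beta=5.0))
                  for a in alphas])
    return X, alphas

# "Train" the inverse model on simulated agents with known parameters ...
X_tr, y_tr = make_dataset(500)
W, *_ = np.linalg.lstsq(np.c_[X_tr, np.ones(len(X_tr))], y_tr, rcond=None)

# ... then estimate parameters for held-out simulated subjects,
# without ever computing a likelihood.
X_te, y_te = make_dataset(200)
y_hat = np.c_[X_te, np.ones(len(X_te))] @ W
print(f"recovery correlation: {np.corrcoef(y_te, y_hat)[0, 1]:.2f}")
```

Swapping the linear map for a recurrent network reading raw trial sequences gives the end-to-end flavor the abstract describes; the training logic — simulate with known parameters, regress parameters from behavior — is the same.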