Taxonomizing the Computational Demands of Video Games for Deep Reinforcement Learning Agents
Lakshmi Narasimhan Govindarajan, Rex Liu, Alekh Ashok, Max Reuter, Michael Frank, Drew Linsley, Thomas Serre, Brown University, United States
Poster Session 2
Pacific Ballroom H-O
Fri, 26 Aug, 19:30 - 21:30 Pacific Time (UTC -8)
Humans learn by interacting with their environments and perceiving the outcomes of their actions. A landmark in artificial intelligence has been the development of deep reinforcement learning (dRL) models capable of doing the same in video games, rivaling humans by learning to perceive and act directly from images. However, it remains unclear whether the successes of dRL models reflect advances in visual representation learning, the effectiveness of reinforcement learning algorithms at discovering better policies for decision-making, or both. To address this question, we systematically modify the visual (i.e., perception) or credit-assignment (i.e., decision-making) challenges in the Procgen benchmark, an extensive suite of parameterized video games. We assess the performance of a canonical dRL agent equipped with a ResNet-18 perceptual system and a proximal policy optimization (PPO) decision-making system. We discover a computational taxonomy of Procgen games: our canonical agent struggles more to learn visually challenging games than games with demanding credit assignment, with the notable exception of games designed around hierarchical credit assignment. Motivated by these findings, we demonstrate that the most efficient way to develop agents that perform better on Procgen is to imbue them with biologically inspired mechanisms that facilitate visual perception and ease the challenge of hierarchical credit assignment.