TK-1: "Varieties of human-like AI" - Ida Momennejad

Saturday, 27 August, 08:30 - 09:15 Pacific Time (UTC -8)
Location: Grand Ballroom A-C
Tutorial Keynote

Tutorial Materials:


Researchers across and within the cognitive and computer sciences increasingly reference “human-like” or “human-level” artificial intelligence. The scope and use of these terms, however, are often inconsistent across papers. By “human-like AI” some mean that an agent matches or exceeds overall human scores on specific games or tasks (benchmarks). Others more carefully control and measure differences in accuracy, errors, and reaction times on a specific task or set of tasks. A stronger version of this approach also requires the architecture of the agents to be compatible with, even predictive of, neuroscientific evidence about how humans solve the task. Finally, some require passing task-specific Turing Tests judged by human observers. The keynote will discuss varieties of human-like AI, including a series of our reinforcement learning studies in single-agent and multi-agent settings. We will discuss both behavioral and neuroscience approaches to assessing the successes and failures of human-like AI, and the road ahead. During the tutorials, we will present Neuro-Nav, an open-source Python library in the style of OpenAI Gym for neurally and cognitively plausible learning agents. If there is time, we will also work with human-like multi-agent deep RL, comparing hierarchical innovation by multi-agent DQN agents with collective behavior in humans (using SAPIENS).

Tutorial Plan:

This tutorial aims to enable participants to learn the practical basics of designing and running reinforcement learning experiments for cognitive and behavioral neuroscience. To do so, the tutorial will use the recently released open-source Neuro-Nav Python toolkit. Rather than installing the toolkit locally, participants will be able to follow along by running the examples in provided Google Colab notebooks directly from their web browsers, with no additional software required.
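Gym-style toolkits of this kind are built around a `reset()`/`step()` interaction loop between an agent and an environment. The following is a minimal, self-contained sketch of that pattern; `LineWorld` and `run_episode` are illustrative names invented here, not part of the Neuro-Nav API.

```python
import random

class LineWorld:
    """Toy 1-D corridor: start at position 0, reward +1 at `goal`.

    Illustrative only -- this mirrors the reset()/step() interface
    popularized by OpenAI Gym, not the actual Neuro-Nav API.
    """

    def __init__(self, goal=4):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def run_episode(env, policy, max_steps=50):
    """Standard agent-environment interaction loop."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

random.seed(0)
# A policy that always moves right eventually reaches the goal.
print(run_episode(LineWorld(), policy=lambda obs: 1))
```

Once this loop is in place, swapping in different agents or environments only requires implementing the same small interface.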

The tutorial will first cover the basics of the toolkit and of framing reinforcement learning problems. It will then walk through a few examples of reproducing existing cognitive neuroscience results from the literature using Neuro-Nav. Next, it will cover using Neuro-Nav to design and run a simple novel study comparing the behavior of different reinforcement learning algorithms. The final section of the tutorial will extend the research into the multi-agent domain and the study of behavior that exists within larger social structures.

Ida Momennejad

Microsoft Research