Artificial Intelligence Testbed

A core goal of science is to produce predictive quantitative models. Recent work in artificial intelligence provides the opportunity to begin building predictive “pixels-to-actions” models of newborn organisms. Specifically, researchers have started building autonomous artificial agents that can learn to interact successfully with their environment, receiving only rewards and pixels as input (akin to the rewards and retinal images received by newborn animals). Importantly, these agents are formalized learning mechanisms in action. Thus, each agent can be thought of as a quantitative model for predicting what a newborn will learn given a specific set of experiences. If an artificial agent learns like a newborn animal, then the agent and animal should develop the same abilities when reared in the same environment.

One of our main goals is to fortify this link between developmental psychology and artificial intelligence by creating a testbed for comparing the learning abilities of newborn animals and autonomous artificial agents. This testbed will allow researchers to raise autonomous artificial agents in virtual controlled-rearing experiments that parallel our controlled-rearing experiments, and then test whether their agents develop the same abilities as newborn animals when provided with the same set of experiences.


For example, to explore the origins of object perception, we can rear newborn chicks (left) and autonomous artificial agents (right) in the same environment. This environment contains a single object moving on a single background.



We can then measure whether the chicks (left) and agents (right) developed the same object recognition abilities from that input by presenting both with the same test trials. In the test trial shown here, the object appears both on a novel background and from a novel viewpoint range. Thus, to succeed, the chicks and the agents would need to build abstract (background-invariant and view-invariant) object concepts.
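To make the logic of this comparison concrete, here is a minimal scoring sketch. It is purely illustrative (the function name, trial format, and numbers are hypothetical, not the lab's actual analysis code): given, for each test trial, the time a subject spent with the imprinted object versus an unfamiliar object, it computes a recognition score. A score near 0.5 indicates chance performance; a higher score indicates recognition that survives the novel background and viewpoint.

```python
def recognition_score(trials):
    """Mean proportion of time spent with the imprinted object.

    trials: list of (time_with_imprinted, time_with_unfamiliar) pairs,
    in seconds, one pair per test trial. Trials with no approach time
    are skipped.
    """
    scores = [imp / (imp + unf) for imp, unf in trials if imp + unf > 0]
    return sum(scores) / len(scores)

# Hypothetical example: three test trials on a novel background
# and novel viewpoint range.
trials = [(180.0, 60.0), (150.0, 90.0), (200.0, 40.0)]
print(round(recognition_score(trials), 3))  # 0.736
```

Because the same score can be computed for chicks and for artificial agents, it provides a common currency for the animal-versus-agent comparison.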


Our testbed is inspired by DeepMind Lab (Beattie et al., 2016). As shown above, the testing platform provides the agent with raw pixels (RGB values) and rewards (e.g., a scalar value reflecting time spent with the imprinted object) as input. The action space includes moving along three dimensions (forward/back, left/right, and jump/crouch) and looking around two axes (up/down and left/right).
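The interface just described can be sketched as a minimal environment loop in the style popularized by reinforcement-learning platforms. This is an illustrative sketch only: the class name, method signatures, and placeholder rendering are assumptions, not the testbed's actual API.

```python
import numpy as np

class ChickChamberEnv:
    """Illustrative sketch of the testbed's agent interface.

    The agent observes raw RGB pixels and a scalar reward. Actions
    cover three movement axes (forward/back, left/right, jump/crouch)
    and two look axes (up/down, left/right), each in {-1, 0, +1}.
    """

    ACTION_DIM = 5  # 3 movement axes + 2 look axes

    def __init__(self, height=64, width=64, seed=0):
        self.height, self.width = height, width
        self.rng = np.random.default_rng(seed)

    def reset(self):
        """Begin an episode; return the first egocentric RGB frame."""
        return self._render()

    def step(self, action):
        """Apply one action; return (pixels, reward, done)."""
        assert len(action) == self.ACTION_DIM
        # Placeholder reward: in the testbed described above, this
        # would reflect time spent with the imprinted object.
        reward = 0.0
        return self._render(), reward, False

    def _render(self):
        # Placeholder frame; the real platform renders the 3D chamber
        # from the agent's current egocentric viewpoint.
        return self.rng.integers(
            0, 256, (self.height, self.width, 3), dtype=np.uint8
        )

# Typical interaction loop: observe pixels, act, receive reward.
env = ChickChamberEnv()
obs = env.reset()
obs, reward, done = env.step([1, 0, 0, 0, 1])  # move forward, look right
```

Framing the testbed this way lets any agent that consumes pixels and rewards be dropped in unchanged, which is what makes head-to-head comparisons with newborn animals possible.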


The pictures above show sample images from the agent’s egocentric viewpoint from different positions in the virtual chamber. Thus, like newborn animals, these artificial agents must learn from raw sensory inputs, first-person views, and their own movements as they explore the environment.

Why Autonomous Artificial Agents?

Why use autonomous artificial agents, rather than simpler models of individual cognitive processes? First, a potentially large amount of intelligence is a direct consequence of the perceptual and physical characteristics of the environment in which the organism develops. To model cognitive development, it may therefore be necessary to raise autonomous artificial agents in perceptually and physically realistic environments (Hassabis et al., 2017). Second, research in artificial intelligence suggests that building agents that can adapt to complex environments requires prioritizing autonomy: empowering agents to learn their own internal models and how to use them (Botvinick et al., 2017). Third, as emphasized in the field of Developmental Robotics, a central goal in developmental science is to understand how systems made up of simple elements can operate with sufficient complexity to be autonomous agents (Cangelosi & Schlesinger, 2015). With autonomous artificial agents, we can model how developmental processes regulate the interaction between the agent and the environment, allowing complex cognitive capacities to emerge.

An Open-Source Testbed for Modeling the Origins of the Mind

We plan to make this testbed widely available to the scientific community, as an open-source platform. Our hope is that this testbed will become a resource for labs across developmental psychology, neuroscience, and artificial intelligence. Understanding the origins and basis of cognition is a key challenge for all of these fields, and high-precision studies with newborn animals are critical for establishing reliable benchmarks.

Characterizing the learning mechanisms in newborn brains is a complicated, high-dimensional problem. The search space of possible network architectures and learning algorithms is vast, so it may take many labs working in parallel to find neural architectures that learn like newborn animals.