How do newborns learn to see and understand the world? Although philosophers and psychologists have puzzled over the origins of the mind for centuries, two major barriers have hindered progress. First, since humans can't be raised in strictly controlled environments from birth, it has not been possible to examine how specific experiences shape the newborn mind. Second, we can't test infants continuously for long periods of time, which has prevented precise measurement of how vision emerges and changes during development.
To overcome these two barriers, we can turn to controlled-rearing studies of domestic chicks. Since chickens and humans use homologous neural circuits to process sensory input, controlled-rearing studies of newborn chicks can inform our understanding of the development of vision in humans. Recently, our lab developed an automated, high-throughput method that allows newborn chicks to be raised from birth in strictly controlled virtual environments. Using image-based tracking software, we record all of the chicks' behavior (24 hours/day, 7 days/week), providing a complete digital record of each newborn's movements over their first weeks of life. As a result, we can measure early emerging perceptual and cognitive abilities with an unprecedented degree of precision. Our lab now contains 48 automated controlled-rearing chambers, allowing us to conduct a large number of experiments simultaneously. The animation below shows the controlled-rearing chambers in our lab and what the chambers look like from the chicks' perspective.
So far, our results indicate that newborn animals have high-level visual abilities at the onset of vision. For example, newborn chicks can build an abstract (viewpoint-invariant) representation of the first object they see in their life, and can recognize objects rapidly, within a fraction of a second. Newborn brains are therefore equipped with powerful visual processing machinery. However, we have also discovered that these abilities develop correctly only when newborns are raised in a specific type of world: a natural (slow and smooth) visual environment. Our working hypothesis is that slow and smooth visual input optimizes the newborn brain, allowing for the emergence of abstract mental models of objects.
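One way to make the "slow and smooth" hypothesis concrete is as a temporal-slowness objective: a learning signal that is low when a representation changes gradually across consecutive frames and high when it jumps around. The sketch below is purely illustrative (the function name, embedding shapes, and toy data are our own stand-ins, not the lab's actual models); it simply shows why temporally smooth input and temporally scrambled input look very different to such an objective.

```python
import numpy as np

def slowness_loss(features):
    """Mean squared change between consecutive frame embeddings.

    Temporally smooth input yields a low loss; the same frames in
    scrambled order yield a high one. This is one simple way to
    formalize why slow, smooth visual experience might aid learning.
    """
    diffs = np.diff(features, axis=0)  # frame-to-frame change
    return float(np.mean(diffs ** 2))

rng = np.random.default_rng(0)

# A smoothly drifting embedding sequence (a stand-in for the features
# of natural, temporally continuous video)...
smooth = np.cumsum(rng.normal(scale=0.01, size=(100, 8)), axis=0)

# ...versus the very same frames presented in shuffled order.
scrambled = smooth[rng.permutation(len(smooth))]

assert slowness_loss(smooth) < slowness_loss(scrambled)
```

A network trained to minimize an objective like this is pushed to represent the stable properties of a scene (such as object identity) rather than fast-changing ones, which is the intuition behind the hypothesis stated above.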
One of our major goals is to develop a quantitatively accurate encoding model of newborn vision. To do so, we build biologically inspired convolutional neural networks (unsupervised deep learning systems) designed to reproduce the object recognition abilities of the newborn animals tested in our controlled-rearing experiments. The data we gather from the newborn chicks provide a critical benchmark against which to compare these computational models: we give the models and the chicks the same visual input, then test them on the same object recognition tasks. To reliably predict newborn vision, a computational model should succeed where the chicks succeed, and fail where the chicks fail.
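The benchmarking logic described above can be sketched in a few lines: a model is scored not on raw accuracy but on how well its pattern of successes and failures matches the chicks'. Everything here is a hypothetical toy (the viewpoint labels, the chick outcomes, and the two stand-in "models" are invented for illustration, not the lab's data or networks).

```python
def agreement(model_predicts, chick_correct):
    """Fraction of test stimuli on which model and chicks agree:
    both succeed or both fail. Matching failures counts as much
    as matching successes."""
    matches = [model_predicts(s) == chick_correct[s] for s in chick_correct]
    return sum(matches) / len(matches)

# Toy benchmark: suppose the chicks recognize the object from two
# familiar viewpoints but fail on one unusual viewpoint.
chick_correct = {"front": True, "side": True, "top": False}

model_a = lambda view: True          # succeeds on every viewpoint
model_b = lambda view: view != "top" # mirrors the chicks' failure

score_a = agreement(model_a, chick_correct)
score_b = agreement(model_b, chick_correct)

# The model that fails where the chicks fail is the better
# candidate encoding model, even though model_a is "more accurate".
assert score_b > score_a
```

The design choice this illustrates: a model that outperforms the animal everywhere is not a good model of the animal, because it does not capture the constraints of the newborn visual system.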
Watch the video below for a talk presented to MIT's Center for Brains, Minds, and Machines Seminar Series.
What are the origins of high-level vision? Is this ability hardwired by genes, or is it learned during development? Although researchers have been wrestling with this question for over a century, progress has been hampered by two major limitations: (1) most newborn animals cannot be raised in controlled environments from birth, and (2) most newborn animals cannot be observed and tested for long periods of time. Thus, it has generally not been possible to characterize how specific visual inputs relate to specific cognitive outputs in the newborn brain.
To overcome these two limitations, I recently developed an automated, high-throughput controlled-rearing technique. This technique can be used to measure all of a newborn animal's behavior (9 samples/second, 24 hours/day, 7 days/week) within strictly controlled virtual environments. In this talk, I will describe a series of controlled-rearing experiments that reveal how one high-level visual ability—invariant object recognition—emerges in the newborn brain. Further, I will show how these controlled-rearing data can be linked to models of visual cortex to characterize the computations underlying newborn vision. More generally, I will argue that controlled rearing can serve as a critical tool for distinguishing between competing theories and models, for both developmental psychology and computational neuroscience.