Samantha M. W. Wood and Justin N. Wood

"Invariant Object Recognition" refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition.

The three unique images of Object 1 and Object 2 presented to the chicks during the input phase. Four chicks were presented with Object 1, and six chicks were presented with Object 2. Object 2 served as the unfamiliar object for the chicks that were imprinted to Object 1, and vice versa. The three images changed at a rate of 1 image/second, causing the virtual object to rotate abruptly back and forth through a 30° viewpoint range. Chicks never observed the virtual object (or any other object) from any other viewpoint during the input phase.
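As a concrete illustration of this presentation schedule, the short Python sketch below generates the sequence of azimuth viewpoints shown over time. The specific viewpoint values (0°, 15°, 30°) and the back-and-forth ordering are assumptions for illustration; only the 15° spacing, the 30° total range, and the 1 image/second rate come from the caption.

from itertools import chain, cycle, islice

def input_phase_schedule(seconds, viewpoints=(0, 15, 30)):
    """Return one azimuth viewpoint (degrees) per second, sweeping back and
    forth so the virtual object appears to rotate through a 30-degree range."""
    forward = list(viewpoints)
    backward = forward[-2:0:-1]   # interior viewpoints in reverse order, e.g. [15]
    return list(islice(cycle(chain(forward, backward)), seconds))

if __name__ == "__main__":
    # First 10 seconds of presentation: [0, 15, 30, 15, 0, 15, 30, 15, 0, 15]
    print(input_phase_schedule(10))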

(Top) The test viewpoints shown during the test phase. The viewpoint range shown during the input phase is indicated by the blue boxes. (Bottom) Chicks’ average percentage of correct trials across the 27 viewpoint ranges. Chance performance was 50%. Each subject received seven test trials for each viewpoint range. Thus, each viewpoint cell in the figure reflects the data from 28 test trials for Object 1 (7 test trials × 4 subjects) and 42 test trials for Object 2 (7 test trials × 6 subjects), for a total of 1,890 test trials across all viewpoint ranges. The color scale reflects the full range of possible performance values (0–100%).
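The trial counts reported above can be reproduced directly from the numbers in the caption (27 viewpoint ranges, 7 test trials per range per chick, 4 chicks for Object 1 and 6 chicks for Object 2); a minimal check:

viewpoint_ranges = 27
trials_per_range_per_chick = 7
chicks_object1, chicks_object2 = 4, 6

trials_obj1 = trials_per_range_per_chick * chicks_object1       # 28 trials per range
trials_obj2 = trials_per_range_per_chick * chicks_object2       # 42 trials per range
total_trials = (trials_obj1 + trials_obj2) * viewpoint_ranges   # 1,890 trials overall

print(trials_obj1, trials_obj2, total_trials)   # -> 28 42 1890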