Carlos Ponce, MD, PhD
Assistant Professor of Neurobiology, Harvard Medical School
The Neuroscience of Visual Recognition and Perception

A central goal of our lab is to understand how the visual brain works in the natural world, when it is presented with rich and complex scenes such as those we encounter in daily life.

To do this, we study the brain of the rhesus macaque, recording electrophysiological activity from neurons across the visual cortical hierarchy, including V1, V2, V4, and inferotemporal cortex (IT), as well as prefrontal cortex. We use state-of-the-art neural networks both as stimulus generators (e.g., generative adversarial networks) and as models of the visual system (e.g., convolutional neural networks). These machine-intelligence models allow us to manipulate complex image patterns and so identify the information encoded by visual neurons. We then contextualize this encoded information using neural networks and monkey behavior: we train animals in minimalist, ethology-informed tasks, such as free-viewing and preferential looking, in order to understand the origin and development of the visual representations that neural networks extract from the brain.
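The closed-loop idea behind using a generative network as a stimulus generator can be sketched in a few lines. The toy below is an illustration only, not the lab's actual code: a random linear map stands in for the GAN image generator, a fixed preferred-feature vector stands in for a recorded neuron, and a simple evolutionary loop searches latent space for codes whose "images" maximally drive the simulated response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed stand-ins (hypothetical, for illustration):
# - G: a fixed random linear map from latent code to image features,
#   standing in for a GAN image generator.
# - preferred: a fixed feature vector; the "neuron's" firing rate is its
#   dot product with the generated features, standing in for a real neuron.
LATENT_DIM, FEAT_DIM = 32, 128
G = rng.normal(size=(FEAT_DIM, LATENT_DIM))   # toy "generator"
preferred = rng.normal(size=FEAT_DIM)         # toy "neuron" tuning

def neuron_response(z):
    """Simulated firing rate for a latent code z."""
    return float(preferred @ (G @ z))

def evolve(pop_size=40, n_gens=50, sigma=0.3):
    """Evolve a population of latent codes to maximize the simulated
    response: score every code, keep the top quarter, and mutate them
    to form the next generation -- the same closed-loop logic used
    with real neurons and a real image generator."""
    pop = rng.normal(size=(pop_size, LATENT_DIM))
    for _ in range(n_gens):
        scores = np.array([neuron_response(z) for z in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]       # keep top 25%
        parents = elite[rng.integers(len(elite), size=pop_size)]
        pop = parents + sigma * rng.normal(size=parents.shape)  # mutate
    scores = np.array([neuron_response(z) for z in pop])
    return pop[np.argmax(scores)], float(scores.max())

best_z, best_score = evolve()
```

In the real experiments the scoring step is replaced by recorded spike counts from a neuron viewing the generated images, but the select-and-mutate loop is the same.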

Because behavioral tasks are so important to us, our lab also works to improve techniques for animal training, including implementing computer-based automated systems and tablet-based training in the animals' home cages, and more generally using ethological principles to design and titrate the relative difficulty of experimental tasks.

Solving the problem of visual recognition at the intersection of visual neuroscience and machine learning will yield applications that improve automated visual recognition in fields such as medical imaging, security, and self-driving vehicles. But just as importantly, it will illuminate how our inner experience of the visual world comes to be.