Decoding the World at Your Fingertips

By Emilie Josephs

Imagine preparing a holiday meal with your family, or perhaps, this year, with the members of your pandemic bubble. As you stand at the kitchen counter, you will experience a view of a complex reachable environment, populated with objects like knives, vegetables, and glasses, meaningfully arranged on the countertop. From this rich visual information, you will be able to extract the identity of the objects, how to use them, their orientation relative to your body and to each other, whether they are reachable, and much more. How does the visual system support our understanding of these near-scale, reach-relevant environments (which we call “reachspaces”)?

[Image: a man cooking steak on a griddle while holding a glass of wine]

This process starts when light enters the eye and is converted into neural signals by the retina; these signals are then interpreted by circuits in visual cortex that are specialized for decoding visual information. Different sub-regions of visual cortex decode particular kinds of information. For example, views of individual objects and views of navigable-scale environments (aka "scenes") activate different regions of cortex. This separation of function suggests that these two kinds of input are processed with different computations.

Reachspaces are intermediate between objects and scenes in scale. How are they represented in the brain? One possibility is that visual cortex understands reachspaces using the same computations it applies to objects and scenes. Alternatively, because reachspaces differ from views of objects and scenes in both their visual structure and their behavioral implications, they may require a different combination of computations to be understood.

To answer this question, we examined whether different regions of the brain are activated when viewing objects, reachspaces, and scenes. Human participants viewed images from these three categories while their brain responses were recorded using functional MRI. All three conditions elicited strong blood-oxygen-level-dependent (BOLD) activity throughout the occipital and parietal lobes of the brain. However, different regions preferred different conditions: we found preferential activity for objects in regions associated with object processing, and for scenes in known scene-processing regions. Crucially, we also identified three novel regions that preferred reachspaces, suggesting that these views draw on partially different processes than objects and scenes do. In a follow-up experiment, these regions were highly sensitive to the presence of multiple objects, suggesting that visual decoding of reachspaces may rely strongly on the collections of objects in the space.
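
For readers curious what a condition-preference analysis looks like in practice, here is a minimal sketch in Python. It is purely illustrative, not the analysis pipeline from the paper: it assumes we already have each voxel's average response to the three conditions (simulated here with random numbers) and labels a voxel as preferring a condition when its response to that condition exceeds the others by an arbitrary margin, standing in for a proper statistical test.

    import numpy as np

    # Illustrative toy data, not real fMRI measurements: one row per voxel,
    # one column per condition (objects, reachspaces, scenes).
    rng = np.random.default_rng(0)
    n_voxels = 1000
    responses = rng.normal(size=(n_voxels, 3))
    conditions = ["objects", "reachspaces", "scenes"]

    # A voxel "prefers" the condition with the largest mean response.
    preferred = responses.argmax(axis=1)

    # Margin between the best and second-best condition for each voxel.
    sorted_resp = np.sort(responses, axis=1)
    margin = sorted_resp[:, -1] - sorted_resp[:, -2]

    # Keep only voxels whose preference is reasonably strong; the 0.5 cutoff
    # is arbitrary and stands in for a real significance threshold.
    selective = margin > 0.5
    for i, name in enumerate(conditions):
        print(f"{name}-preferring voxels: {np.sum(selective & (preferred == i))}")

In a real experiment, the per-condition responses would come from a general linear model fit to the BOLD time series rather than from raw averages, and voxel selection would use statistical contrasts, but the underlying logic of comparing responses across conditions is the same.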

Going forward, we hope to explore these regions further, including characterizing the precise visual content they encode. Altogether, this project will help us understand the processing cascade that links the image on the retina to a high-level understanding of our reachable world.

Emilie Josephs is a graduate student in the lab of Talia Konkle at Harvard University.


Learn more in the original research article:
Josephs EL, Konkle T. Large-scale dissociations between views of objects, scenes, and reachable-scale environments in visual cortex. Proc Natl Acad Sci U S A. 2020;117(47):29354-29362. doi:10.1073/pnas.1912333117
