Our research aims to address the mystery of how our brains acquire richly structured knowledge about our environments, and how this knowledge helps us learn to predict and control reward.
The Gershman lab uses a combination of behavioral, neuroimaging, and computational techniques to pursue these questions. One prong of our research focuses on how humans and animals discover the hidden states underlying their observations, and how they represent these states. In some cases, these states correspond to complex data structures, such as graphs, grammars, and programs. These data structures strongly constrain how agents infer which actions will lead to reward.
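As a minimal sketch of what "discovering hidden states" can mean computationally, the toy example below performs Bayesian forward filtering in a two-state hidden Markov model: the agent never observes the state directly, only noisy evidence, and maintains a posterior belief over states. The transition matrix, observation model, and observation sequence are invented for illustration; they do not come from any particular study in the lab.

```python
import numpy as np

# Hypothetical two-state world (illustration only, not a lab model).
T = np.array([[0.9, 0.1],      # transition matrix: P(next state | state)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],      # observation model: P(observation | state)
              [0.3, 0.7]])

def filter_step(belief, obs):
    """One forward-filtering update: predict, then weight by evidence."""
    predicted = T.T @ belief            # push the belief through the dynamics
    likelihood = O[:, obs]              # evidence each hidden state assigns to obs
    posterior = likelihood * predicted
    return posterior / posterior.sum()  # renormalize to a distribution

belief = np.array([0.5, 0.5])           # uniform prior over the two states
for obs in [0, 0, 1, 1, 1]:             # a made-up observation sequence
    belief = filter_step(belief, obs)
    print(np.round(belief, 3))
```

Richer representations such as graphs, grammars, or programs replace the flat state space here with structured hypothesis spaces, but the same predict-then-update logic applies.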
A second prong of our research teases apart the interactions between different learning systems. Evidence suggests the existence of at least two systems: a “goal-directed” system that builds an explicit model of the environment, and a “habitual” system that learns state-action response rules. These two systems are subserved by separate neural pathways that compete for control of behavior, but they may also cooperate with one another.
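To make the distinction concrete, the sketch below contrasts the two systems in a tiny invented environment (all matrices and parameters are hypothetical). The "goal-directed" learner plans with an explicit model of transitions and rewards via value iteration; the "habitual" learner caches state-action values directly from experience with a Q-learning update, never consulting the model.

```python
import numpy as np

n_states, n_actions = 2, 2
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P(s'|s,a)
R = np.array([[0.0, 1.0],                                         # reward(s, a)
              [0.5, 0.0]])
gamma = 0.9

# "Goal-directed" (model-based): plan with the explicit model T, R.
V = np.zeros(n_states)
for _ in range(100):
    Q_mb = R + gamma * T @ V          # expected value of each (state, action)
    V = Q_mb.max(axis=1)

# "Habitual" (model-free): learn cached state-action values from
# sampled experience, with no access to T or R themselves.
Q_mf = np.zeros((n_states, n_actions))
alpha, s = 0.1, 0
for _ in range(5000):
    a = rng.integers(n_actions)                        # explore randomly
    s_next = rng.choice(n_states, p=T[s, a])
    td_error = R[s, a] + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * td_error                     # Q-learning update
    s = s_next

print("model-based Q:\n", np.round(Q_mb, 2))
print("model-free  Q:\n", np.round(Q_mf, 2))
```

With enough experience the two value tables converge, but they respond very differently when the environment changes: the model-based planner adjusts immediately once T or R is updated, while the cached model-free values must be unlearned through further experience. This asymmetry is one signature used to tease the systems apart behaviorally.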