
Reward coding

The neural code of reward anticipation in human orbitofrontal cortex.
Kahnt T, Heinzle J, Park SQ, Haynes JD.
Published in Proc Natl Acad Sci U S A. 2010 Mar 30;107(13):6010-5. Epub 2010 Mar 15.

An optimal choice among alternative behavioral options requires precise anticipatory representations of their possible outcomes. A fundamental question is how such anticipated outcomes are represented in the brain. Reward coding at the level of single cells in the orbitofrontal cortex (OFC) follows a more heterogeneous coding scheme than suggested by studies using functional MRI (fMRI) in humans. Using a combination of multivariate pattern classification and fMRI, we show that the reward value of sensory cues can be decoded from distributed fMRI patterns in the OFC. This distributed representation is compatible with previous reports from animal electrophysiology showing that reward is encoded by different neural populations with opposing coding schemes. Importantly, the fMRI patterns representing specific values during anticipation are similar to those that emerge during the receipt of reward. Furthermore, we show that the degree of this coding similarity is related to subjects' ability to use value information to guide behavior. These findings narrow the gap between reward coding in humans and animals and corroborate the notion that value representations in OFC are independent of whether reward is anticipated or actually received.
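The key point of the multivariate approach is that it can detect a distributed code with opposing neural populations, which averaging across voxels would miss. A minimal sketch with synthetic data illustrates this; all numbers, names, and the classifier choice here are assumptions for illustration, not the study's actual analysis pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Hypothetical heterogeneous code: half the voxels respond more to high value,
# half respond more to low value, mimicking opposing neural populations.
signs = np.repeat([1.0, -1.0], n_voxels // 2)
value = rng.integers(0, 2, n_trials)                 # low (0) vs. high (1) reward
patterns = np.outer(value - 0.5, signs) + rng.normal(0.0, 1.0, (n_trials, n_voxels))

# A linear classifier reads out value from the distributed pattern even though
# the mean signal across voxels is zero (a univariate average would miss it).
acc = cross_val_score(SVC(kernel="linear"), patterns, value, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

Because the positive and negative voxel responses cancel in the regional average, only the pattern classifier recovers the value information, which is the motivation for using multivariate pattern classification on fMRI data.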

Perceptual learning and decision-making in human medial frontal cortex.
Kahnt T, Grueschow M, Speck O, Haynes JD.
Published in Neuron. 2011 May 12;70(3):549-59.

The dominant view that perceptual learning is accompanied by changes in early sensory representations has recently been challenged. Here we tested the idea that perceptual learning can be accounted for by reinforcement learning involving changes in higher decision-making areas. We trained subjects on an orientation discrimination task involving feedback over 4 days, acquiring fMRI data on the first and last day. Behavioral improvements were well explained by a reinforcement learning model in which learning leads to enhanced readout of sensory information, thereby establishing noise-robust representations of decision variables. We found stimulus orientation to be encoded in early visual areas and in higher cortical regions such as the lateral parietal cortex and the anterior cingulate cortex (ACC). However, only activity patterns in the ACC tracked changes in decision variables during learning. These results provide strong evidence for perceptual learning-related changes in higher-order areas and suggest that perceptual and reward learning are based on a common neurobiological mechanism.
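The reinforcement-learning account of perceptual learning can be sketched as a feedback-driven reweighting of a fixed sensory population: a delta-rule update strengthens the readout of informative units, so discrimination improves without any change in the sensory responses themselves. The simulation below is a toy illustration of that idea; the population size, learning rate, and update rule are assumptions, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials, alpha = 20, 2000, 0.005   # alpha: assumed learning rate

# Hypothetical sensory population: only the first 5 units carry orientation signal.
tuning = np.zeros(n_units)
tuning[:5] = 1.0
w = rng.normal(0.0, 0.1, n_units)            # readout weights, initially unselective

correct = []
for _ in range(n_trials):
    stim = rng.choice([-1.0, 1.0])                        # two orientations
    resp = stim * tuning + rng.normal(0.0, 1.0, n_units)  # noisy sensory responses
    choice = 1.0 if w @ resp > 0 else -1.0
    reward = float(choice == stim)                        # trial-by-trial feedback
    # Delta-rule update: feedback reweights the readout, so informative units
    # come to dominate the decision variable while the sensory code is unchanged.
    w += alpha * (reward - 0.5) * choice * resp
    correct.append(reward)

early, late = np.mean(correct[:200]), np.mean(correct[-200:])
print(f"accuracy early {early:.2f} -> late {late:.2f}")
```

Accuracy rises across trials purely through changes in the readout weights, which is the mechanism the abstract attributes to higher decision-making areas rather than early sensory cortex.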

Decoding the formation of reward predictions across learning.
Kahnt T, Heinzle J, Park SQ, Haynes JD.
Published in J Neurosci. 2011 Oct 12;31(41):14624-30.

The predicted reward of different behavioral options plays an important role in guiding decisions. Previous research has identified reward predictions in prefrontal and striatal brain regions. Moreover, it has been shown that the neural representation of a predicted reward is similar to the neural representation of the actual reward outcome. However, it has remained unknown how these representations emerge over the course of learning and how they relate to decision making. Here, we sought to investigate learning of predicted reward representations using functional magnetic resonance imaging and multivariate pattern classification. Using a Pavlovian conditioning procedure, human subjects learned multiple novel cue-outcome associations in each scanning run. We demonstrate that, across learning, activity patterns in the orbitofrontal cortex, the dorsolateral prefrontal cortex (DLPFC), and the dorsal striatum coding the value of predicted rewards become similar to the patterns coding the value of actual reward outcomes. Furthermore, we provide evidence that predicted reward representations in the striatum precede those in prefrontal regions and that representations in the DLPFC are linked to subsequent value-based choices. Our results show that different brain regions represent outcome predictions by eliciting the neural representation of the actual outcome. Furthermore, they suggest that reward predictions in the DLPFC are directly related to value-based choices.
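The claim that prediction patterns "become similar to" outcome patterns is typically tested by cross-stage generalization: a classifier trained on outcome-phase patterns is tested on anticipation-phase patterns, and above-chance transfer indicates a shared value code. A toy sketch with synthetic data, where the shared code and all parameters are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_trials, n_voxels = 100, 40

# Assumed shared value code: the same voxel pattern separates high from low
# value at outcome and, more weakly, during anticipation.
code = rng.normal(0.0, 1.0, n_voxels)
value = rng.integers(0, 2, n_trials)
outcome = np.outer(value - 0.5, code) + rng.normal(0.0, 1.0, (n_trials, n_voxels))
anticipation = (0.5 * np.outer(value - 0.5, code)
                + rng.normal(0.0, 1.0, (n_trials, n_voxels)))

# Cross-stage generalization: train on outcome patterns, test on anticipation.
clf = SVC(kernel="linear").fit(outcome, value)
acc = clf.score(anticipation, value)
print(f"cross-stage decoding accuracy: {acc:.2f}")
```

If anticipation patterns carried a value code unrelated to the outcome code, the transfer accuracy would stay at chance; above-chance transfer is what licenses the "prediction elicits the outcome representation" interpretation.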

Neural responses to unattended products predict later consumer choices.
Tusche A, Bode S, Haynes JD.
Published in J Neurosci. 2010 Jun 9;30(23):8024-31.

Imagine you are standing on a street with heavy traffic, watching someone on the other side of the road. Do you think your brain is implicitly registering your willingness to buy any of the cars passing by outside your focus of attention? To address this question, we measured brain responses to consumer products (cars) in two experimental groups using functional magnetic resonance imaging. Participants in the first group (high attention) were instructed to closely attend to the products and to rate their attractiveness. Participants in the second group (low attention) were distracted from the products, and their attention was directed elsewhere. After scanning, participants were asked to state their willingness to buy each product. During the acquisition of neural data, participants were not aware that consumer choices regarding these cars would subsequently be required. Multivariate decoding was then applied to assess the choice-related predictive information encoded in the brain during product exposure in both conditions. Distributed activation patterns in the insula and the medial prefrontal cortex were found to reliably encode subsequent choices in both the high- and the low-attention group. Importantly, consumer choices could be predicted equally well in the low-attention group as in the high-attention group. This suggests that neural evaluation of products and the associated choice-related processing do not necessarily depend on attentional processing of the available items. Overall, the present findings emphasize the potential of implicit, automatic processes in guiding even important and complex decisions.