Generalization of value in reinforcement learning by humans

Contributed by gewimmer

G. Elliott Wimmer, Nathaniel D. Daw and Daphna Shohamy
Authors: G. Elliott Wimmer, Nathaniel D. Daw and Daphna Shohamy
Description: Group-level SPM maps for three contrasts: 1) reward prediction error, 2) reward prediction error due to generalization, 3) choice probability.

Citation: Wimmer GE, Daw ND, Shohamy D. Generalization of value in reinforcement learning by humans. Eur J Neurosci. 2012 Apr;35(7):1092-104.

Abstract: Here, we used functional magnetic resonance imaging and computational model-based analyses to examine the joint contributions of reinforcement learning and generalization. Humans performed a reinforcement learning task with added relational structure, modeled after tasks used to isolate hippocampal contributions to memory. We observed blood oxygen level-dependent (BOLD) activity related to learning in the striatum and also in the hippocampus. By comparing a basic reinforcement learning model to one augmented to allow feedback to generalize between correlated options, we tested whether choice behavior and BOLD activity were influenced by the opportunity to generalize across correlated options. Although such generalization goes beyond standard computational accounts of reinforcement learning and striatal BOLD, both choices and striatal BOLD activity were better explained by the augmented model. Consistent with the hypothesized role for the hippocampus in this generalization, functional connectivity between the ventral striatum and hippocampus was modulated, across participants, by the ability of the augmented model to capture participants' choices.
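The model comparison described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact model: it assumes a standard Rescorla-Wagner prediction-error update for the basic model, and a hypothetical generalization weight `g` that scales how much the same feedback updates the correlated partner option in the augmented model. Whether the partner's value moves with or against the chosen option depends on the task's correlation structure; a positive coupling is assumed here for illustration.

```python
def basic_update(values, chosen, reward, alpha=0.3):
    """Basic RL model: prediction-error update for the chosen option only."""
    delta = reward - values[chosen]       # reward prediction error
    values[chosen] += alpha * delta
    return values

def augmented_update(values, chosen, partner, reward, alpha=0.3, g=0.5):
    """Augmented model (sketch): feedback also generalizes to the
    correlated partner option, scaled by a hypothetical weight g."""
    delta = reward - values[chosen]       # same prediction error
    values[chosen] += alpha * delta
    values[partner] += g * alpha * delta  # generalization step
    return values

# Example: one rewarded choice of option "A", whose partner is "B".
v = augmented_update({"A": 0.0, "B": 0.0}, "A", "B", reward=1.0)
# The partner's value moves part-way with the chosen option's value.
```

With `g = 0`, the augmented model reduces to the basic one, which is what makes the two directly comparable when fit to choice data.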
Journal: European Journal of Neuroscience
Contributors:
DOI: 10.1111/j.1460-9568.2012.08017.x
Field Strength: 3.0 T
ID: 2222
Add Date: Feb. 15, 2017, 7:03 p.m.