Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices

Standard

Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices. / Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus.

In: SOC COGN AFFECT NEUR, Vol. 15, No. 6, 30.07.2020, pp. 695-707.

Publications: SCORING: Contribution to journal/newspaper › SCORING: Journal article › Research › Peer-review


BibTeX

@article{50dde7ac6dda407599d2d3120b6502b3,
title = "Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices",
abstract = "The recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla-Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, with simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions of how to justify whether the observed neural activation is related to the prediction error rather than outcome valence. Finally, we suggest posterior predictive check is a crucial step after model comparison, and we articulate employing hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.",
author = "Lei Zhang and Lukas Lengersdorff and Nace Mikus and Jan Gl{\"a}scher and Claus Lamm",
note = "{\textcopyright} The Author(s) 2020. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.",
year = "2020",
month = jul,
day = "30",
doi = "10.1093/scan/nsaa089",
language = "English",
volume = "15",
pages = "695--707",
journal = "SOC COGN AFFECT NEUR",
issn = "1749-5016",
publisher = "Oxford University Press",
number = "6",

}
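
For readers skimming the record, the abstract centers on the Rescorla-Wagner update rule. Below is a minimal, illustrative Python sketch of that rule; it is not the authors' code, and the task structure (a single option rewarded with 70% probability), the initial value of 0.5 and all names are assumptions chosen for illustration. The printout also makes the abstract's collinearity point concrete: because the prediction error is the outcome minus the current expectation, outcome and prediction-error regressors are strongly correlated by construction.

import numpy as np

rng = np.random.default_rng(0)

def simulate_rw(alpha, n_trials=200, p_reward=0.7):
    """Rescorla-Wagner learning of a single option's value:
    V <- V + alpha * (R - V)."""
    V = 0.5  # initial value estimate (an assumption, not from the paper)
    outcomes, pes = [], []
    for _ in range(n_trials):
        R = float(rng.random() < p_reward)  # binary reward outcome
        pe = R - V                          # prediction error
        V += alpha * pe                     # learning-rate-weighted update
        outcomes.append(R)
        pes.append(pe)
    return np.array(outcomes), np.array(pes)

# A high learning rate chases recent outcomes; a low one yields
# smoother, slower-updating value estimates. Either way, outcome and
# prediction error are correlated by construction (PE = R - V).
for alpha in (0.1, 0.5):
    R, PE = simulate_rw(alpha)
    print(f"alpha={alpha}: corr(outcome, PE) = {np.corrcoef(R, PE)[0, 1]:.2f}")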

RIS

TY - JOUR

T1 - Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices

AU - Zhang, Lei

AU - Lengersdorff, Lukas

AU - Mikus, Nace

AU - Gläscher, Jan

AU - Lamm, Claus

N1 - © The Author(s) 2020. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

PY - 2020/7/30

Y1 - 2020/7/30

N2 - Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, the increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla-Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, with simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions on how to justify whether the observed neural activation is related to the prediction error rather than to outcome valence. Finally, we suggest that the posterior predictive check is a crucial step after model comparison, and we advocate hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models, to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.

AB - Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, the increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla-Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, with simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions on how to justify whether the observed neural activation is related to the prediction error rather than to outcome valence. Finally, we suggest that the posterior predictive check is a crucial step after model comparison, and we advocate hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models, to assist both beginners and advanced users in better implementing and interpreting their model-based analyses.

U2 - 10.1093/scan/nsaa089

DO - 10.1093/scan/nsaa089

M3 - SCORING: Journal article

C2 - 32608484

VL - 15

SP - 695

EP - 707

JO - SOC COGN AFFECT NEUR

JF - SOC COGN AFFECT NEUR

SN - 1749-5016

IS - 6

ER -
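
Similarly, the posterior predictive check that the abstract recommends after model comparison can be sketched in a few lines: draw parameters from the posterior, simulate the task forward, and ask whether a summary statistic of the observed data falls inside the resulting predictive interval. Everything below, from the beta-distributed draws to the observed accuracy of 0.71, is a hypothetical placeholder chosen for illustration, not the procedure or data from the paper.

import numpy as np

rng = np.random.default_rng(1)

def simulate_accuracy(alpha, beta=5.0, n_trials=200, p_reward=(0.7, 0.3)):
    """Two-armed bandit with Rescorla-Wagner learning and a softmax
    choice rule; returns the proportion of choices of the objectively
    better option (option 0). All parameter values are assumptions."""
    V = np.array([0.5, 0.5])
    correct = 0
    for _ in range(n_trials):
        p0 = 1.0 / (1.0 + np.exp(-beta * (V[0] - V[1])))  # softmax for 2 options
        c = 0 if rng.random() < p0 else 1                 # simulated choice
        R = float(rng.random() < p_reward[c])             # binary outcome
        V[c] += alpha * (R - V[c])                        # RW update
        correct += (c == 0)
    return correct / n_trials

# Hypothetical posterior draws of the learning rate; in practice these
# would come from hierarchical Bayesian fitting (e.g. with Stan).
posterior_alpha = rng.beta(2, 5, size=200)

# Posterior predictive distribution of the summary statistic.
ppc = np.array([simulate_accuracy(a) for a in posterior_alpha])

observed_accuracy = 0.71  # placeholder for the empirical statistic
print(f"observed = {observed_accuracy:.2f}, "
      f"predictive 95% interval = [{np.quantile(ppc, 0.025):.2f}, "
      f"{np.quantile(ppc, 0.975):.2f}]")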