Found 14 images.
ID | Name | Collection(s) | Description |
---|---|---|---|
54055 | T_Cor_dPc.nii | Cross-Situational Learning Is Supported by Propose-but-Verify Hypothesis Testing | T statistic for the correlation between learning rate and BOLD. |
563792 | Midfrontal beta oscillations | Biased credit assignment in motivational learning biases arises through prefrontal influences on striatal learning | Parametric regressor: trial-by-trial frontal (t-weighted mean of channels Cz/FCz/Fz) beta oscillations (300–1,250 ms relative to outcome onset, 13–30 Hz) measured using scalp EEG. This beta response is significantly higher for positive outcomes (rewards/no punishments) than for negative outcomes (no rewards/punishments). This regressor is added on top of the GLM2 regressors (featuring 8 outcome x response regressors plus standard prediction errors and the biased-minus-standard prediction-error difference term), yielding GLM3C; see the band-power sketch after this table. Motivational Go/NoGo task (Swart et al., 2017; 2018; van Nuland et al., 2020). |
563791 | Frontal theta/delta oscillations | Biased credit assignment in motivational learning biases arises through prefrontal influences on striatal learning | Parametric regressor: trial-by-trial frontal (wide ROI; t-weighted mean of channels AF3/AF4/AF7/AF8/F1/F2/F3/F4/F5/F6/F7/F8/FC1/FC2/FC3/FC4/FC5/FC6/FCz/Fp1/Fp2/Fpz/Fz) theta/delta oscillations (225–475 ms relative to outcome onset, 1–8 Hz) measured using scalp EEG. This theta response is significantly higher for negative outcomes (no rewards/punishments) than for positive outcomes (rewards/no punishments). This regressor is added on top of the GLM2 regressors (featuring 8 outcome x response regressors plus standard prediction errors and the biased-minus-standard prediction-error difference term), yielding GLM3B. Motivational Go/NoGo task (Swart et al., 2017; 2018; van Nuland et al., 2020). |
54057 | T_Rsa_PbV.nii | Cross-Situational Learning Is Supported by Propose-but-Verify Hypothesis Testing | T statistic for the propose-but-verify RSA. |
42704 | Reward prediction error due to generalization | Generalization of value in reinforcement learning by humans | BOLD correlates of a model-derived generalization of reward prediction error across (yoked) bandits. |
42705 | Reward prediction error | Generalization of value in reinforcement learning by humans | BOLD correlates of the model-derived basic reward prediction error at feedback. |
43803 | Reward phase stimulus reward prediction error / stimulus value | Preference by Association: How Memory Mechanisms in the Hippocampus Bias Decisions | BOLD correlates of the model-derived basic stimulus value / stimulus reward prediction error at stimulus onset. |
42406 | Reward Prediction Error | Behavioural and neural characterization of optimistic reinforcement learning | Reward Prediction Error at outcome onset (positive correlation with RPE) |
43804 | Reward phase outcome reward prediction error | Preference by Association: How Memory Mechanisms in the Hippocampus Bias Decisions | BOLD correlates of the model-derived basic outcome reward prediction error at feedback. |
60316 | Correlations between DLPFC response to instructions and immediate reversal with instructions | Instructed knowledge shapes feedback-driven aversive learning in striatum and orbitofrontal cortex, but not the amygdala | Maps corresponding to Figure 6 & Supplement: Using robust regression, we isolated the regions whose instructed reversals correlate with the magnitude of the DLPFC response to instruction, across individuals. |
43846 | Reward prediction error during learning | Episodic Memory Encoding Interferes with Reward Learning and Decreases Striatal Prediction Errors | BOLD correlates of the model-derived basic reward prediction error at feedback, across the task with incidental pictures and task with no incidental pictures. |
550248 | Standard reward prediction errors | Biased credit assignment in motivational learning biases arises through prefrontal influences on striatal learning | Parametric regressor: reward prediction errors computed with a standard Rescorla-Wagner model. This is GLM1. The GLM contains the following 10 regressors: 1-4) 4 regressors crossing the performed action (Go/NoGo) with the valence of the cue (Win/Avoid). At the time of cue onset. 5) Response hand: +1 for a left-hand response, 0 for no response, -1 for a right-hand response. At the time of cue onset. 6) Incorrect response. At the time of responses. 7) Outcome Onset (any outcome). At the time of outcomes. 8) Standard reward prediction errors computed with the standard Rescorla-Wagner learning model. 9) Difference between biased and standard reward prediction errors, respectively computed with a) a Rescorla-Wagner model assuming an increased learning rate for rewarded Go responses and a decreased learning rate after punished NoGo responses, and b) a standard Rescorla-Wagner learning model. At the time of outcomes. 10) Invalid outcomes (non-instructed key pressed, returning error message). At the time of outcomes. This contrast reflects just the standard reward prediction errors (regressor 8); see the prediction-error sketch after this table. |
550249 | Biased minus standard reward prediction errors | Biased credit assignment in motivational learning biases arises through prefrontal influences on striatal learning | Parametric regressor: difference between biased and standard reward prediction errors (biased minus standard), both computed with Rescorla-Wagner models. This is GLM1. The GLM contains the following 10 regressors: 1-4) 4 regressors crossing the performed action (Go/NoGo) with the valence of the cue (Win/Avoid). At the time of cue onset. 5) Response hand: +1 for a left-hand response, 0 for no response, -1 for a right-hand response. At the time of cue onset. 6) Incorrect response. At the time of responses. 7) Outcome Onset (any outcome). At the time of outcomes. 8) Standard reward prediction errors computed with the standard Rescorla-Wagner learning model. 9) Difference between biased and standard reward prediction errors, respectively computed with a) a Rescorla-Wagner model assuming an increased learning rate for rewarded Go responses and a decreased learning rate after punished NoGo responses, and b) a standard Rescorla-Wagner learning model. At the time of outcomes. 10) Invalid outcomes (non-instructed key pressed, returning error message). At the time of outcomes. This contrast reflects just the difference of biased minus standard reward prediction errors (regressor 9). A conjunction of this contrast and the standard reward prediction error contrast captures regions for which the BOLD signal is significantly better explained by biased than by standard prediction errors (see approach in Wittmann et al., 2006; Daw et al., 2011). |
550239 | Outcome Valence | Biased credit assignment in motivational learning biases arises through prefrontal influences on striatal learning | Positive outcomes (reward/no punishment) minus negative outcomes (no reward/punishment), irrespective of the performed action, at the time people receive an outcome. This is GLM2. The GLM contains the following 13 regressors: 1-8) 8 regressors crossing the performed action (Go/NoGo) with the obtained outcome (reward/no reward = neutral/no punishment = neutral/punishment). At the time of outcomes. 9) Left-hand response. At the time of responses. 10) Right-hand response. At the time of responses. 11) Incorrect response. At the time of responses. 12) Outcome Onset (any outcome). At the time of outcomes. 13) Invalid outcomes (non-instructed key pressed, returning error message). At the time of outcomes. This contrast is based on the first 8 regressors, taking all 4 positive outcome regressors (reward/no punishment) minus all 4 negative outcome regressors (no reward/punishment) (note that all these regressors are time-locked to outcomes). Motivational Go/NoGo task (Swart et al., 2017; 2018; van Nuland et al., 2020). |
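
The EEG parametric regressors above (IDs 563791 and 563792) are, per their descriptions, trial-by-trial band-power values: time-frequency power in a fixed band, averaged over a post-outcome time window, then combined across frontal channels with a t-weighted mean. Below is a minimal NumPy sketch of that final collapse, assuming a precomputed trials × channels × times power array already restricted to the band of interest; the function name, argument names, and weighting details are illustrative assumptions, not taken from the collection.

```python
import numpy as np

def trialwise_band_regressor(power, times, channel_t, t_window):
    """Collapse band-limited power into one value per trial (hypothetical helper).

    power     : array (n_trials, n_channels, n_times), power already averaged
                over the frequency band of interest (e.g. 13-30 Hz for beta)
    times     : time axis in seconds relative to outcome onset
    channel_t : per-channel t-values over the frontal cluster, used as weights
    t_window  : (start, stop) in seconds, e.g. (0.300, 1.250)
    """
    in_window = (times >= t_window[0]) & (times <= t_window[1])
    # mean power inside the post-outcome window, per trial and channel
    window_power = power[:, :, in_window].mean(axis=-1)
    # t-weighted mean across the channel cluster -> one value per trial
    weights = channel_t / channel_t.sum()
    return window_power @ weights
```

The resulting vector is the kind of trial-by-trial value that would then be entered as a parametric modulator on top of the GLM2 regressors.

The GLM1 descriptions (IDs 550248 and 550249) contrast prediction errors from a standard Rescorla-Wagner model with a biased variant that uses a higher learning rate after rewarded Go responses and a lower one after punished NoGo responses. Here is a minimal sketch of how the two prediction-error series and their difference regressor could be computed for a single cue; `alpha`, `kappa`, and the toy data are illustrative assumptions, not parameter values from the study.

```python
import numpy as np

def rescorla_wagner_pes(rewards, actions, alpha=0.3, kappa=0.0):
    """Trial-by-trial prediction errors for one cue (hypothetical helper).

    rewards : outcome per trial (+1 reward, 0 neutral, -1 punishment)
    actions : response per trial ("go" or "nogo")
    alpha   : baseline learning rate
    kappa   : learning-rate bias; updates use alpha + kappa after rewarded Go
              trials and alpha - kappa after punished NoGo trials
              (kappa = 0 reduces to the standard model)
    """
    value, pes = 0.0, []
    for r, a in zip(rewards, actions):
        pe = r - value                      # prediction error at outcome
        pes.append(pe)
        lr = alpha
        if a == "go" and r > 0:             # rewarded Go: boosted update
            lr = alpha + kappa
        elif a == "nogo" and r < 0:         # punished NoGo: dampened update
            lr = alpha - kappa
        value += lr * pe                    # Rescorla-Wagner value update
    return np.array(pes)

# Regressor 8 (standard PEs) and regressor 9 (biased minus standard PEs).
rewards = [1, 0, 1, -1, 0, 1]
actions = ["go", "go", "nogo", "nogo", "go", "go"]
standard_pe = rescorla_wagner_pes(rewards, actions, kappa=0.0)
biased_pe = rescorla_wagner_pes(rewards, actions, kappa=0.2)
difference_regressor = biased_pe - standard_pe
```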