hBayesDM-package: Hierarchical Bayesian Modeling of Decision-Making Tasks


Hierarchical Bayesian Modeling of Decision-Making Tasks

Description

Fit an array of decision-making tasks with computational models in a hierarchical Bayesian framework. hBayesDM can perform hierarchical Bayesian analysis of various computational models with a single line of code. The supported tasks, each followed by its available models, are itemized below; a usage sketch follows the list.

Bandit

2-Armed Bandit (Rescorla-Wagner (delta)) — bandit2arm_delta
4-Armed Bandit with fictive updating + reward/punishment sensitivity (Rescorla-Wagner (delta)) — bandit4arm_4par
4-Armed Bandit with fictive updating + reward/punishment sensitivity + lapse (Rescorla-Wagner (delta)) — bandit4arm_lapse

4-Armed Bandit (modified)

Kalman filter — bandit4arm2_kalman_filter

Cambridge Gambling Task

Cumulative Model — cgt_cm

Choice RT

Drift Diffusion Model — choiceRT_ddm
Drift Diffusion Model for a single subject — choiceRT_ddm_single
Linear Ballistic Accumulator (LBA) model — choiceRT_lba
Linear Ballistic Accumulator (LBA) model for a single subject — choiceRT_lba_single

Choice under Risk and Ambiguity

Exponential model — cra_exp
Linear model — cra_linear

Description-Based Decision Making

Probability Weight Function — dbdm_prob_weight

Delay Discounting

Constant Sensitivity — dd_cs
Constant Sensitivity for a single subject — dd_cs_single
Exponential — dd_exp
Hyperbolic — dd_hyperbolic
Hyperbolic for a single subject — dd_hyperbolic_single

Orthogonalized Go/Nogo

RW + Noise — gng_m1
RW + Noise + Bias — gng_m2
RW + Noise + Bias + Pavlovian Bias — gng_m3
RW(modified) + Noise + Bias + Pavlovian Bias — gng_m4

Iowa Gambling Task

Outcome-Representation Learning — igt_orl
Prospect Valence Learning (PVL) Decay-RI — igt_pvl_decay
Prospect Valence Learning (PVL) Delta — igt_pvl_delta
Value-Plus-Perseverance — igt_vpp

Peer Influence Task

OCU model — peer_ocu

Probabilistic Reversal Learning

Experience-Weighted Attraction — prl_ewa
Fictitious Update — prl_fictitious
Fictitious Update w/o alpha (indecision point) — prl_fictitious_woa
Fictitious Update and multiple blocks per subject — prl_fictitious_multipleB
Reward-Punishment — prl_rp
Reward-Punishment and multiple blocks per subject — prl_rp_multipleB
Fictitious Update with separate learning for Reward-Punishment — prl_fictitious_rp
Fictitious Update with separate learning for Reward-Punishment w/o alpha (indecision point) — prl_fictitious_rp_woa

Probabilistic Selection Task

Q-learning with two learning rates — pst_gainloss_Q

Risk Aversion

Prospect Theory (PT) — ra_prospect
PT without a loss aversion parameter — ra_noLA
PT without a risk aversion parameter — ra_noRA

Risky Decision Task

Happiness model — rdt_happiness

Two-Step Task

Full model (7 parameters) — ts_par7
6 parameter model (without eligibility trace, lambda) — ts_par6
4 parameter model — ts_par4

Ultimatum Game

Ideal Bayesian Observer — ug_bayes
Rescorla-Wagner (delta) — ug_delta
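
Each of the models above is fit with a single function call named after the model. The sketch below uses the orthogonalized go/no-go model gng_m1 with the package's bundled example data; the sampler settings (niter, nwarmup, nchain, ncore) are illustrative values, not recommendations.

library(hBayesDM)

# Fit gng_m1 to the built-in example dataset.
# 'data' can also be a path to a tab-delimited text file of trial-level data.
output <- gng_m1(data    = "example",  # bundled sample data
                 niter   = 2000,       # total MCMC iterations per chain (illustrative)
                 nwarmup = 1000,       # warm-up iterations discarded per chain
                 nchain  = 4,          # number of MCMC chains
                 ncore   = 4)          # CPU cores for running chains in parallel

# Check convergence: Rhat values near 1.0 indicate the chains have mixed.
rhat(output)

# Plot posterior distributions of the group-level parameters.
plot(output)

# Individual-level (per-subject) parameter estimates.
output$allIndPars

When choosing among the models listed above, printFit() can compare fitted model objects by their model-fit indices (e.g., LOOIC).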

Author(s)

Woo-Young Ahn wahn55@snu.ac.kr

Nathaniel Haines haines.175@osu.edu

Lei Zhang bnuzhanglei2008@gmail.com

References

Please cite as: Ahn, W.-Y., Haines, N., & Zhang, L. (2017). Revealing neuro-computational mechanisms of reinforcement learning and decision-making with the hBayesDM package. Computational Psychiatry, 1, 24-57. https://doi.org/10.1162/CPSY_a_00002

See Also

For tutorials and further reading, visit http://rpubs.com/CCSL/hBayesDM.
