
A simple tutorial for modeling behavior from a classic reinforcement learning paradigm using the bandit2arm package

The goal of bandit2arm is to show users how to simulate behavioral data from the two-armed bandit task and then estimate the underlying behavioral parameters from those simulated data.

Installation

You can install the released version of bandit2arm from CRAN with:

install.packages("bandit2arm")

And the development version from GitHub with:

# install.packages("devtools")
devtools::install_github("psuthaharan/bandit2arm")

Simulation

Simulate behavior:

library(bandit2arm)
# simulate 100 individuals completing the task
# dataset1 keeps only the selected-choice trials
dataset1 <- simulate_bandit2arm(n_subj = 100, n_tr = 200, trials.unique = TRUE)

# dataset2 keeps all trials (both selected- and non-selected-choice)
dataset2 <- simulate_bandit2arm(n_subj = 100, n_tr = 200, trials.unique = FALSE)
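To build intuition for what a simulator like this produces, here is a minimal, self-contained sketch of a common generative model for the two-armed bandit task: a Rescorla-Wagner learner with a learning rate and a softmax inverse temperature. The function and parameter names (`simulate_rw_subject`, `alpha`, `beta`, `p_reward`) are illustrative assumptions, not part of the bandit2arm API, and the package's internal model may differ.

```r
# Hypothetical sketch of a two-armed bandit simulator (not the package's code):
# a Rescorla-Wagner learner with learning rate `alpha` and inverse
# temperature `beta`, choosing between two arms with fixed reward probabilities.
simulate_rw_subject <- function(n_tr = 200, alpha = 0.3, beta = 3,
                                p_reward = c(0.7, 0.3)) {
  Q <- c(0, 0)                       # expected value of each arm
  choice <- reward <- integer(n_tr)
  for (t in seq_len(n_tr)) {
    # softmax probability of choosing arm 1 over arm 2
    p1 <- 1 / (1 + exp(-beta * (Q[1] - Q[2])))
    choice[t] <- ifelse(runif(1) < p1, 1, 2)
    reward[t] <- rbinom(1, 1, p_reward[choice[t]])
    # prediction-error update for the chosen arm only
    Q[choice[t]] <- Q[choice[t]] + alpha * (reward[t] - Q[choice[t]])
  }
  data.frame(trial = seq_len(n_tr), choice = choice, reward = reward)
}

set.seed(1)
sim <- simulate_rw_subject()
head(sim)
```

With a higher reward probability on arm 1, a learner like this gradually prefers that arm; the learning rate controls how quickly values update and the inverse temperature controls choice determinism.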

Visualization

Plot behavior:

# For plotting purposes, use dataset2 (it retains all trials)
# Select a participant at random and observe their behavior

# Participant 100
# View the first 10 rows of data
head(dataset2$bandit2arm[[100]], 10)


# Visualize behavior 
plot_bandit2arm(data = dataset2, subj = 100, colors = c("orange","purple"))

# Visualize behavior - animated 
plot_bandit2arm(data = dataset2, subj = 100, colors = c("orange","purple"), plot.type = "animate")

Estimation

Estimate behavioral parameters from simulated data:

# Run maximum likelihood estimation (MLE)
estimate_bandit2arm(data = dataset1, method = "mle", plot = TRUE)

# Run maximum a posteriori (MAP) estimation
estimate_bandit2arm(data = dataset1, method = "map", plot = TRUE)

# Run EML estimation
estimate_bandit2arm(data = dataset1, method = "eml", plot = TRUE)
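To illustrate what an estimator such as `method = "mle"` might do under the hood, the following self-contained sketch simulates one Rescorla-Wagner subject and then recovers its parameters by minimizing the negative log-likelihood of the choices with base R's `optim()`. The model and the parameter names (`alpha`, `beta`) are assumptions for illustration; bandit2arm's internal model and estimators may differ.

```r
# Illustrative maximum-likelihood fit of a Rescorla-Wagner model
# (assumed parameterization, not the package's internal code).
rw_neg_log_lik <- function(par, choice, reward) {
  alpha <- plogis(par[1])   # constrain learning rate to (0, 1)
  beta  <- exp(par[2])      # constrain inverse temperature to (0, Inf)
  Q <- c(0, 0); nll <- 0
  for (t in seq_along(choice)) {
    p1  <- 1 / (1 + exp(-beta * (Q[1] - Q[2])))      # softmax, two arms
    nll <- nll - log(ifelse(choice[t] == 1, p1, 1 - p1))
    Q[choice[t]] <- Q[choice[t]] + alpha * (reward[t] - Q[choice[t]])
  }
  nll
}

# Simulate one subject with known parameters, then try to recover them
set.seed(42)
alpha_true <- 0.3; beta_true <- 3; n_tr <- 200
Q <- c(0, 0); choice <- reward <- integer(n_tr)
for (t in seq_len(n_tr)) {
  p1 <- 1 / (1 + exp(-beta_true * (Q[1] - Q[2])))
  choice[t] <- ifelse(runif(1) < p1, 1, 2)
  reward[t] <- rbinom(1, 1, c(0.7, 0.3)[choice[t]])
  Q[choice[t]] <- Q[choice[t]] + alpha_true * (reward[t] - Q[choice[t]])
}

fit <- optim(c(0, 0), rw_neg_log_lik, choice = choice, reward = reward)
estimates <- c(alpha = plogis(fit$par[1]), beta = exp(fit$par[2]))
estimates
```

MAP estimation follows the same recipe but adds a log-prior over the parameters to the objective; recovered values will be close to, but not exactly, the true parameters because only 200 trials are observed.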


psuthaharan/bandit2arm documentation built on Jan. 26, 2021, 1:36 a.m.