Man pages for multiRL
Reinforcement Learning Tools for Multi-Armed Bandit

algorithm - Algorithm Packages (MLE, MAP)
behrule - Behavior Rules
colnames - Column Names
control - Controls of Estimation Methods
data - Dataset Structure
engine_ABC - The Engine of Approximate Bayesian Computation (ABC)
engine_RNN - The Engine of Recurrent Neural Network (RNN)
estimate - Estimate Methods
estimate_0_ENV - Tool for Generating an Environment for Models
estimate_1_LBI - Likelihood-Based Inference (LBI)
estimate_1_MAP - Estimation Method: Maximum A Posteriori (MAP)
estimate_1_MLE - Estimation Method: Maximum Likelihood Estimation (MLE)
estimate_2_ABC - Estimation Method: Approximate Bayesian Computation (ABC)
estimate_2_RNN - Estimation Method: Recurrent Neural Network (RNN)
estimate_2_SBI - Simulation-Based Inference (SBI)
estimation_methods - Estimate Methods
fit_p - Step 3: Optimizing parameters to fit real data
func_alpha - Function: Learning Rate
func_beta - Function: Probability
func_delta - Function: Bias
func_epsilon - Function: Exploration or Exploitation
func_gamma - Function: Utility
funcs - Core Functions
func_zeta - Function: Decay Rate
layer - Layers and Loss Functions (RNN)
MAB - Simulated Multi-Arm Bandit Dataset
multiRL-package - multiRL: Reinforcement Learning Tools for Multi-Armed Bandit
params - Model Parameters
plot.multiRL.replay - plot.multiRL.replay
policy - Policy of Agent
priors - Density and Random Function
process_1_input - multiRL.input
process_2_behrule - multiRL.behrule
process_3_record - multiRL.record
process_4_output_cpp - multiRL.output
process_4_output_r - multiRL.output
process_5_metric - multiRL.metric
rcv_d - Step 2: Generating fake data for parameter and model recovery
reduction - Dimension Reduction Methods (ABC)
rpl_e - Step 4: Replaying the experiment with optimal parameters
RSTD - Risk Sensitive Model
run_m - Step 1: Building reinforcement learning model (see the workflow sketch after this index)
settings - Settings of Model
summary-multiRL.model-method - summary
system - Cognitive Processing System
TAB - Group 2 from Mason et al. (2024)
TD - Temporal Differences Model
Utility - Utility Model
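
The Step 1 through Step 4 topics above (run_m, rcv_d, fit_p, rpl_e) outline the package's intended workflow. The sketch below is not taken from the package documentation: the function and dataset names are the ones listed in this index, but every argument shown is an assumption made for illustration; consult the individual man pages for the actual signatures.

    library(multiRL)

    ## Load the bundled simulated multi-arm bandit data (see ?MAB)
    data(MAB)

    ## Step 1: build a reinforcement learning model (e.g. the TD model);
    ## the arguments below are guesses, not the documented signature
    model <- run_m(data = MAB, model = "TD")

    ## Step 2: generate fake data for parameter and model recovery
    recovery <- rcv_d(model)

    ## Step 3: optimize parameters to fit the real data (e.g. via MLE)
    fit <- fit_p(data = MAB, model = "TD", estimate = "MLE")

    ## Step 4: replay the experiment with the fitted parameters
    replay <- rpl_e(fit)
    plot(replay)      # assumed to dispatch to plot.multiRL.replay
    summary(fit)      # see the summary method for multiRL.model objects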