multiRL: Reinforcement Learning Tools for Multi-Armed Bandit

A flexible general-purpose toolbox for implementing Rescorla-Wagner models in multi-armed bandit tasks. As the successor and functional extension of the 'binaryRL' package, 'multiRL' modularizes the Markov Decision Process (MDP) into six core components. This framework enables users to construct custom models via intuitive if-else syntax and define latent learning rules for agents. For parameter estimation, it provides both likelihood-based inference (MLE and MAP) and simulation-based inference (ABC and RNN), with full support for parallel processing across subjects. The workflow is highly standardized, featuring four main functions that strictly follow the four-step protocol (and ten rules) proposed by Wilson & Collins (2019) <doi:10.7554/eLife.49547>. Beyond the three built-in models (TD, RSTD, and Utility), users can easily derive new variants by declaring which variables are treated as free parameters.
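The core learning rule behind all three built-in models is the Rescorla-Wagner delta update. As a minimal sketch (illustrative only — this does not use multiRL's actual API; the function name `rw_update` and learning rate `eta` are hypothetical), a two-armed bandit value update looks like:

```r
# Illustrative sketch of the Rescorla-Wagner (TD) value update for a
# two-armed bandit; not multiRL's actual interface.
rw_update <- function(V, choice, reward, eta = 0.3) {
  # Prediction error: observed reward minus current value estimate
  delta <- reward - V[choice]
  # Move the chosen arm's value toward the reward by the learning rate eta
  V[choice] <- V[choice] + eta * delta
  V
}

# Start with neutral values, then observe a reward of 1 on arm 1
V <- c(0, 0)
V <- rw_update(V, choice = 1, reward = 1)
# Only the chosen arm's value changes: V is now c(0.3, 0)
```

Model variants such as RSTD typically split `eta` into separate rates for positive and negative prediction errors, which is the kind of variable the package lets users declare as a free parameter.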

Package details

Author: YuKi [aut, cre] (ORCID: <https://orcid.org/0009-0000-1378-1318>), Xinyu [aut] (ORCID: <https://orcid.org/0009-0004-4974-9191>)
Maintainer: YuKi <hmz1969a@gmail.com>
License: GPL-3
Version: 0.3.7
URL: https://yuki-961004.github.io/multiRL/
Package repository: CRAN
Installation Install the latest version of this package by entering the following in R:
install.packages("multiRL")


multiRL documentation built on March 31, 2026, 5:06 p.m.