R/document_settings.R

#' @title Settings of Model
#' @name settings
#' @description 
#'  
#'  The \code{settings} argument defines the model's name, the estimation 
#'    method, and other configuration options. 
#'    
#' @section Class: 
#' \code{settings [List]} 
#'    
#' @section Slots: 
#' \itemize{
#'    \item \code{name [Character]} 
#'    
#'          The name of the model.
#'          
#'    \item \code{mode [Character]} 
#'    
#'          There are two modes: \code{"fitting"} and \code{"simulating"}. 
#'          In most cases, users do not need to explicitly specify the value 
#'          of this slot, as the program will set it automatically. 
#'          
#'          Typically, the \code{"fitting"} mode is used when executing 
#'          \code{fit_p}, while the \code{"simulating"} mode is used when 
#'          executing \code{rcv_d}; a complementary example appears in the 
#'          Example section below.
#'          
#'    \item \code{estimate [Character]} 
#'    
#'          The package supports four estimation methods: Maximum Likelihood 
#'          Estimation (MLE), Maximum A Posteriori Estimation (MAP), Approximate 
#'          Bayesian Computation (ABC), and Recurrent Neural Network (RNN). 
#'          In general, users no longer need to specify the estimation 
#'          method in the settings object, as this slot has been moved to an 
#'          argument of the main functions, \code{rcv_d} and \code{fit_p}. 
#'          For details, please refer to the documentation for 
#'          \link[multiRL]{estimate}.
#'          
#'    \item \code{policy [Character]} 
#'    
#'          The naming of this slot as \code{policy} is still under 
#'          consideration. 
#'          
#'          Colloquially, \code{policy = "on"} means the agent selects an 
#'          option according to its own estimated choice probabilities and 
#'          then updates the value of the option it chose. 
#'          
#'          Conversely, \code{policy = "off"} means the agent directly mimics 
#'          human behavior, using its estimated probabilities together with 
#'          the human's actual choices only to calculate the likelihood. 
#'          
#'          For details, please refer to the documentation for 
#'          \link[multiRL]{policy}; a brief sketch of the two settings 
#'          follows this list.
#'          
#'    \item \code{system [Character]} 
#'    
#'          In decision-making paradigms, multiple systems may operate jointly 
#'          to influence human decisions. These systems can include 
#'          reinforcement learning, working memory, and even habitual choice 
#'          tendencies.
#'          
#'          If \code{system = "RL"}, the learning process follows the 
#'          Rescorla-Wagner (RW) model using a learning rate less than 1, 
#'          representing a slow, incremental value update system. 
#'          
#'          If \code{system = "WM"}, the process still follows the 
#'          Rescorla-Wagner (RW) model but with a fixed learning rate of 1, 
#'          functioning as a pure memory system that immediately updates an 
#'          option's value. 
#'          
#'          If \code{system = c("RL", "WM")}, the agent maintains two distinct 
#'          Q-tables, one for reinforcement learning (RL) and one for working 
#'          memory (WM), during the decision-making process, integrating their 
#'          values based on the provided \code{weight} to determine the final 
#'          choice (a second sketch follows this list).
#'          
#'          For details, please refer to the documentation for 
#'          \link[multiRL]{system}.
#' }
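#' 
#' The difference between the two \code{policy} settings can be sketched 
#' briefly. The snippet below assumes a softmax choice rule, which is an 
#' illustration rather than the package's actual implementation; see 
#' \link[multiRL]{policy} for the real behavior, and note that all variable 
#' names here are hypothetical. 
#' 
#' \preformatted{ # illustrative sketch, not multiRL code
#'  q <- c(0.2, 0.5)                # current option values
#'  p <- exp(q) / sum(exp(q))       # softmax choice probabilities
#'  
#'  # policy = "on": the agent samples its own choice from p,
#'  # then updates the value of that chosen option
#'  agent_choice <- sample(seq_along(q), size = 1, prob = p)
#'  
#'  # policy = "off": the agent keeps the human's observed choice and
#'  # only scores it, accumulating the log-likelihood
#'  human_choice <- 1
#'  loglik <- log(p[human_choice])
#' }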
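#' 
#' The dual-system case can also be sketched. The snippet below is a minimal 
#' illustration of the update and integration rules described above, not 
#' package code: the Rescorla-Wagner delta rule with a learning rate below 1 
#' for the RL table, a fixed learning rate of 1 for the WM table, and a 
#' linear mixing of the two tables, which is one plausible reading of how 
#' \code{weight} combines them. 
#' 
#' \preformatted{ # illustrative sketch, not multiRL code
#'  rw_update <- function(q, choice, reward, alpha) {
#'    # Rescorla-Wagner delta rule: move the chosen value toward the reward
#'    q[choice] <- q[choice] + alpha * (reward - q[choice])
#'    q
#'  }
#'  
#'  q_rl <- c(0, 0)
#'  q_wm <- c(0, 0)
#'  q_rl <- rw_update(q_rl, choice = 1, reward = 1, alpha = 0.3) # slow RL
#'  q_wm <- rw_update(q_wm, choice = 1, reward = 1, alpha = 1)   # one-shot WM
#'  
#'  weight <- 0.6
#'  q_final <- weight * q_rl + (1 - weight) * q_wm # assumed linear mixing
#' }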
#' 
#' @section Example: 
#' \preformatted{ # model settings
#'  settings = list(
#'    name = "TD",
#'    mode = "fitting",
#'    estimate = "MLE",
#'    policy = "off",
#'    system = "RL"
#'  )
#' }
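#' 
#' As a complement, a settings list for the simulating mode used by 
#' \code{rcv_d} might look like the following. The values are illustrative, 
#' and the estimation method is normally passed as an argument to 
#' \code{rcv_d} and \code{fit_p} rather than through this list 
#' (see \link[multiRL]{estimate}). 
#' 
#' \preformatted{ # model settings (simulation; illustrative)
#'  settings = list(
#'    name = "TD",
#'    mode = "simulating",
#'    policy = "on",
#'    system = "RL"
#'  )
#' }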
#' 
NULL
