| fit_p | R Documentation |
Step 3: Optimizing parameters to fit real data

Usage

fit_p(
  estimate,
  data,
  colnames,
  behrule,
  ids = NULL,
  funcs = NULL,
  priors = NULL,
  settings = NULL,
  models,
  lowers,
  uppers,
  control,
  ...
)
Arguments

estimate
    The estimation method to use, see estimate.

data
    A data frame in which each row represents a single trial, see data.

colnames
    Column names in the data frame, see colnames.

behrule
    The agent's implicitly formed internal rule, see behrule.

ids
    The subject ID(s) of the participant(s) whose data are to be fitted.

funcs
    The functions forming the reinforcement learning model, see funcs.

priors
    Prior probability density functions of the free parameters, see priors.

settings
    Other model settings, see settings.

models
    A list of reinforcement learning models to fit.

lowers
    Lower bounds of the free parameters in each model.

uppers
    Upper bounds of the free parameters in each model.

control
    Settings that manage various aspects of the iterative fitting process, see control.

...
    Additional arguments passed to internal functions.
Value

An S3 object of class multiRL.fitting: a list containing, for each model,
the estimated optimal parameters and the associated model-fit metrics.
Examples

# Fit the TD, RSTD, and Utility models to the example data
# by maximum likelihood estimation
fitting.MLE <- multiRL::fit_p(
  estimate = "MLE",
  data = multiRL::TAB,
  colnames = list(
    object = c("L_choice", "R_choice"),
    reward = c("L_reward", "R_reward"),
    action = "Sub_Choose"
  ),
  behrule = list(
    cue = c("A", "B", "C", "D"),
    rsp = c("A", "B", "C", "D")
  ),
  models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
  settings = list(name = c("TD", "RSTD", "Utility")),
  lowers = list(c(0, 0), c(0, 0, 0), c(0, 0, 0)),
  uppers = list(c(1, 5), c(1, 1, 5), c(1, 5, 1)),
  control = list(core = 10, iter = 100)
)
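fit_p's internals are not shown in this documentation, but the general idea behind estimate = "MLE" with lowers/uppers bounds can be illustrated independently of the package. The sketch below (in Python rather than R, using only the standard library; all function names here are hypothetical and not part of multiRL) fits the learning rate of a simple delta-rule (TD) model to simulated two-armed bandit choices by minimizing the negative log-likelihood over a bounded grid:

```python
# Language-agnostic illustration of bounded maximum-likelihood fitting
# for a delta-rule (TD) model. Hypothetical names; not the multiRL API.
import math
import random

def neg_log_lik(alpha, beta, choices, rewards):
    """Negative log-likelihood of observed choices under a 2-armed
    delta-rule model with learning rate alpha and softmax inverse
    temperature beta."""
    q = [0.0, 0.0]
    nll = 0.0
    for c, r in zip(choices, rewards):
        # Softmax probability of the observed choice (max-subtracted
        # for numerical stability).
        z = [beta * v for v in q]
        m = max(z)
        p = math.exp(z[c] - m) / sum(math.exp(v - m) for v in z)
        nll -= math.log(p)
        # Delta-rule update of the chosen option's value.
        q[c] += alpha * (r - q[c])
    return nll

def simulate(alpha, beta, n, rng):
    """Generate choices/rewards from the same model; arm 1 pays off
    with probability 0.8, arm 0 with probability 0.2."""
    q = [0.0, 0.0]
    choices, rewards = [], []
    for _ in range(n):
        z = [beta * v for v in q]
        m = max(z)
        p1 = math.exp(z[1] - m) / (math.exp(z[0] - m) + math.exp(z[1] - m))
        c = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < (0.8 if c == 1 else 0.2) else 0.0
        q[c] += alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)
    return choices, rewards

rng = random.Random(1)
choices, rewards = simulate(alpha=0.3, beta=3.0, n=500, rng=rng)

# Grid search for alpha within the bounds [0, 1], playing the role of
# lowers/uppers; beta is held fixed for simplicity.
grid = [i / 100 for i in range(1, 100)]
best = min(grid, key=lambda a: neg_log_lik(a, 3.0, choices, rewards))
print(best)
```

In fit_p, this search would be repeated for every model in models, with each model's own bounds taken from lowers and uppers, and the resulting optima and fit metrics collected into the returned multiRL.fitting object.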