| rpl_e | R Documentation |
Step 4: Replaying the experiment with optimal parameters

Usage
rpl_e(
result,
free_params = NULL,
data,
colnames,
behrule,
ids = NULL,
models,
funcs = NULL,
priors = NULL,
settings = NULL,
...
)
Arguments

result
    Result from the preceding fitting or parameter-recovery step (for
    example, the recovery.MLE and fitting.MLE objects used in the
    examples below).

free_params
    To prevent ambiguity regarding the free parameters, their names can
    be defined explicitly by the user.

data
    A data frame in which each row represents a single trial, see data.

colnames
    Column names in the data frame, see colnames.

behrule
    The agent's implicitly formed internal rule, see behrule.

ids
    The subject ID(s) of the participant(s) whose data are to be fitted.

models
    A list of reinforcement learning models to replay.

funcs
    The functions forming the reinforcement learning model, see funcs.

priors
    Prior probability density function of the free parameters, see
    priors.

settings
    Other model settings, see settings.

...
    Additional arguments passed to internal functions.
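The data and colnames arguments together define the trial-level layout: each row of data is one trial, and colnames maps its columns onto the roles of presented objects, rewards, and the chosen action. A minimal base-R sketch of that layout (column values here are made up for illustration; in the examples below the real columns come from multiRL::TAB):

```r
# Hypothetical trial-level data frame mirroring the column layout used
# in the examples below (values are invented for illustration only)
trials <- data.frame(
  L_choice   = c("A", "C"),   # object shown on the left
  R_choice   = c("B", "D"),   # object shown on the right
  L_reward   = c(1, 0),       # reward if the left object is chosen
  R_reward   = c(0, 1),       # reward if the right object is chosen
  Sub_Choose = c("A", "D")    # the action the subject actually took
)

# colnames maps these column names onto the roles rpl_e expects
colnames <- list(
  object = c("L_choice", "R_choice"),
  reward = c("L_reward", "R_reward"),
  action = "Sub_Choose"
)

# Sanity check: every recorded action is one of the presented objects
ok <- all(trials[[colnames$action]] %in% unlist(trials[colnames$object]))
```

This is only a structural sketch; the actual column names are those of the data frame passed in, as long as the colnames list points at them consistently.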
Value

An S3 object of class multiRL.replay: a list containing, for each
subject and each fitted model, the estimated optimal parameters, along
with the resulting multiRL.model and multiRL.summary objects obtained by
replaying the model with those parameters.
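Because the return value is a nested list (subjects at the top level, fitted models below), its pieces can be pulled out with ordinary list indexing. The exact element names are determined by the package; the mock object and field names below are purely hypothetical, to illustrate the access pattern only:

```r
# Mock of the nested layout described above. This is NOT the real
# multiRL.replay object -- field names here are hypothetical.
replay <- list(
  "1" = list(  # one entry per subject ID
    TD   = list(params = c(alpha = 0.32, beta = 2.1)),
    RSTD = list(params = c(alpha_pos = 0.40, alpha_neg = 0.25, beta = 1.8))
  )
)

# Estimated optimal parameters for subject "1" under the TD model
td_alpha <- replay[["1"]][["TD"]]$params[["alpha"]]
```

In practice, str() on the object returned by rpl_e is the quickest way to see its actual element names.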
Examples

# Set up the data, column mapping, and behavioral rule
data = multiRL::TAB
colnames = list(
object = c("L_choice", "R_choice"),
reward = c("L_reward", "R_reward"),
action = "Sub_Choose"
)
behrule = list(
cue = c("A", "B", "C", "D"),
rsp = c("A", "B", "C", "D")
)
replay.recovery <- multiRL::rpl_e(
result = recovery.MLE,
data = data,
colnames = colnames,
behrule = behrule,
models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
settings = list(name = c("TD", "RSTD", "Utility")),
omit = c("data", "funcs")
)
replay.fitting <- multiRL::rpl_e(
result = fitting.MLE,
data = data,
colnames = colnames,
behrule = behrule,
models = list(multiRL::TD, multiRL::RSTD, multiRL::Utility),
settings = list(name = c("TD", "RSTD", "Utility")),
omit = c("funcs")
)