bayes (R Documentation)
bayes() fits a Bayesian cognitive model, updating beliefs about the probability of discrete event outcomes based on the frequencies of outcomes.
bayes_beta_c() fits a model for 2 outcomes (beta-binomial) for continuous responses.
bayes_beta_d() fits a model for 2 outcomes (beta-binomial) for discrete responses.
bayes_dirichlet_c() fits a model for n > 2 outcomes (Dirichlet-categorical/multinomial) for continuous responses.
bayes_dirichlet_d() fits a model for n > 2 outcomes (Dirichlet-categorical/multinomial) for discrete responses.
bayes_beta_c(
formula,
data,
fix = NULL,
format = c("raw", "count", "cumulative"),
prior_sum = NULL,
...
)
bayes_beta_d(formula, data, fix = NULL, format = NULL, prior_sum = NULL, ...)
bayes_dirichlet_d(
formula,
data,
fix = NULL,
format = NULL,
prior_sum = NULL,
...
)
bayes_dirichlet_c(
formula,
data,
fix = NULL,
format = NULL,
prior_sum = NULL,
...
)
bayes(
formula,
data = data.frame(),
fix = list(),
format = c("raw", "count", "cumulative"),
type = NULL,
discount = 0L,
options = list(),
prior_sum = NULL,
...
)
formula: A formula specifying the response and the event variables to be modeled, e.g., y ~ x1 + x2.

data: A data frame, the data to be modeled.

fix: (optional) A list with parameter-value pairs of fixed parameters. If missing, all free parameters are estimated. If set to "start", all parameters are fixed to their start values.

format: (optional) A string, the format of the data to be modeled; one of "raw", "count", or "cumulative", can be abbreviated; the default is "raw".

prior_sum: (optional) A number; the prior hyperparameters will be constrained to sum to this number; defaults to the number of prior parameters.

...: Other arguments, ignored.

type: (optional) A string, the type of inference.

discount: A number, the number of initial trials to ignore during parameter fitting.

options: (optional) A list; its entries change the modeling procedure.
The model predicts, as the response, the belief about the occurrence of the first event in the formula:
y ~ x1 models the belief in event x1 occurring versus it not occurring.
y ~ x1 + x2 models beliefs about x1 versus x2 occurring.
y ~ x1 + x2 + x3 models beliefs about x1, x2, and x3 occurring.
The model has n + 1 free parameters (n = number of events), which are:
delta is the learning rate; it weights the observations during learning. Values < 1 cause conservative learning, values > 1 cause liberal learning, and 1 corresponds to optimal Bayesian updating.
x1, x2, ... (dynamic names) are the prior parameters; their names correspond to the right-hand side of formula. They are the hyperparameters of the prior belief distribution before trial 1. If they are constrained to sum to n (see prior_sum), only n - 1 of them are estimated.
In bayes_beta_d() or bayes_dirichlet_d(): if choicerule = "softmax", tau is the temperature or choice softness; higher values make choices more equiprobable. If choicerule = "epsilon", eps is the error proportion; higher values cause more deviations from maximizing.
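The updating scheme described above (prior hyperparameters incremented by delta-weighted outcome counts, with the posterior mean as the predicted belief) can be sketched as follows. This is a minimal illustration in Python, not the package's implementation; in particular, predicting the belief from the evidence observed before each trial is an assumption for illustration.

```python
# Sketch of delta-weighted beta-binomial belief updating (illustration
# only; not the cognitivemodels implementation).

def bayes_beta_mean(trials, a=1.0, b=1.0, delta=1.0):
    """Posterior-mean belief in event A before each trial.

    trials: sequence of (count_a, count_b) observations per trial.
    a, b:   prior hyperparameters (belief before trial 1).
    delta:  learning rate; < 1 conservative, > 1 liberal, 1 Bayesian.
    """
    beliefs = []
    for count_a, count_b in trials:
        beliefs.append(a / (a + b))  # belief given evidence so far
        a += delta * count_a         # evidence is weighted by delta
        b += delta * count_b
    return beliefs

# Event sequence matching the example data D: a = (0,0,1,1,1), b = (1,1,0,0,0)
trials = [(0, 1), (0, 1), (1, 0), (1, 0), (1, 0)]
print(bayes_beta_mean(trials))           # uniform prior, delta = 1
print(bayes_beta_mean(trials, delta=0))  # delta = 0: no learning, belief stays 0.5
```

With delta = 1 and a uniform prior (a = b = 1), the belief starts at 0.5, drops while event b dominates, and recovers as event a accumulates; with delta = 0 the belief never moves, matching the "no learning" interpretation above.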
Returns a cognitive model object, which is an object of class cm. A model that has been assigned to m can be summarized with summary(m) or anova(m). The parameter space can be viewed using parspace(m); constraints can be viewed using constraints(m).
Markus Steiner
Jana B. Jarecki, jj@janajarecki.com
Griffiths, T. L., & Yuille, A. (2008). Technical Introduction: A primer on probabilistic inference. In N. Chater & M. Oaksford (Eds.), The Probabilistic Mind: Prospects for Bayesian Cognitive Science (pp. 1 - 2). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199216093.003.0002
Tauber, S., Navarro, D. J., Perfors, A., & Steyvers, M. (2017). Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory. Psychological Review, 124(4), 410 - 441. http://dx.doi.org/10.1037/rev0000052
Other cognitive models:
baseline_const_c(),
choicerules,
cpt,
ebm(),
hm1988(),
shift(),
shortfall,
threshold(),
utility
D <- data.frame(
a = c(0,0,1,1,1), # event A, e.g. coin toss "heads"
b = c(1,1,0,0,0), # event B, complement of A
y = c(0.5,0.3,0.2,0.3,0.5)) # participants' beliefs about A
M <- bayes_beta_c(
formula = y ~ a + b,
data = D) # fit all parameters
predict(M) # predict posterior means
summary(M) # summarize model
parspace(M) # view parameter space
anova(M) # anova-like table
logLik(M) # loglikelihood
MSE(M) # mean-squared error
# Predictions ----------------------------------------------
predict(M, type = "mean") # posterior mean
predict(M, type = "max") # maximum posterior
predict(M, type = "sd") # posterior SD
predict(M, type = "posteriorpar") # posterior hyper-par.
predict(M, type = "draws", ndraws = 3) # --"-- 3 draws
# Fix parameter ---------------------------------------------
bayes_beta_c(~a+b, D, list(delta=1, priorpar=c(1, 1))) # delta=1, uniform prior
bayes_beta_c(~a+b, D, list(delta=1, a=1, b=1)) # -- (same) --
bayes_beta_c(~a+b, D, fix = "start") # fix to start values
# Parameter fitting ----------------------------------------
# Use a response variable, y, to which we fit parameters
bayes(y ~ a + b, D, fix = "start") # "start" fixes all par., fit none
bayes(y ~ a + b, D, fix = list(delta=1)) # fix delta, fit priors
bayes(y ~ a + b, D, fix = list(a=1, b=1)) # fix priors, fit delta
bayes(y ~ a + b, D, fix = list(delta=1, a=1)) # fix delta & prior on "a"
bayes(y ~ a + b, D, list(delta=1, b=1)) # fix delta & prior on "b"
# Parameter meanings ---------------------------------------
# delta parameter: the learning rate or evidence weight
bayes(y ~ a + b, D, c(delta = 0)) # 0 -> no learning
bayes(y ~ a + b, D, c(delta = 0.1)) # 0.1 -> slow learning
bayes(y ~ a + b, D, c(delta = 9)) # 9 -> fast learning
bayes(y ~ a + b, D, c(a=1.5, b=0.5)) # prior: a more likely
bayes(y ~ a + b, D, list(priorpar=c(1.5, 0.5))) # -- (same) --
bayes(y ~ a + b, D, c(a = 0.1, b=1.9)) # prior: b more likely
bayes(y ~ a + b, D, list(priorpar = c(0.1, 1.9))) # -- (same) --