ebm | R Documentation

Description

ebm() fits an exemplar-based model.

gcm() fits a generalized context model (aka exemplar model) for discrete responses (Medin & Schaffer, 1978; Nosofsky, 1986).

ebm_j() fits an exemplar-based judgment model for continuous responses (Juslin et al., 2003).
Usage

gcm(formula, class, data, choicerule, fix = NULL, options = NULL,
    similarity = "minkowski", ...)

ebm_j(formula, criterion, data, fix = NULL, options = NULL,
      similarity = "minkowski", ...)

mem(formula, criterion, data, choicerule, options = NULL, ...)

ebm(formula, criterion, data, mode, fix = NULL, options = NULL, ...)
Arguments

formula
    A formula specifying the response and the feature variables in data, e.g. y ~ f1 + f2.

class
    A formula, the variable in data containing the class (category) feedback, e.g. ~ cl.

data
    A data frame, the data to be modeled.

choicerule
    A string, the choice rule. Allowed values include "softmax", "epsilon", and "none" (see Details for the associated parameters).

fix
    (optional) A list with parameter-value pairs of fixed parameters. If missing, all free parameters are estimated. If set to "start", all parameters are fixed to their start values.

options
    (optional) A list; its entries change the modeling procedure.

similarity
    (optional) A string, the similarity function; currently only "minkowski".

...
    Other arguments, ignored.

criterion
    A formula, the variable in data containing the continuous criterion (feedback), e.g. ~ cl.

mode
    (optional) A string, the response mode; can be "discrete" or "continuous".

discount
    A number, how many initial trials to exclude during parameter fitting.
Details

The model can predict new data via predict(m, newdata = ...), which works as follows:

If the criterion or class variable in newdata contains only NAs, the model predicts using the originally supplied data as exemplar memory. Parameters are not re-fit.

If the criterion or class variable in newdata contains values other than NA, the model predicts the first row of newdata using the originally supplied data as exemplars in memory, but predictions for subsequent rows of newdata also use the criterion values in newdata. In other words, the exemplar memory is extended by those exemplars in newdata for which a criterion exists. Parameters are not re-fit.
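The two prediction rules can be sketched as follows. This is an illustrative Python sketch of the logic, not the package's R implementation: a fixed exponential-decay similarity (r = q = 1, no fitted parameters) stands in for the model's similarity function, and None stands in for NA.

```python
import math

def similarity(x, y, lam=1.0):
    # Illustrative stand-in: exponential decay of city-block distance.
    return math.exp(-lam * sum(abs(a - b) for a, b in zip(x, y)))

def predict_trial(memory, probe):
    # Summed similarity to exemplars of each class, turned into p(class = 1).
    s0 = sum(similarity(probe, ex) for ex, cl in memory if cl == 0)
    s1 = sum(similarity(probe, ex) for ex, cl in memory if cl == 1)
    return s1 / (s0 + s1)

def predict_newdata(memory, newdata):
    # Rule 1: every row is predicted from the memory built so far;
    #         the originally supplied data stays the exemplar base.
    # Rule 2: only rows with a known class (not None, i.e. not NA)
    #         extend the memory; parameters are never re-fit here.
    memory = list(memory)
    preds = []
    for features, cl in newdata:
        preds.append(predict_trial(memory, features))
        if cl is not None:
            memory.append((features, cl))
    return preds
```

For example, if newdata contains an unlabeled probe, then a labeled class-0 exemplar, then the same probe again, the second prediction for the probe shifts towards class 0, because the labeled row entered the memory in between.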
The model has the following free parameters, depending on the model specification (see npar()). A model with formula ~ x1 + x2 has these parameters:

x1, x2 (dynamic names) are attention weights; their names correspond to the right-hand side of formula.

lambda is the sensitivity; larger values make the similarity decrease more steeply with distance.

r is the order of the Minkowski distance metric (2 is a Euclidean metric, 1 is a city-block metric).

q is the shape of the relation between similarity and distance, usually equal to r.

In gcm():

b0, b1 (dynamic names) are the biases towards the categories; their names are b plus the unique values of class. For example, b0 is the bias for class = 0.

If choicerule = "softmax": tau is the temperature (choice softness); higher values make choices more equiprobable.

If choicerule = "epsilon": eps is the error proportion; higher values cause more deviations from maximizing.
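Under one common parameterization of the generalized context model, these parameters combine as follows: the attention weights enter a Minkowski distance of order r, lambda and q map distance to similarity, and the biases weight the summed similarities per class. This is an illustrative Python sketch of that math; the package's exact functional form may differ.

```python
import math

def gcm_prob(probe, exemplars, classes, w, lam, r, q, b):
    """p(class = 1 | probe) in a two-class GCM sketch.

    w: attention weights (one per feature, summing to 1)
    lam: sensitivity, r: Minkowski order, q: similarity-distance shape
    b: (b0, b1), the biases for class 0 and class 1
    """
    def sim(x, y):
        # Attention-weighted Minkowski distance of order r ...
        d = sum(wi * abs(a - c) ** r for wi, a, c in zip(w, x, y)) ** (1 / r)
        # ... mapped to similarity; larger lam = steeper decay.
        return math.exp(-lam * d ** q)

    # Summed similarity to the stored exemplars of each class.
    s = [0.0, 0.0]
    for ex, cl in zip(exemplars, classes):
        s[cl] += sim(probe, ex)

    # Bias-weighted similarity ratio (Luce choice rule).
    return b[1] * s[1] / (b[0] * s[0] + b[1] * s[1])
```

With equal biases a probe equidistant from a class-0 and a class-1 exemplar gets p = 0.5; raising b1 shifts the prediction towards class 1 without changing the similarities.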
Regarding NA values in class or criterion: the model treats NA values in the class/criterion variable as trials without feedback, in which a stimulus was shown but no feedback about the class or criterion was given (partial-feedback paradigm). For such trials the model predicts the class or criterion based on the previous exemplars for which feedback was shown, and it ignores these trials when predicting subsequent trials.
Value

Returns a cognitive model object, which is an object of class cm. A model that has been assigned to m can be summarized with summary(m) or anova(m). The parameter space can be viewed using parspace(m), the constraints using constraints(m).
Author(s)

Jana B. Jarecki, jj@janajarecki.com
References

Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207-238. http://dx.doi.org/10.1037//0033-295X.85.3.207

Nosofsky, R. M. (1986). Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115, 39-57. http://dx.doi.org/10.1037/0096-3445.115.1.39

Juslin, P., Olsson, H., & Olsson, A.-C. (2003). Exemplar effects in categorization and multiple-cue judgment. Journal of Experimental Psychology: General, 132, 133-156. http://dx.doi.org/10.1037/0096-3445.132.1.133
See also

Other cognitive models: baseline_const_c(), bayes(), choicerules, cpt, hm1988(), shift(), shortfall, threshold(), utility
Examples

# Make some fake data
D <- data.frame(
  f1 = c(0, 0, 1, 1, 2, 2, 0, 1, 2),       # feature 1
  f2 = c(0, 1, 2, 0, 1, 2, 0, 1, 2),       # feature 2
  cl = c(0, 1, 0, 0, 1, 0, NA, NA, NA),    # criterion/class
  y  = c(0, 0, 0, 1, 1, 1, 0, 1, 1))       # participant's responses

M <- gcm(y ~ f1 + f2, class = ~cl, D,
         fix = "start", choicerule = "none")  # GCM, par. fixed to start values

predict(M)     # predict 'pred_f', pr(cl = 1 | features, trial)
M$predict()    # -- (same) --
summary(M)     # summary
anova(M)       # anova-like table
logLik(M)      # log likelihood
M$logLik()     # -- (same) --
M$MSE()        # mean-squared error
M$npar()       # 7 parameters
M$get_par()    # parameter values
M$coef()       # 0 free parameters

### Specify models -------------------------------------------------
gcm(y ~ f1 + f2, class = ~cl, D, choicerule = "none")      # GCM (has bias parameter)
ebm(y ~ f1 + f2, criterion = ~cl, D, mode = "discrete",
    choicerule = "none")                                   # -- (same) --
ebm_j(y ~ f1 + f2, criterion = ~cl, D)                     # judgment EBM (no bias par.)
ebm(y ~ f1 + f2, criterion = ~cl, D, mode = "continuous")  # -- (same) --

### Specify parameter estimation -----------------------------------
gcm(y ~ f1 + f2, ~cl, D, fix = list(b0 = 0.5, b1 = 0.5),
    choicerule = "none")  # fix 'bias' parameters to 0.5, fit 5 par
gcm(y ~ f1 + f2, ~cl, D, fix = list(f1 = 0.9, f2 = 0.1),
    choicerule = "none")  # fix attention to 90 % on f1, fit 5 par
gcm(y ~ f1 + f2, ~cl, D, fix = list(q = 2, r = 2),
    choicerule = "none")  # fix 'q', 'r' to 2, fit 5 par
gcm(y ~ f1 + f2, ~cl, D, fix = list(q = 1, r = 1),
    choicerule = "none")  # fix 'q', 'r' to 1, fit 5 par
gcm(y ~ f1 + f2, ~cl, D, fix = list(lambda = 2),
    choicerule = "none")  # fix 'lambda' to 2, fit 6 par
gcm(y ~ f1 + f2, ~cl, D, fix = "start",
    choicerule = "none")  # fix all parameters to start values