A constellation of functions that work with the catlearn package to facilitate the broad modeling goals of the Learning and Representation in Cognition Laboratory.
This package is under active development.
The package can be installed using these commands:

```r
install.packages("devtools")
devtools::install_github("ghonk/catlearn.suppls")
```
`catlearn.suppls` is intended to be used as a supplement to the `catlearn` package. It facilitates modeling with the DIVergent Autoencoder architecture (`slpDIVA()`) and features a suite of functions under development for active research projects.
First load the packages.

```r
library(catlearn)
library(catlearn.suppls)
```
Now load some data from `get_test_inputs()` and assign variables for the properties of the data. The package includes a set of classic category structures; we'll use a 4-D family resemblance + unidimensional rule category structure for this demo.
```r
# get some sample data
test_inputs <- get_test_inputs('unifr')

# set inputs and class labels
ins  <- test_inputs$ins
labs <- test_inputs$labels

# get number of categories and features
nfeats <- dim(ins)[2]
ncats  <- length(unique(labs))
```
Next, set up the model parameters. The design pattern for `catlearn` is called a stateful list processor: we provide the model with a list that contains all of the model's hyperparameters, and it returns a similar list that includes the model's learned parameters (weights).
```r
# construct a state list
st <- list(learning_rate = 0.25,
           num_feats     = nfeats,
           num_hids      = 3,
           num_cats      = ncats,
           beta_val      = 0,
           phi           = 1,
           continuous    = FALSE,
           in_wts        = NULL,
           out_wts       = NULL,
           wts_range     = 1,
           wts_center    = 0,
           colskip       = 4)
```
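To make the stateful-list-processor idea concrete, here is a minimal toy sketch of the pattern, independent of `catlearn`. The `step()` function and its fields (`w`, `lr`, `steps`) are invented for illustration only: the "model" is just a function that takes a state list and returns an updated copy of that same list.

```r
# toy stateful list processor (illustrative only, not part of catlearn):
# the model is a function from a state list to an updated state list
step <- function(st, x) {
  st$w     <- st$w + st$lr * (x - st$w)  # toy learning-rule update
  st$steps <- st$steps + 1               # bookkeeping lives in the state too
  st
}

st <- list(lr = 0.5, w = 0, steps = 0)   # hyperparameters + initial weights
for (x in c(1, 1, 1)) st <- step(st, x)  # each call returns the next state
st$w      # learned weight after three updates (0.875)
st$steps  # 3
```

`slpDIVA()` follows the same contract at larger scale: the `st` list above plays the role of the initial state, and the returned list carries the learned weights.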
We then use the `catlearn.suppls` package to create a training matrix, `tr`.
```r
# tr_init makes the empty training matrix
tr <- tr_init(nfeats, ncats)

# tr_add fills in the data and procedure (i.e., training, test, model reset)
tr <- tr_add(inputs  = ins,
             tr      = tr,
             labels  = labs,
             blocks  = 12,
             ctrl    = 0,
             shuffle = TRUE,
             reset   = TRUE)
```
Finally, we run the model with our state list `st` and training matrix `tr`.
```r
diva_model <- slpDIVA(st, tr)
```
To examine performance we can match the output (response probabilities) to the training matrix `tr`.
```r
# name the output columns
colnames(diva_model$out) <- paste0('o', 1:dim(diva_model$out)[2])

# combine the inputs and outputs (if desired)
trn_result <- cbind(tr, round(diva_model$out, 4))
tail(trn_result)
```
Use `response_probs` to extract the response probabilities for the target categories, either at every training step (trial) or averaged across blocks.
```r
response_probs(tr, diva_model$out, blocks = TRUE)
```
`plot_training` is a simple function used to plot the learning curves of one or more models.
```r
# run a few models
model_list <- list(slpDIVA(st, tr), slpDIVA(st, tr), slpDIVA(st, tr))

# get response probabilities
model_list_resp_probs <- lapply(model_list, function(x) {
  response_probs(tr, x$out, blocks = TRUE)
})

# plot the learning curves
plot_training(model_list_resp_probs)
```