This method is a member function of the model object returned by Reco()
that trains a recommender model. It will read from a training data source and
create a model file at the specified location. The model file contains the
information necessary for prediction.
The common usage of this method is

    r$train(train_data, out_model, opts = list())
Arguments

r           Object returned by Reco().

train_data  An object of class "DataSource" that describes the source
            of training data, typically returned by function data_file()
            or data_memory().

out_model   Path to the model file that will be created.

opts        A number of parameters and options for the model training.
            See section Parameters and Options for details.
Parameters and Options

The opts argument is a list that can supply any of the following parameters:

loss      Character string, the loss function. Default is "l2"; see below for details.
dim       Integer, the number of latent factors. Default is 10.
costp_l1  Numeric, L1 regularization parameter for user factors. Default is 0.
costp_l2  Numeric, L2 regularization parameter for user factors. Default is 0.1.
costq_l1  Numeric, L1 regularization parameter for item factors. Default is 0.
costq_l2  Numeric, L2 regularization parameter for item factors. Default is 0.1.
lrate     Numeric, the learning rate, which can be thought of as the step size in gradient descent. Default is 0.1.
niter     Integer, the number of iterations. Default is 20.
nthread   Integer, the number of threads for parallel computing. Default is 1.
nbin      Integer, the number of bins. Must be greater than nthread. Default is 20.
nmf       Logical, whether to perform non-negative matrix factorization. Default is FALSE.
verbose   Logical, whether to show detailed information. Default is TRUE.
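The roles of the number of latent factors, the learning rate, and the L2 regularization parameters can be made concrete with a small sketch. The base-R fragment below performs one stochastic gradient step for the squared-error loss, in the style of the LIBMF algorithm that powers this method; it is an illustration only, not the package's actual implementation, and the variable names (dim, lrate, costp_l2, costq_l2, matching the option names used here) are chosen for readability.

```r
## One illustrative SGD step for a single (user, item, rating) triple.
set.seed(1)
dim      <- 4      # number of latent factors
lrate    <- 0.1    # learning rate (gradient step size)
costp_l2 <- 0.1    # L2 regularization, user factors
costq_l2 <- 0.1    # L2 regularization, item factors

p <- rnorm(dim, sd = 0.1)   # latent factors of one user
q <- rnorm(dim, sd = 0.1)   # latent factors of one item
r_ui <- 4                   # one observed rating

e <- r_ui - sum(p * q)      # prediction error under the squared-error loss
p_new <- p + lrate * (e * q - costp_l2 * p)   # gradient step for user factors
q_new <- q + lrate * (e * p - costq_l2 * q)   # gradient step for item factors

e_new <- r_ui - sum(p_new * q_new)
abs(e_new) < abs(e)         # TRUE: the step moved the prediction toward r_ui
```

Repeating such steps over all observed entries, for the configured number of iterations and possibly across several threads, is essentially what the training loop does.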
The loss option may take the following values:

For real-valued matrix factorization:

"l2"             Squared error (L2-norm)
"l1"             Absolute error (L1-norm)

For binary matrix factorization:

"squared_hinge"  Squared hinge loss

For one-class matrix factorization:

"row_log"        Row-oriented pair-wise logarithmic loss
"col_log"        Column-oriented pair-wise logarithmic loss
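As a quick numeric illustration (plain base R, nothing recosystem-specific), the squared-error loss grows quadratically with the deviation between observed and predicted ratings, while the absolute-error loss grows only linearly:

```r
## Loss values for a single observed/predicted rating pair (4 vs. 3.5)
(4 - 3.5)^2   # squared error (L2-norm): 0.25
abs(4 - 3.5)  # absolute error (L1-norm): 0.5

## A large deviation (4 vs. 1) is penalized far more heavily by the L2 loss
(4 - 1)^2     # 9
abs(4 - 1)    # 3
```

This is why the absolute-error loss is generally more robust to outlier ratings, while the squared-error loss fits the bulk of the data more tightly.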
Author(s)

Yixuan Qiu <http://statr.me>

References

W.-S. Chin, Y. Zhuang, Y.-C. Juan, and C.-J. Lin. A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems. ACM TIST, 2015.

W.-S. Chin, Y. Zhuang, Y.-C. Juan, and C.-J. Lin. A Learning-rate Schedule for Stochastic Gradient Methods to Matrix Factorization. PAKDD, 2015.

W.-S. Chin, B.-W. Yuan, M.-Y. Yang, Y. Zhuang, Y.-C. Juan, and C.-J. Lin. LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems. Technical report, 2015.
Examples

## Training model from a data file
train_set = system.file("dat", "smalltrain.txt", package = "recosystem")
r = Reco()
set.seed(123) # This is a randomized algorithm
r$train(data_file(train_set),
        opts = list(dim = 20, costp_l2 = 0.01,
                    costq_l2 = 0.01, nthread = 1))

## Training model from data in memory
train_df = read.table(train_set, sep = " ", header = FALSE)
set.seed(123)
r$train(data_memory(train_df[, 1], train_df[, 2], train_df[, 3]),
        opts = list(dim = 20, costp_l2 = 0.01,
                    costq_l2 = 0.01, nthread = 1))