EM | R Documentation
Use the EM algorithm to maximize the marginal posterior: the probability of the parameters given both the labeled and unlabeled documents and the class labels of the labeled documents.
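As a rough sketch (assuming a multinomial mixture model, the usual setting for this kind of semi-supervised EM, though the exact likelihood used here is not stated above), write pi for the class probabilities, eta for the class-conditional word probabilities, (D_train, C_train) for the labeled documents and their labels, and D_test for the unlabeled documents. The maximized quantity can then be written as:

(\hat{\pi}, \hat{\eta}) = \arg\max_{\pi, \eta} \Big[ \log p(\pi, \eta)
  + \sum_{i \in \text{labeled}} \log \big( \pi_{C_i} \prod_{v} \eta_{v, C_i}^{D^{train}_{iv}} \big)
  + \lambda \sum_{j \in \text{unlabeled}} \log \big( \textstyle\sum_{k=1}^{K} \pi_k \prod_{v} \eta_{v, k}^{D^{test}_{jv}} \big) \Big]

The E step fills in the unobserved class memberships of the unlabeled documents with their posterior probabilities, and the M step re-estimates pi and eta. Reading lambda as a down-weight on the unlabeled documents follows the .lambda argument below and is an interpretation, not a quote from the package.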
Usage

EM(
  .D_train = NULL,
  .C_train = NULL,
  .D_test,
  .n_class = 2,
  .lambda = 0.1,
  .max_iter = 100,
  .alpha = 0.1,
  .lazy_eval = F,
  .counter_on = T,
  .active_iter = NULL,
  .maxactive_iter = NULL,
  .fixed_words = NULL,
  .supervise = T,
  .class_prob = NULL,
  .word_prob = NULL,
  .export_all = F
)
Arguments

.D_train: document-term matrix of the labeled documents.
.C_train: vector of class labels for the labeled documents.
.D_test: document-term matrix of the unlabeled documents.
.n_class: number of classes.
.lambda: vector of document weights.
.max_iter: maximum number of iterations of the EM algorithm.
.alpha: convergence threshold; if the increase in the maximand falls below .alpha, the iterations stop.
.lazy_eval: logical. If TRUE, lazy evaluation is used.
.counter_on: logical. If TRUE, a progress counter is printed during estimation.
.active_iter: integer indicating which iteration of the active-learning loop the EM algorithm is in.
.maxactive_iter: integer giving the maximum number of active-learning iterations allowed.
.fixed_words: matrix of fixed words with class probabilities, where ncol is the number of classes (see the sketch below).
.supervise: TRUE for supervised estimation, FALSE for unsupervised.
.class_prob: required if .supervise == TRUE. Starting values of the class probabilities (logged).
.word_prob: required if .supervise == TRUE. Starting values of the word probabilities (logged).
.export_all: if TRUE, model parameters from every iteration of the EM algorithm are returned; if FALSE, only the results from the last iteration are returned.
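The logged starting values and the .fixed_words matrix have specific shapes that are easy to get wrong. Below is a minimal sketch of plausible constructions for two classes; the words-by-classes orientation, the uniform starting values, and keeping .fixed_words on the probability scale are illustrative assumptions, not requirements stated by the package.

n_class <- 2
vocab <- c("tax", "vote", "war", "trade", "health")

# Starting class probabilities for .class_prob, on the log scale
class_prob <- log(c(0.5, 0.5))

# Starting word probabilities for .word_prob, on the log scale:
# one row per unique word, one column per class (assumed orientation)
word_prob <- log(matrix(1 / length(vocab), nrow = length(vocab), ncol = n_class,
                        dimnames = list(vocab, NULL)))

# A .fixed_words matrix: rows are the fixed words, ncol is the number of classes
fixed_words <- matrix(c(0.9, 0.1,
                        0.2, 0.8),
                      nrow = 2, byrow = TRUE,
                      dimnames = list(c("tax", "war"), NULL))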
Details

The inputs must conform to the following specifications:
- D_train: a matrix of dimension (number of labeled documents) x (number of unique words).
- D_test: a matrix of dimension (number of unlabeled documents) x (number of unique words).
- D_train and D_test must have the same number of columns.
- The elements of D_train and D_test are integers (the count of each unique word appearing in each document).
- C_train: a vector of labels for the labeled documents; its length must equal the number of rows of D_train.
Value

- maximands: a vector with one element per iteration; each element contains the log maximand at that step.
- pi: a vector of log class probabilities (length = 2).
- eta: a matrix of log word probabilities (nrow = the number of unique words, ncol = 2).
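Examples: a minimal, self-contained sketch of a call to EM(). The toy data, the label coding in C_train, the choice of logged starting values, and the assumption that the return value is a named list with the components listed under Value are all illustrative and not taken from the package itself.

set.seed(1)
vocab_size <- 5

# Integer word counts, as required under Details
D_train <- matrix(rpois(4 * vocab_size, lambda = 2), nrow = 4)    # 4 labeled documents
D_test  <- matrix(rpois(10 * vocab_size, lambda = 2), nrow = 10)  # 10 unlabeled documents
C_train <- c(1, 2, 1, 2)  # class labels for the labeled documents (coding assumed)

# Logged starting values, as required when .supervise == TRUE
start_class <- log(c(0.5, 0.5))
start_word  <- log(matrix(1 / vocab_size, nrow = vocab_size, ncol = 2))

res <- EM(
  .D_train    = D_train,
  .C_train    = C_train,
  .D_test     = D_test,
  .n_class    = 2,
  .max_iter   = 50,
  .class_prob = start_class,
  .word_prob  = start_word
)

# Inspect the components listed under Value (assuming a named list is returned)
exp(res$pi)                      # class probabilities
dim(res$eta)                     # log word probabilities, one column per class
plot(res$maximands, type = "b")  # log maximand at each iteration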