EMLeastSquaresClassifierSSLR: General Interface for EMLeastSquaresClassifier model


View source: R/EMLeastSquaresClassifier.R

Description

Model from the RSSL package.

An Expectation Maximization like approach to Semi-Supervised Least Squares Classification

As studied in Krijthe & Loog (2016), the method minimizes the joint loss over the labeled and unlabeled objects by finding the weight vector and the labels of the unlabeled objects that together minimize it. The algorithm proceeds similarly to EM, alternating between a weight update and a soft labeling of the unlabeled objects until convergence.

By default (method="block") the weights of the classifier are updated, after which the unknown labels are updated. method="simple" uses LBFGS to perform both updates simultaneously. objective="responsibility" selects the responsibility based objective function of Krijthe & Loog (2016) instead of the label based one; this variant is equivalent to hard-label self-learning.
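As a rough illustration of the block update, the alternating minimization can be sketched as follows. This is an illustrative sketch for a binary problem with 0/1 targets, not the RSSL implementation; the function and variable names are hypothetical, and the design matrices X_l (labeled) and X_u (unlabeled) are assumed to already include an intercept column.

em_ls_sketch <- function(X_l, y_l, X_u, lambda = 0, eps = 1e-9, max_iter = 1000) {
  X <- rbind(X_l, X_u)
  XtX_inv <- solve(crossprod(X) + lambda * diag(ncol(X)))
  # "supervised" initialization: start from the labeled-only least squares fit
  w <- solve(crossprod(X_l) + lambda * diag(ncol(X_l)), crossprod(X_l, y_l))
  loss_old <- Inf
  for (i in seq_len(max_iter)) {
    # Label update: soft labels for the unlabeled objects, clipped to [0, 1]
    u <- pmin(pmax(X_u %*% w, 0), 1)
    # Weight update: least squares against labeled targets and current soft labels
    w <- XtX_inv %*% crossprod(X, c(y_l, u))
    loss <- sum((X %*% w - c(y_l, u))^2)
    if (abs(loss_old - loss) < eps) break   # stopping criterion (eps)
    loss_old <- loss
  }
  w
}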

Usage

EMLeastSquaresClassifierSSLR(
  x_center = FALSE,
  scale = FALSE,
  verbose = FALSE,
  intercept = TRUE,
  lambda = 0,
  eps = 1e-09,
  y_scale = FALSE,
  alpha = 1,
  beta = 1,
  init = "supervised",
  method = "block",
  objective = "label",
  save_all = FALSE,
  max_iter = 1000
)
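
The defaults above can be overridden when constructing the model. For example, a sketch selecting the simultaneous LBFGS update and the responsibility based objective (the argument names are as documented below; the specific values are illustrative):

m <- EMLeastSquaresClassifierSSLR(
  method = "simple",             # joint LBFGS update of weights and labels
  objective = "responsibility",  # hard-label self-learning objective
  lambda = 0.01,                 # L2 regularization (illustrative value)
  max_iter = 500                 # illustrative iteration cap
)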

Arguments

x_center

logical; should the features be centered?

scale

logical; should the features be normalized? (default: FALSE)

verbose

logical; controls the verbosity of the output

intercept

logical; Whether an intercept should be included

lambda

numeric; L2 regularization parameter

eps

numeric; stopping criterion for the minimization

y_scale

logical; whether the target vector should be centered

alpha

numeric; the mixture of the new responsibilities and the old in each iteration of the algorithm (default: 1)

beta

numeric; value between 0 and 1 that determines how much to move to the new solution from the old solution at each step of the block gradient descent

init

character; "random" for random initialization of labels, "supervised" to use the supervised solution as initialization, or a numeric vector of coefficients from which to calculate the initialization

method

character; one of "block", for block gradient descent, or "simple", for LBFGS optimization (default: "block")

objective

character; "responsibility" for hard label self-learning or "label" for soft-label self-learning

save_all

logical; whether to save all classifiers trained during block gradient descent

max_iter

integer; maximum number of iterations

References

Krijthe, J.H. & Loog, M., 2016. Optimistic Semi-supervised Least Squares Classification. In: International Conference on Pattern Recognition (to appear).

Examples

library(tidyverse)
library(tidymodels)
library(caret)
library(SSLR)

data(breast)

set.seed(1)
train.index <- createDataPartition(breast$Class, p = .7, list = FALSE)
train <- breast[train.index, ]
test  <- breast[-train.index, ]

cls <- which(colnames(breast) == "Class")

# Keep labels for 20% of the training data; mark the rest as unlabeled (NA)
labeled.index <- createDataPartition(train$Class, p = .2, list = FALSE)
train[-labeled.index, cls] <- NA


m <- EMLeastSquaresClassifierSSLR() %>% fit(Class ~ ., data = train)

# Accuracy and other metrics
predict(m, test) %>%
  bind_cols(test) %>%
  metrics(truth = Class, estimate = .pred_class)

# Accessing the underlying model from RSSL
model <- m$model
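
For context, the semi-supervised fit can be compared against a purely supervised baseline trained on the labeled rows only. This is an illustrative sketch using parsnip's logistic_reg() (loaded via tidymodels above), not part of the original example:

# Supervised baseline on the labeled subset only (illustrative)
baseline <- logistic_reg() %>%
  set_engine("glm") %>%
  fit(Class ~ ., data = train[labeled.index, ])

predict(baseline, test) %>%
  bind_cols(test) %>%
  metrics(truth = Class, estimate = .pred_class)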
