Description
Democratic Co-Learning is a semi-supervised learning algorithm with a
co-training style. It trains N classifiers, one per learning scheme
defined in the learners list. During the iterative process, the multiple
classifiers with different inductive biases label data for each other.
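The loop below is a minimal, illustrative sketch of that idea, not the SSLR implementation: it uses rpart and k-nearest neighbours as two stand-in schemes with different inductive biases, and it omits the confidence-weighted acceptance test the full algorithm applies before adopting a label.

library(rpart)  # decision tree: one inductive bias
library(class)  # k-nearest neighbours: a different inductive bias

data(iris)
set.seed(1)
y <- iris$Species
labeled <- sample(nrow(iris), 30)
y[-labeled] <- NA                      # treat the rest as unlabeled

repeat {
  labd <- which(!is.na(y))
  unl  <- which(is.na(y))
  if (length(unl) == 0) break
  # Train each scheme on the current labeled set
  d      <- data.frame(iris[, 1:4], y = y)
  tree   <- rpart(y ~ ., data = d[labd, ])
  p_tree <- predict(tree, d[unl, ], type = "class")
  p_knn  <- knn(iris[labd, 1:4], iris[unl, 1:4], cl = y[labd], k = 5)
  # The schemes label data for each other: adopt the labels they agree on
  agree <- which(as.character(p_tree) == as.character(p_knn))
  if (length(agree) == 0) break        # no changes in a full iteration: stop
  y[unl[agree]] <- as.character(p_tree)[agree]
}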
Usage

democratic(learners, schemes = NULL)
Arguments

learners: List of models from the parsnip package used to train the supervised base classifiers on a set of instances. Each model must be able to produce probability predictions, as illustrated below.

schemes: List of schemes (the x column names each learner trains on). The default is NULL, meaning every learner uses all x columns.
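A quick way to check that a candidate learner yields the class probabilities the base classifiers need (a sketch assuming the ranger engine is installed; any classification engine with probability support qualifies):

library(tidymodels)
rf <- rand_forest(mode = "classification") %>% set_engine("ranger")
fit_rf <- fit(rf, Species ~ ., data = iris)
predict(fit_rf, iris[1:3, ], type = "prob")  # should return .pred_* columns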
Details

This method trains an ensemble of diverse classifiers. To promote the
initial diversity, the classifiers must represent different learning schemes.
When x.inst is FALSE, all learners defined must be able to learn a classifier
from the precomputed matrix in x.
The iterative process ends when no model changes during a complete iteration.
The final hypothesis is produced via weighted majority voting.
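A rough sketch of that vote, with made-up weights standing in for the confidence values the algorithm derives (the real combiner also filters classifiers by confidence before voting):

preds <- list(
  factor(c("a", "b", "b"), levels = c("a", "b")),  # classifier 1
  factor(c("a", "a", "b"), levels = c("a", "b")),  # classifier 2
  factor(c("b", "a", "b"), levels = c("a", "b"))   # classifier 3
)
w <- c(0.9, 0.7, 0.4)  # hypothetical per-classifier weights

# Per instance, sum the weight of every classifier voting for each class
votes <- sapply(levels(preds[[1]]), function(cl)
  Reduce(`+`, Map(function(p, wi) wi * (p == cl), preds, w)))
factor(colnames(votes)[max.col(votes)], levels = levels(preds[[1]]))
#> [1] a a b
#> Levels: a b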
Value

(When model fit) A list object of class "democratic" containing:

W: A vector with the confidence-weighted vote assigned to each classifier.

model: A list with the final N base classifiers trained using the enlarged labeled set.

model.index: List of N vectors of indexes related to the training instances
used per each classifier. These indexes are relative to the y argument.

instances.index: The indexes of all training instances used to train the N
models. These indexes include the initial labeled instances and the newly
labeled instances. These indexes are relative to the y argument.

model.index.map: List of N vectors with the same information as model.index,
but with indexes relative to the instances.index vector.

classes: The levels of the y factor.

preds: The functions provided in the preds argument.

preds.pars: The set of lists provided in the preds.pars argument.

x.inst: The value provided in the x.inst argument.
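Continuing from the fitted model m in the Examples below, str() shows which of these components the returned object actually exposes; the slot path used here is an assumption to verify, not a documented accessor:

str(m$model, max.level = 1)  # assumed: the list above lives under m$model
m$model$W                    # assumed name: confidence-weighted votes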
Examples

library(tidyverse)
library(tidymodels)
library(caret)
library(SSLR)
data(wine)
set.seed(1)
train.index <- createDataPartition(wine$Wine, p = .7, list = FALSE)
train <- wine[train.index, ]
test  <- wine[-train.index, ]
cls <- which(colnames(wine) == "Wine")
# Keep labels for 20% of the training instances; unlabel the rest
labeled.index <- createDataPartition(wine$Wine, p = .2, list = FALSE)
train[-labeled.index, cls] <- NA
# We need models with probability predictions from parsnip
# https://tidymodels.github.io/parsnip/articles/articles/Models.html
# They must be defined with mode = "classification"
rf <- rand_forest(trees = 100, mode = "classification") %>%
set_engine("randomForest")
bt <- boost_tree(trees = 100, mode = "classification") %>%
set_engine("C5.0")
m <- democratic(learners = list(rf, bt)) %>% fit(Wine ~ ., data = train)
#Accuracy
predict(m, test) %>%
bind_cols(test) %>%
metrics(truth = "Wine", estimate = .pred_class)
#With schemes
set.seed(1)
m <- democratic(learners = list(rf, bt),
                schemes = list(c("Malic.Acid", "Ash"), c("Magnesium", "Proline"))) %>%
fit(Wine ~ ., data = train)
#Accuracy
predict(m, test) %>%
bind_cols(test) %>%
metrics(truth = "Wine", estimate = .pred_class)