splendid_model: Train, predict, and evaluate classification models
View source: R/splendid_model.R
splendid_model(
  data,
  class,
  algorithms = NULL,
  n = 1,
  seed_boot = NULL,
  seed_samp = NULL,
  seed_alg = NULL,
  convert = FALSE,
  rfe = FALSE,
  ova = FALSE,
  standardize = FALSE,
  sampling = c("none", "up", "down", "smote"),
  stratify = FALSE,
  plus = TRUE,
  threshold = 0,
  trees = 100,
  tune = FALSE,
  vi = FALSE
)
data: data frame with rows as samples, columns as features
class: true/reference class vector used for supervised learning
algorithms: character vector of algorithms to use for supervised learning. See the Algorithms section for possible options. By default, this argument is NULL, in which case all supported algorithms are run.
n: number of bootstrap replicates to generate
seed_boot: random seed used for reproducibility in bootstrapping training sets for model generation
seed_samp: random seed used for reproducibility in subsampling training sets for model generation
seed_alg: random seed used for reproducibility when running algorithms with an intrinsic random element (random forests)
convert: logical; if TRUE, categorical variables in data are converted to dummy variables before model fitting
rfe: logical; if TRUE, Recursive Feature Elimination is performed as a feature selection step before model fitting
ova: logical; if TRUE, a One-Vs-All approach is used for classification
standardize: logical; if TRUE, features are standardized (centered and scaled) before model fitting
sampling: the default is "none", in which case no subsampling is performed. Other options are "up" (up-sample the minority class), "down" (down-sample the majority class), and "smote" (generate synthetic points for the minority class and down-sample the majority class). Subsampling is applied to the training set only; see the sketch following this argument list for an example call.
stratify: logical; if TRUE, bootstrap resampling is stratified by class
plus: logical; if TRUE, the .632+ bootstrap estimator is used when evaluating prediction performance; otherwise the .632 estimator is used
threshold: a number between 0 and 1; a sample whose maximum class probability falls below this value is left unclassified
trees: number of trees to use in "rf", or boosting iterations (trees) in "adaboost"
tune: logical; if TRUE, model hyperparameters are tuned before the final fit
vi: logical; if TRUE, variable importance is computed
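For orientation, here is a minimal sketch of a call that combines several of the arguments above. The subsampling, stratification, seed, and threshold values are illustrative choices, not recommendations; the hgsc data and its "class.true" attribute come from the package example shown further below.

data(hgsc)
class <- attr(hgsc, "class.true")
# Sketch only: one algorithm, two bootstrap replicates, SMOTE subsampling,
# stratified resampling, fixed seeds, and a 0.5 classification threshold.
result <- splendid_model(
  data = hgsc,
  class = class,
  algorithms = "rf",
  n = 2,
  seed_boot = 1, seed_samp = 2, seed_alg = 3,
  sampling = "smote",
  stratify = TRUE,
  threshold = 0.5
)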
The classification algorithms currently supported are:
Prediction Analysis for Microarrays ("pam")
Support Vector Machines ("svm")
Random Forests ("rf")
Linear Discriminant Analysis ("lda")
Shrinkage Linear Discriminant Analysis ("slda")
Shrinkage Diagonal Discriminant Analysis ("sdda")
Multinomial Logistic Regression using:
  Generalized Linear Model with no penalization ("mlr_glm")
  GLM with LASSO penalty ("mlr_lasso")
  GLM with ridge penalty ("mlr_ridge")
  GLM with elastic net penalty ("mlr_enet")
  Neural Networks ("mlr_nnet")
Naive Bayes ("nbayes")
Adaptive Boosting ("adaboost")
AdaBoost.M1 ("adaboost_m1")
Extreme Gradient Boosting ("xgboost")
K-Nearest Neighbours ("knn")
data(hgsc)
class <- attr(hgsc, "class.true")
sl_result <- splendid_model(hgsc, class, n = 1, algorithms = "xgboost")
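Building on the example above, a slightly larger sketch (argument values are illustrative) runs several algorithms at once with up-sampling and then inspects the top level of the returned object.

# Any subset of the names listed in the Algorithms section can be supplied.
sl_multi <- splendid_model(
  hgsc, class,
  n = 2,
  algorithms = c("rf", "svm", "mlr_lasso"),
  sampling = "up"
)
str(sl_multi, max.level = 1)  # overview of the returned components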