runCrossValidation: Run Cross-Validation for a List of Algorithms with Peak...


View source: R/runCrossValidation.R

Description

Wrapper function for running cross-validation on up to 8 classification algorithms using one or more of the three available metrics sets.

Usage

runCrossValidation(
  trainData,
  k,
  repNum,
  rand.seed = NULL,
  models = "all",
  metricSet = "M11"
)

Arguments

trainData

data frame. Rows should correspond to peaks; columns should include peak quality metrics and class labels only.

k

integer. Number of folds to be used in cross-validation.

repNum

integer. Number of cross-validation rounds to perform.

rand.seed

integer. State in which to set the random number generator.

models

character string or vector. Specifies the classification algorithms to be trained from the eight available: DecisionTree, LogisticRegression, NaiveBayes, RandomForest, SVM_Linear, AdaBoost, NeuralNetwork, and ModelAveragedNeuralNetwork. "all" specifies the use of all models. Default is "all".

metricSet

character string or vector. The metric set(s) to be run with the selected model(s). Select from the following: M4, M7, and M11. Use c() to select multiple metric sets. "all" specifies the use of all metric sets. Default is "M11".

Value

a list of up to 8 trained models

Examples

# train classification algorithms
models <- runCrossValidation(trainData=pqMetrics_development, k=5, repNum=10,
 rand.seed = 453, models="DecisionTree")
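
# A hypothetical variation on the example above: train two of the eight
# available algorithms and evaluate each with two metric sets, passed via
# c() as described under the metricSet argument. The pqMetrics_development
# data object is assumed to be available, as in the example above.
models <- runCrossValidation(trainData = pqMetrics_development, k = 5,
                             repNum = 10, rand.seed = 453,
                             models = c("DecisionTree", "RandomForest"),
                             metricSet = c("M4", "M11"))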

MetaClean documentation built on Jan. 13, 2021, 6:30 p.m.