Description
Splits the set of time series into training and validation sets and performs k-fold cross-validation. Cross-validation is a model validation technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction and one wants to estimate how accurately a predictive model will perform in practice. One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (the training set) and validating the analysis on the other subset (the validation or testing set).
The k-fold cross-validation method splits the dataset into k subsets. Each subset is held out in turn while the model is trained on all the other subsets. The process is repeated until an accuracy value has been determined for every instance in the dataset, and an overall accuracy estimate is then computed.
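The fold mechanics described above can be sketched in a few lines of base R. This is an illustration only, independent of the sits package; the variable names (`k`, `n`, `folds`) are hypothetical:

```r
# Minimal sketch of k-fold partitioning: each sample is assigned to exactly
# one of k folds; each fold is then held out once as the validation set
# while the remaining folds form the training set.
k <- 5
n <- 100                                   # number of samples (hypothetical)
folds <- sample(rep(1:k, length.out = n))  # random fold assignment

for (i in 1:k) {
  validation_idx <- which(folds == i)      # held-out subset
  training_idx   <- which(folds != i)      # all other subsets
  # ... train on training_idx, predict on validation_idx ...
}
```

Every sample appears in exactly one validation set, so after the k rounds each instance has been predicted exactly once.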
This function returns the Overall Accuracy, User's Accuracy, Producer's Accuracy, error matrix (confusion matrix), and Kappa values.
Usage

sits_kfold_validate(data.tb, bands = NULL, folds = 5,
                    pt_method = sits_gam(bands = bands),
                    dist_method = sits_TWDTW_distances(bands = bands),
                    tr_method = sits_svm(),
                    multicores = 1)
Arguments

data.tb      a SITS tibble
bands        the bands used for classification
folds        number of partitions to create
pt_method    method to create patterns (e.g., sits_patterns_gam, sits_dendogram)
dist_method  method to compute distances (e.g., sits_TWDTW_distances)
tr_method    machine learning training method
multicores   number of threads used to process the validation (Linux only); each process runs the validation of one whole partition
Value

conf.tb      a tibble containing pairs of reference and predicted values
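The accuracy measures listed above (Overall Accuracy, User's and Producer's Accuracy, Kappa) can all be derived from such reference/predicted pairs. A minimal base-R sketch, independent of sits, with hypothetical labels standing in for the columns of conf.tb:

```r
# Hypothetical reference and predicted class labels
reference <- c("forest", "forest", "pasture", "pasture", "forest", "pasture")
predicted <- c("forest", "pasture", "pasture", "pasture", "forest", "forest")

cm <- table(reference, predicted)          # error (confusion) matrix

overall_acc   <- sum(diag(cm)) / sum(cm)   # fraction classified correctly
producers_acc <- diag(cm) / rowSums(cm)    # per reference class (rows)
users_acc     <- diag(cm) / colSums(cm)    # per predicted class (columns)

# Cohen's kappa: agreement corrected for chance agreement pe
n     <- sum(cm)
pe    <- sum(rowSums(cm) * colSums(cm)) / n^2
kappa <- (overall_acc - pe) / (1 - pe)
```

With these six samples the sketch gives an overall accuracy of 2/3 and a kappa of 1/3; the actual sits function computes the same kind of measures from the full set of cross-validated predictions.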
Author(s)

Rolf Simoes, rolf.simoes@inpe.br
Gilberto Camara, gilberto.camara@inpe.br