View source: R/sits_validate.R
sits_validate | R Documentation
One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set).
The function takes a set of time series samples, a machine learning method, and an optional set of validation samples. If the validation set is not provided, the sample dataset is split into two parts, as defined by the parameter validation_split, and accuracy is assessed on the held-out validation set.
The function returns the confusion matrix and the Kappa values.
sits_validate(
samples,
samples_validation = NULL,
validation_split = 0.2,
ml_method = sits_rfor()
)
samples
    Time series to be validated (class "sits").
samples_validation
    Optional: time series used for validation (class "sits").
validation_split
    Fraction (numeric value) of the original time series set to be used for validation if samples_validation is NULL.
ml_method
    Machine learning method (function).
Value: a caret::confusionMatrix object to be used for validation assessment.
Rolf Simoes, rolf.simoes@inpe.br
Gilberto Camara, gilberto.camara@inpe.br
if (sits_run_examples()) {
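# draw two 50% samples from cerrado_2classes: one for training, one for validation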
samples <- sits_sample(cerrado_2classes, frac = 0.5)
samples_validation <- sits_sample(cerrado_2classes, frac = 0.5)
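# validate using an explicit validation set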
conf_matrix_1 <- sits_validate(
samples = samples,
samples_validation = samples_validation,
ml_method = sits_rfor()
)
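# validate by splitting the samples internally (80% training, 20% validation)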
conf_matrix_2 <- sits_validate(
samples = cerrado_2classes,
validation_split = 0.2,
ml_method = sits_rfor()
)
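# Sketch: inspect the returned caret::confusionMatrix object
# using standard caret accessors
conf_matrix_2$table                # predicted vs. reference cross-tabulation
conf_matrix_2$overall["Accuracy"]  # overall accuracy
conf_matrix_2$overall["Kappa"]     # Kappa statistic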
}