sits_tempcnn | R Documentation
Use a TempCNN algorithm to classify data. The model has two stages: a 1D CNN and a multi-layer perceptron. Users can define the depth of the 1D convolutional network as well as the number of perceptron layers.
This function is based on the paper by Charlotte Pelletier referenced below. If you use this method, please cite the original tempCNN paper.
The torch version is based on the code made available by the BreizhCrops team: Marc Russwurm, Charlotte Pelletier, Marco Korner, Maximilian Zollner. The original Python code is available at https://github.com/dl4sits/BreizhCrops and is licensed under GPL-3.
sits_tempcnn(
samples = NULL,
samples_validation = NULL,
cnn_layers = c(64, 64, 64),
cnn_kernels = c(5, 5, 5),
cnn_dropout_rates = c(0.2, 0.2, 0.2),
dense_layer_nodes = 256,
dense_layer_dropout_rate = 0.5,
epochs = 150,
batch_size = 64,
validation_split = 0.2,
optimizer = torch::optim_adamw,
opt_hparams = list(lr = 5e-04, eps = 1e-08, weight_decay = 1e-06),
lr_decay_epochs = 1,
lr_decay_rate = 0.95,
patience = 20,
min_delta = 0.01,
verbose = FALSE
)
samples
    Time series with the training samples.
samples_validation
    Time series with the validation samples. If provided, these samples are used for validation instead of splitting the training set with validation_split.
cnn_layers
    Number of 1D convolutional filters per layer.
cnn_kernels
    Size of the 1D convolutional kernels.
cnn_dropout_rates
    Dropout rates for the 1D convolutional filters.
dense_layer_nodes
    Number of nodes in the dense layer.
dense_layer_dropout_rate
    Dropout rate (0, 1) for the dense layer.
epochs
    Number of iterations to train the model.
batch_size
    Number of samples per gradient update.
validation_split
    Fraction of training data to be used for validation.
optimizer
    Optimizer function to be used.
opt_hparams
    Hyperparameters for the optimizer: lr (learning rate), eps (term added to the denominator to improve numerical stability), and weight_decay (L2 regularization).
lr_decay_epochs
    Number of epochs between learning rate reductions.
lr_decay_rate
    Decay factor applied when reducing the learning rate.
patience
    Number of epochs without improvement before training stops.
min_delta
    Minimum improvement in the loss function needed to reset the patience counter.
verbose
    Verbosity mode (TRUE/FALSE). Default is FALSE.
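The lr_decay_epochs and lr_decay_rate arguments together define a stepwise exponential learning rate schedule: every lr_decay_epochs epochs, the current learning rate is multiplied by lr_decay_rate. A minimal sketch of the resulting schedule under the default values (the decayed_lr helper is illustrative, not part of sits):

```r
# Illustrative helper (not part of sits): learning rate in effect at a
# given epoch, assuming the rate is multiplied by `decay_rate`
# every `decay_epochs` epochs.
decayed_lr <- function(epoch, lr = 5e-04, decay_epochs = 1, decay_rate = 0.95) {
  lr * decay_rate^floor(epoch / decay_epochs)
}

decayed_lr(0)   # initial learning rate, 5e-04
decayed_lr(20)  # about 1.8e-04 after 20 epochs
```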
A fitted model to be used for classification.
Please refer to the sits documentation available in <https://e-sensing.github.io/sitsbook/> for detailed examples.
Charlotte Pelletier, charlotte.pelletier@univ-ubs.fr
Gilberto Camara, gilberto.camara@inpe.br
Rolf Simoes, rolf.simoes@inpe.br
Felipe Souza, lipecaso@gmail.com
Charlotte Pelletier, Geoffrey Webb and François Petitjean, "Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series", Remote Sensing, 11(5), 523, 2019. DOI: 10.3390/rs11050523.
if (sits_run_examples()) {
    # create a TempCNN model
    torch_model <- sits_train(
        samples_modis_ndvi,
        sits_tempcnn(epochs = 20, verbose = TRUE)
    )
    # plot the model
    plot(torch_model)
    # create a data cube from local files
    data_dir <- system.file("extdata/raster/mod13q1", package = "sits")
    cube <- sits_cube(
        source = "BDC",
        collection = "MOD13Q1-6.1",
        data_dir = data_dir
    )
    # classify a data cube
    probs_cube <- sits_classify(
        data = cube, ml_model = torch_model, output_dir = tempdir()
    )
    # plot the probability cube
    plot(probs_cube)
    # smooth the probability cube using Bayesian statistics
    bayes_cube <- sits_smooth(probs_cube, output_dir = tempdir())
    # plot the smoothed cube
    plot(bayes_cube)
    # label the probability cube
    label_cube <- sits_label_classification(
        bayes_cube,
        output_dir = tempdir()
    )
    # plot the labelled cube
    plot(label_cube)
}