View source: R/sits_lighttae.R
sits_lighttae: R Documentation
Implementation of Light Temporal Attention Encoder (L-TAE) for satellite image time series
This function is based on the paper by Vivien Garnot referenced below and on code available on GitHub at https://github.com/VSainteuf/lightweight-temporal-attention-pytorch. If you use this method, please cite the original TAE and the L-TAE papers.
We also used the code made available by Maja Schneider in her work with Marco Körner, referenced below and available at https://github.com/maja601/RC2020-psetae.
sits_lighttae(
samples = NULL,
samples_validation = NULL,
epochs = 150,
batch_size = 128,
validation_split = 0.2,
optimizer = torch::optim_adamw,
opt_hparams = list(lr = 5e-04, eps = 1e-08, weight_decay = 7e-04),
lr_decay_epochs = 50L,
lr_decay_rate = 1,
patience = 20L,
min_delta = 0.01,
verbose = FALSE
)
samples: Time series with the training samples (tibble of class "sits").

samples_validation: Time series with the validation samples (tibble of class "sits"). If not provided, a fraction of the training samples (given by validation_split) is used for validation.

epochs: Number of iterations to train the model (integer, min = 1L, max = 20000L).

batch_size: Number of samples per gradient update (integer, min = 16L, max = 2048L).

validation_split: Fraction of training data to be used as validation data.

optimizer: Optimizer function to be used.

opt_hparams: Hyperparameters for the optimizer: lr (learning rate, default 5e-04), eps (numerical stability term, default 1e-08), and weight_decay (default 7e-04).

lr_decay_epochs: Number of epochs after which the learning rate is reduced.

lr_decay_rate: Decay factor applied when reducing the learning rate.

patience: Number of epochs without improvement before training stops.

min_delta: Minimum improvement in the loss function required to reset the patience counter.

verbose: Verbosity mode (TRUE/FALSE). Default is FALSE.
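To illustrate how these parameters fit together, a call with non-default settings might look like the sketch below. The sample dataset name is taken from the examples further down; the specific hyperparameter values are only illustrative, not recommendations.

```r
# Illustrative sketch: train an L-TAE model with custom optimizer settings,
# a step learning-rate decay, and early stopping
ltae_model <- sits_train(
    samples_modis_ndvi,
    sits_lighttae(
        epochs = 100,                 # upper bound on training iterations
        batch_size = 64,              # samples per gradient update
        opt_hparams = list(
            lr = 1e-03,               # initial learning rate
            eps = 1e-08,              # numerical stability term
            weight_decay = 1e-04      # weight decay regularization
        ),
        lr_decay_epochs = 30L,        # halve the learning rate every 30 epochs
        lr_decay_rate = 0.5,
        patience = 10L,               # stop after 10 epochs without improvement
        min_delta = 0.01,
        verbose = TRUE                # print training progress
    )
)
```

Note that with the default lr_decay_rate = 1 the learning rate is effectively constant; a value below 1, as sketched here, enables step decay.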
A fitted model to be used for classification of data cubes.
Gilberto Camara, gilberto.camara@inpe.br
Rolf Simoes, rolf.simoes@inpe.br
Charlotte Pelletier, charlotte.pelletier@univ-ubs.fr
Vivien Garnot, Loic Landrieu, Sebastien Giordano, and Nesrine Chehata, "Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention", 2020 Conference on Computer Vision and Pattern Recognition. pages 12322-12331. DOI: 10.1109/CVPR42600.2020.01234
Vivien Garnot, Loic Landrieu, "Lightweight Temporal Self-Attention for Classifying Satellite Images Time Series", arXiv preprint arXiv:2007.00586, 2020.
Schneider, Maja; Körner, Marco, "[Re] Satellite Image Time Series Classification with Pixel-Set Encoders and Temporal Self-Attention." ReScience C 7 (2), 2021. DOI: 10.5281/zenodo.4835356
if (sits_run_examples()) {
# create a lightTAE model
torch_model <- sits_train(samples_modis_ndvi, sits_lighttae())
# plot the model
plot(torch_model)
# create a data cube from local files
data_dir <- system.file("extdata/raster/mod13q1", package = "sits")
cube <- sits_cube(
source = "BDC",
collection = "MOD13Q1-6.1",
data_dir = data_dir
)
# classify a data cube
probs_cube <- sits_classify(
data = cube, ml_model = torch_model, output_dir = tempdir()
)
# plot the probability cube
plot(probs_cube)
# smooth the probability cube using Bayesian statistics
bayes_cube <- sits_smooth(probs_cube, output_dir = tempdir())
# plot the smoothed cube
plot(bayes_cube)
# label the probability cube
label_cube <- sits_label_classification(
bayes_cube,
output_dir = tempdir()
)
# plot the labelled cube
plot(label_cube)
}
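In addition to data cubes, sits_classify() also accepts a tibble of class "sits", which classifies individual time series rather than raster files. A minimal sketch, assuming the torch_model fitted in the example above:

```r
if (sits_run_examples()) {
    # classify individual time series with the fitted L-TAE model
    points_class <- sits_classify(
        data = samples_modis_ndvi,
        ml_model = torch_model
    )
    # inspect the predicted labels
    plot(points_class)
}
```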