* `LearnerTorchModel` can now be parallelized and trained with encapsulation activated.
* `jit_trace` now works in combination with batch normalization.
* Compatibility with `R6` version 2.6.0.
* The `LearnerTorch$.dataloader()` method no longer operates on the task but on the dataset generated by the private `LearnerTorch$.dataset()` method.
* The `shuffle` parameter during model training is now initialized to `TRUE` to sidestep issues where the data is sorted.
* The `jit_trace` parameter was added to `LearnerTorch`; when set to `TRUE` it can lead to significant speedups. This should only be enabled for 'static' models; see the torch tutorial for more information. A hedged configuration sketch follows this list.
* Added `num_interop_threads` to `LearnerTorch`.
* The `tensor_dataset` parameter was added, which allows stacking all batches at the beginning of training to make subsequent batch loading faster.
* Added a `PipeOp` for adaptive average pooling (see the second sketch after this list).
* The `n_layers` parameter was added to the MLP learner.
* Torch learners can now be used with `AutoTuner`.
* Early stopping now uses `epochs - patience` for the internally tuned values instead of the trained number of epochs, which was used before.
* The `dataset` of a learner must no longer return the tensors on the specified device, which allows for parallel dataloading on GPUs.
* `PipeOpBlock` should no longer create ID clashes with other PipeOps in the graph (#260).
* The `data_formats` field is not used anymore.
* Added `CallbackSetTB`, which allows logging that can be viewed by TensorBoard (see the third sketch after this list).
* Fixed some `PipeOp`s, such as `po("trafo_resize")`, which failed in some cases.
* `LearnerTabResnet` now works correctly.
* Added the `nn()` helper function to simplify the creation of neural network layers (see the second sketch after this list).
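
The following is a minimal, hedged sketch of how the training-related options above could be set on a torch learner. The learner key `classif.mlp` and all concrete values are illustrative assumptions; only the parameter names (`shuffle`, `jit_trace`, `num_interop_threads`, `tensor_dataset`, `n_layers`) come from the entries above, and how `n_layers` interacts with `neurons` is not confirmed here.

```r
library(mlr3)
library(mlr3torch)

# Hedged sketch: learner key and values are illustrative, not prescriptive.
learner = lrn("classif.mlp",
  epochs              = 10,
  batch_size          = 32,
  neurons             = 64,    # hidden-layer width (assumed to combine with n_layers)
  n_layers            = 2,     # new parameter: number of hidden layers of the MLP
  shuffle             = TRUE,  # now the initialized default during training
  jit_trace           = TRUE,  # only sensible for 'static' models; can speed up training
  num_interop_threads = 1,     # new threading option on LearnerTorch
  tensor_dataset      = TRUE   # stack all batches once to speed up later loading
)

learner$train(tsk("iris"))
```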
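
A second sketch covers the adaptive average pooling `PipeOp` and the `nn()` helper. The dictionary keys (`nn_adaptive_avg_pool2d`, `nn_flatten`, `nn_linear`, `nn_relu`, `nn_head`), the `output_size` argument, and the exact `nn()` signature are assumptions; the changelog only states that a pooling `PipeOp` and a layer-creation helper were added.

```r
library(mlr3pipelines)
library(mlr3torch)

# Assumption: the new pooling operator is registered under a key like
# "nn_adaptive_avg_pool2d" and nn("linear", ...) is shorthand for po("nn_linear", ...).
pool = po("nn_adaptive_avg_pool2d", output_size = c(1, 1))

# The same kind of layer created via the nn() helper, chained into a small head:
head = nn("adaptive_avg_pool2d", output_size = c(1, 1)) %>>%
  nn("flatten") %>>%
  nn("linear", out_features = 16) %>>%
  nn("relu") %>>%
  nn("head")
```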
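
A third sketch shows TensorBoard logging via `CallbackSetTB`. The callback key `"tb"` and the `path` argument are assumptions; consult the callback dictionary (`t_clbk()`) for the actual interface.

```r
library(mlr3)
library(mlr3torch)

# Assumption: the TensorBoard callback is retrievable with t_clbk("tb") and
# accepts a log-directory argument named "path".
learner = lrn("classif.mlp",
  epochs     = 5,
  batch_size = 32,
  callbacks  = t_clbk("tb", path = "logs/mlp_run")
)
learner$train(tsk("iris"))

# Inspect the logged values with TensorBoard, e.g.:
#   tensorboard --logdir logs/mlp_run
```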