View source: R/luz_callbacks.R
luz_callback_bert_tokenize (R Documentation)
Data used in pretrained BERT models must be tokenized in the way the model
expects. This luz_callback checks that the incoming data is tokenized
properly, and triggers tokenization if necessary. This function should be
passed to luz::fit.luz_module_generator() or
luz::predict.luz_module_fitted() via the callbacks argument, not called
directly.
luz_callback_bert_tokenize(
  submodel_name = NULL,
  n_tokens = NULL,
  verbose = TRUE
)
Arguments:

submodel_name: An optional character scalar identifying a model inside the main […]

n_tokens: An optional integer scalar indicating the number of tokens to which the data should be tokenized. If present it must be equal to or less than the […]

verbose: A logical scalar indicating whether the callback should report its progress (default TRUE).
if (rlang::is_installed("luz")) {
  # Default callback: tokenize incoming data as needed.
  luz_callback_bert_tokenize()
  # Tokenize to a fixed length of 32 tokens.
  luz_callback_bert_tokenize(n_tokens = 32L)
}
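As noted above, the callback is not called directly but passed via the `callbacks` argument. A minimal sketch of that pattern, assuming a hypothetical BERT-based luz module `bert_module` and a hypothetical dataloader `train_dl` (neither is defined in this documentation):

```r
library(luz)

# `bert_module` and `train_dl` are placeholder names for illustration only.
fitted <- bert_module %>%
  luz::fit(
    train_dl,
    epochs = 1,
    # The callback checks whether incoming data is tokenized as the
    # pretrained model expects, and triggers tokenization if necessary.
    callbacks = list(
      luz_callback_bert_tokenize(n_tokens = 32L, verbose = TRUE)
    )
  )
```

The same callback can be supplied to `luz::predict.luz_module_fitted()` through its `callbacks` argument so that prediction data is tokenized consistently with training.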