HF_CausalLMBeforeBatchTransform

Description Usage Arguments Details Value

View source: R/blurr_hugging_face.R

Description

Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced as a byproduct of the tokenization process in the 'encodes' method.

Usage

HF_CausalLMBeforeBatchTransform(
  hf_arch,
  hf_tokenizer,
  max_length = NULL,
  padding = TRUE,
  truncation = TRUE,
  is_split_into_words = FALSE,
  n_tok_inps = 1,
  ignore_token_id = -100,
  ...
)
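A minimal sketch of constructing the transform. The `arch` and `tokenizer` objects are placeholders standing in for a Hugging Face architecture name and tokenizer obtained beforehand (for example via the package's Hugging Face helper functions); they are not part of this function's API.

```r
library(fastai)

# `arch` and `tokenizer` are assumed to exist already (hypothetical
# placeholders for a Hugging Face architecture string and tokenizer object).
before_batch_tfm <- HF_CausalLMBeforeBatchTransform(
  hf_arch = arch,
  hf_tokenizer = tokenizer,
  max_length = 128,       # cap tokenized sequences at 128 tokens
  padding = TRUE,         # pad shorter sequences within the batch
  truncation = TRUE,      # truncate sequences longer than max_length
  ignore_token_id = -100  # target positions with this id are ignored by the loss
)
```

The transform would then typically be passed into the data-block pipeline that assembles mini-batches for a causal language model.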

Arguments

hf_arch

The Hugging Face architecture name of the model

hf_tokenizer

The Hugging Face tokenizer to use

max_length

The maximum length of the tokenized sequences; NULL defers to the tokenizer's default

padding

Whether to pad sequences in the batch to the same length (default: TRUE)

truncation

Whether to truncate sequences that exceed the maximum length (default: TRUE)

is_split_into_words

Whether the inputs are already split into words (default: FALSE)

n_tok_inps

The number of tokenizer inputs (default: 1)

ignore_token_id

The token id used to mark target positions that the loss function should ignore (default: -100)

...

Additional arguments

Details

The dictionary decoded by this transform is produced as a byproduct of the tokenization process in the 'encodes' method.

Value

None


fastai documentation built on July 28, 2021, 5:06 p.m.