HF_QABatchTransform: HF_QABatchTransform

View source: R/blurr_hugging_face.R


HF_QABatchTransform

Description

Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced as a byproduct of the tokenization process in the 'encodes' method.

Usage

HF_QABatchTransform(
  hf_arch,
  hf_tokenizer,
  max_length = NULL,
  padding = TRUE,
  truncation = TRUE,
  is_split_into_words = FALSE,
  n_tok_inps = 1,
  hf_input_return_type = HF_QuestionAnswerInput(),
  ...
)

Arguments

hf_arch

the Hugging Face model architecture (e.g. 'bert')

hf_tokenizer

the Hugging Face tokenizer used to encode the inputs

max_length

the maximum sequence length; sequences longer than this are truncated when truncation is enabled

padding

whether (or how) to pad sequences to a common length; passed to the tokenizer

truncation

whether to truncate sequences that exceed the maximum length; passed to the tokenizer

is_split_into_words

whether the inputs are already pre-tokenized (split into words)

n_tok_inps

number of tok inputs

hf_input_return_type

the fastai input type to return (defaults to HF_QuestionAnswerInput())

...

further arguments passed on to the underlying transform

Details

The dictionary decoded here is produced as a byproduct of the tokenization process in the 'encodes' method.

Value

None
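
A minimal usage sketch. This is illustrative only: the `arch` and `tokenizer` objects are assumed to have been created beforehand (for example, via the package's Hugging Face helper utilities), and the specific values shown are arbitrary.

```r
library(fastai)

# Assumes `arch` (e.g. "bert") and `tokenizer` (a matching Hugging Face
# tokenizer object) were obtained earlier from the package's helpers.
before_batch_tfm <- HF_QABatchTransform(
  hf_arch = arch,
  hf_tokenizer = tokenizer,
  max_length = 256,
  padding = TRUE,
  truncation = TRUE,
  hf_input_return_type = HF_QuestionAnswerInput()
)

# The resulting transform is then supplied to a data-loading pipeline as
# the batch-level transform for question-answering data.
```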


fastai documentation built on March 21, 2022, 9:07 a.m.