WordpieceTokenizer: Construct objects of WordpieceTokenizer class.

View source: R/tokenization.R


Construct objects of WordpieceTokenizer class.

Description

(I'm not sure that this object-based approach is best for an R implementation, but for now it just reproduces the Python functionality.)

Usage

WordpieceTokenizer(vocab, unk_token = "[UNK]", max_input_chars_per_word = 200)

Arguments

vocab

Recognized vocabulary tokens, as a named integer vector. (Name is token, value is index.)

unk_token

Token to use for unknown words.

max_input_chars_per_word

Maximum length (in characters) of a word to tokenize; words longer than this are mapped to unk_token.

Details

Has method: tokenize.WordpieceTokenizer()

Value

An object of class WordpieceTokenizer.

Examples

## Not run: 
vocab <- load_vocab(vocab_file = "vocab.txt")
wp_tokenizer <- WordpieceTokenizer(vocab)

## End(Not run)
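To illustrate what the tokenizer does with its vocab, here is a minimal sketch of the greedy longest-match-first wordpiece algorithm that this class implements, using a toy named integer vocab of the form described above. The function wordpiece_sketch and the tiny vocab are hypothetical illustrations, not part of RBERT's exported API.

```r
# Sketch of greedy longest-match-first wordpiece tokenization for one word.
# Assumes vocab is a named integer vector (name = token, value = index).
wordpiece_sketch <- function(word, vocab,
                             unk_token = "[UNK]",
                             max_input_chars_per_word = 200) {
  if (nchar(word) > max_input_chars_per_word) {
    return(unk_token)  # over-long words map to the unknown token
  }
  pieces <- character(0)
  start <- 1
  n <- nchar(word)
  while (start <= n) {
    end <- n
    cur <- NULL
    # Shrink the candidate substring from the right until it is in the vocab.
    while (start <= end) {
      piece <- substr(word, start, end)
      if (start > 1) piece <- paste0("##", piece)  # continuation marker
      if (piece %in% names(vocab)) {
        cur <- piece
        break
      }
      end <- end - 1
    }
    if (is.null(cur)) {
      return(unk_token)  # no piece matches; the whole word is unknown
    }
    pieces <- c(pieces, cur)
    start <- end + 1
  }
  pieces
}

# Toy vocab, purely for illustration:
toy_vocab <- c("un" = 0L, "##aff" = 1L, "##able" = 2L)
wordpiece_sketch("unaffable", toy_vocab)
# → "un" "##aff" "##able"
```

The actual tokenizer stores vocab, unk_token, and max_input_chars_per_word in the object at construction time, so tokenize.WordpieceTokenizer() only needs the text to split.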

jonathanbratt/RBERT documentation built on Jan. 26, 2023, 4:15 p.m.