Man pages for jonathanbratt/RBERT
R Implementation of BERT

AdamWeightDecayOptimizer: Constructor for objects of class AdamWeightDecayOptimizer
apply_to_chars: Apply a function to each character in a string.
assert_rank: Confirm the rank of a tensor
attention_layer: Build multi-headed attention layer
BasicTokenizer: Construct objects of BasicTokenizer class.
BertConfig: Construct objects of BertConfig class
bert_config_from_json_file: Load BERT config object from json file
BertModel: Construct object of class BertModel
check_vocab: Check Vocabulary
clean_text: Perform invalid character removal and whitespace cleanup on...
convert_by_vocab: Convert a sequence of tokens/ids using the provided vocab.
convert_examples_to_features: Convert 'InputExample's to 'InputFeatures'
convert_single_example: Convert a single 'InputExample' into a single 'InputFeatures'
convert_to_unicode: Convert 'text' to Unicode
create_attention_mask_from_input_mask: Create 3D attention mask from a 2D tensor mask
create_initializer: Create truncated normal initializer
create_model: Create a classification model
create_optimizer: Create an optimizer training op
dot-choose_BERT_dir: Choose a directory for BERT checkpoints
dot-convert_examples_to_features_EF: Convert 'InputExample_EF's to 'InputFeatures_EF'
dot-convert_single_example_EF: Convert a single 'InputExample_EF' into a single...
dot-download_BERT_checkpoint: Download a checkpoint zip file
dot-get_actual_index: Standardize Indices
dot-get_model_archive_path: Locate an archive file for a BERT checkpoint
dot-get_model_archive_type: Get archive type of a BERT checkpoint
dot-get_model_subdir: Locate a subdir for a BERT checkpoint
dot-get_model_url: Get url of a BERT checkpoint
dot-has_checkpoint: Check whether the user already has a checkpoint
dot-infer_archive_type: Infer the archive type for a BERT checkpoint
dot-infer_checkpoint_archive_path: Infer the path to the archive for a BERT checkpoint
dot-infer_ckpt_dir: Infer the subdir for a BERT checkpoint
dot-infer_model_paths: Find Paths to Checkpoint Files
dot-InputFeatures_EF: Construct objects of class 'InputFeatures_EF'
dot-maybe_download_checkpoint: Find or Possibly Download a Checkpoint
dot-model_fn_builder_EF: Define 'model_fn' closure for 'TPUEstimator'
dot-process_BERT_checkpoint: Unzip and check a BERT checkpoint zip
download_BERT_checkpoint: Download a BERT checkpoint
dropout: Perform Dropout
embedding_lookup: Look up word embeddings for an id tensor
embedding_postprocessor: Perform various post-processing on a word embedding tensor
extract_features: Extract output features from BERT
file_based_convert_examples_to_features: Convert a set of 'InputExample's to a TFRecord file.
find_files: Find Checkpoint Files
FullTokenizer: Construct objects of FullTokenizer class.
gelu: Gaussian Error Linear Unit
get_activation: Map a string to a Python function
get_assignment_map_from_checkpoint: Compute the intersection of the current variables and...
get_shape_list: Return the shape of tensor
InputExample: Construct objects of class 'InputExample'
InputExample_EF: Construct objects of class 'InputExample_EF'
InputFeatures: Construct objects of class 'InputFeatures'
input_fn_builder: Create an 'input_fn' closure to be passed to TPUEstimator
input_fn_builder_EF: Create an 'input_fn' closure to be passed to TPUEstimator
is_chinese_char: Check whether cp is the codepoint of a CJK character.
is_control: Check whether 'char' is a control character.
is_punctuation: Check whether 'char' is a punctuation character.
is_whitespace: Check whether 'char' is a whitespace character.
layer_norm: Run layer normalization
layer_norm_and_dropout: Run layer normalization followed by dropout
load_vocab: Load a vocabulary file
make_examples_simple: Easily make examples for BERT
model_fn_builder: Define 'model_fn' closure for 'TPUEstimator'
reshape_from_matrix: Turn a matrix into a tensor
reshape_to_matrix: Turn a tensor into a matrix
set_BERT_dir: Set the directory for BERT checkpoints
split_on_punc: Split text on punctuation.
strip_accents: Strip accents from a piece of text.
tokenize: Tokenizers for various objects.
tokenize_chinese_chars: Add whitespace around any CJK character.
tokenize_text: Tokenize Text with Word Pieces
tokenize_word: Tokenize a single "word" (no whitespace).
transformer_model: Build multi-head, multi-layer Transformer
transpose_for_scores: Reshape and transpose tensor
truncate_seq_pair: Truncate a sequence pair to the maximum length.
whitespace_tokenize: Run basic whitespace cleaning and splitting on a piece of...
WordpieceTokenizer: Construct objects of WordpieceTokenizer class.
jonathanbratt/RBERT documentation built on Jan. 6, 2020, 7:06 a.m.