AdamWeightDecayOptimizer | Constructor for objects of class AdamWeightDecayOptimizer
apply_to_chars | Apply a function to each character in a string. |
assert_rank | Confirm the rank of a tensor |
attention_layer | Build multi-headed attention layer |
BasicTokenizer | Construct objects of BasicTokenizer class. |
BertConfig | Construct objects of BertConfig class |
bert_config_from_json_file | Load BERT config object from json file |
BertModel | Construct object of class BertModel |
check_vocab | Check Vocabulary |
clean_text | Perform invalid character removal and whitespace cleanup on a piece of text.
convert_by_vocab | Convert a sequence of tokens/ids using the provided vocab. |
convert_examples_to_features | Convert 'InputExample's to 'InputFeatures' |
convert_single_example | Convert a single 'InputExample' into a single 'InputFeatures' |
convert_to_unicode | Convert 'text' to Unicode |
create_attention_mask_from_input_mask | Create 3D attention mask from a 2D tensor mask |
create_initializer | Create truncated normal initializer |
create_model | Create a classification model |
create_optimizer | Create an optimizer training op |
dot-choose_BERT_dir | Choose a directory for BERT checkpoints |
dot-convert_examples_to_features_EF | Convert 'InputExample_EF's to 'InputFeatures_EF' |
dot-convert_single_example_EF | Convert a single 'InputExample_EF' into a single 'InputFeatures_EF'
dot-download_BERT_checkpoint | Download a checkpoint zip file |
dot-get_actual_index | Standardize Indices |
dot-get_model_archive_path | Locate an archive file for a BERT checkpoint |
dot-get_model_archive_type | Get archive type of a BERT checkpoint |
dot-get_model_subdir | Locate a subdir for a BERT checkpoint |
dot-get_model_url | Get url of a BERT checkpoint |
dot-has_checkpoint | Check whether the user already has a checkpoint |
dot-infer_archive_type | Infer the archive type for a BERT checkpoint |
dot-infer_checkpoint_archive_path | Infer the path to the archive for a BERT checkpoint |
dot-infer_ckpt_dir | Infer the subdir for a BERT checkpoint |
dot-infer_model_paths | Find Paths to Checkpoint Files |
dot-InputFeatures_EF | Construct objects of class 'InputFeatures_EF'
dot-maybe_download_checkpoint | Find or Possibly Download a Checkpoint |
dot-model_fn_builder_EF | Define 'model_fn' closure for 'TPUEstimator' |
dot-process_BERT_checkpoint | Unzip and check a BERT checkpoint zip |
download_BERT_checkpoint | Download a BERT checkpoint |
dropout | Perform Dropout |
embedding_lookup | Look up word embeddings for an id tensor
embedding_postprocessor | Perform various post-processing on a word embedding tensor |
extract_features | Extract output features from BERT |
file_based_convert_examples_to_features | Convert a set of 'InputExample's to a TFRecord file. |
file_based_input_fn_builder | Create an 'input_fn' closure from a TFRecord file to be passed to TPUEstimator
find_files | Find Checkpoint Files |
FullTokenizer | Construct objects of FullTokenizer class. |
gelu | Gaussian Error Linear Unit |
get_activation | Map a string to a Python function |
get_assignment_map_from_checkpoint | Compute the intersection of the current variables and... |
get_shape_list | Return the shape of a tensor as a list
InputExample | Construct objects of class 'InputExample' |
InputExample_EF | Construct objects of class 'InputExample_EF' |
InputFeatures | Construct objects of class 'InputFeatures' |
input_fn_builder | Create an 'input_fn' closure to be passed to TPUEstimator |
input_fn_builder_EF | Create an 'input_fn' closure to be passed to TPUEstimator |
is_chinese_char | Check whether cp is the codepoint of a CJK character. |
is_control | Check whether 'char' is a control character. |
is_punctuation | Check whether 'char' is a punctuation character. |
is_whitespace | Check whether 'char' is a whitespace character. |
layer_norm | Run layer normalization |
layer_norm_and_dropout | Run layer normalization followed by dropout |
load_vocab | Load a vocabulary file |
make_examples_simple | Easily make examples for BERT |
model_fn_builder | Define 'model_fn' closure for 'TPUEstimator' |
reshape_from_matrix | Turn a matrix into a tensor |
reshape_to_matrix | Turn a tensor into a matrix |
set_BERT_dir | Set the directory for BERT checkpoints |
split_on_punc | Split text on punctuation. |
strip_accents | Strip accents from a piece of text. |
tokenize | Tokenizers for various objects. |
tokenize_chinese_chars | Add whitespace around any CJK character. |
tokenize_text | Tokenize Text with Word Pieces |
tokenize_word | Tokenize a single "word" (no whitespace). |
transformer_model | Build multi-head, multi-layer Transformer |
transpose_for_scores | Reshape and transpose tensor |
truncate_seq_pair | Truncate a sequence pair to the maximum length. |
whitespace_tokenize | Run basic whitespace cleaning and splitting on a piece of text.
WordpieceTokenizer | Construct objects of WordpieceTokenizer class. |
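Taken together, the helpers above form a feature-extraction pipeline: fetch a checkpoint, wrap raw text as examples, and pull contextual embeddings out of the model. The sketch below uses only functions listed in this index (`download_BERT_checkpoint`, `make_examples_simple`, `extract_features`); the package name `RBERT`, the model identifier, and the parameter names `ckpt_dir` and `layer_indexes` are assumptions for illustration, not verified signatures, and running it requires downloading a pretrained BERT checkpoint.

```r
library(RBERT)  # assumed package name for this reference index

# Download (or locate) a pretrained BERT checkpoint.
# The model identifier here is an assumption.
ckpt_dir <- download_BERT_checkpoint(model = "bert_base_uncased")

# Wrap raw strings as BERT input examples.
examples <- make_examples_simple(c(
  "The quick brown fox jumped over the lazy dog.",
  "BERT produces contextual word embeddings."
))

# Extract output features (contextual embeddings) from the model.
# Parameter names below are assumptions about the interface.
feats <- extract_features(
  examples = examples,
  ckpt_dir = ckpt_dir,
  layer_indexes = 1:12
)
```

The same examples object could instead be built manually via `InputExample` constructors and converted with `convert_examples_to_features` when finer control over segment ids and labels is needed.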