BertModel: R Documentation
An object of class BertModel has several elements:

- A float Tensor of shape [batch_size, seq_length, hidden_size]
  corresponding to the output of the embedding layer, after summing the
  word embeddings with the positional embeddings and the token type
  embeddings, then performing layer normalization. This is the input to
  the transformer.
- The embedding table for the tokens.
- A list of float Tensors of shape [batch_size, seq_length, hidden_size],
  corresponding to all the hidden transformer layers.
- A float Tensor of shape [batch_size, seq_length, hidden_size]
  corresponding to the final hidden layer of the transformer encoder.
- The output of a dense layer on top of the final hidden state of the
  first token.
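The embedding combination described above (summing the three embedding types, then normalizing) can be sketched numerically in base R. The vectors and hidden size below are made-up illustrative values, not taken from the package:

```r
# Illustrative only: combine word, positional, and token-type embeddings
# for a single token position, then apply layer normalization over the
# hidden dimension. All values here are invented for the sketch.
hidden_size <- 4L
word_emb <- c(0.10, 0.20, 0.30, 0.40)
pos_emb  <- c(0.01, 0.02, 0.03, 0.04)
type_emb <- c(0.50, 0.50, 0.50, 0.50)

x <- word_emb + pos_emb + type_emb  # the summed embedding

# Layer normalization: zero mean and unit variance across hidden_size
# (the small epsilon guards against division by zero).
mu  <- mean(x)
sig <- sqrt(mean((x - mu)^2) + 1e-12)
ln  <- (x - mu) / sig

mean(ln)    # approximately 0
mean(ln^2)  # approximately 1
```

The normalized vector has (near-)zero mean and unit variance per position, which is what the embedding layer feeds into the transformer.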
BertModel(
  config,
  is_training,
  input_ids,
  input_mask = NULL,
  token_type_ids = NULL,
  scope = NULL
)
config
  An object of class BertConfig; the configuration for the model.

is_training
  Logical; TRUE for a training model, FALSE for an eval model. Controls
  whether dropout will be applied.

input_ids
  Int32 Tensor of shape [batch_size, seq_length] containing the token ids
  of the input sequences.

input_mask
  (optional) Int32 Tensor of shape [batch_size, seq_length], with 1 at
  positions holding real tokens and 0 at padding positions.

token_type_ids
  (optional) Int32 Tensor of shape [batch_size, seq_length], giving the
  token type (segment) id for each position.

scope
  (optional) Character; name for variable scope. Defaults to "bert".
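The relationship between input_ids and input_mask can be illustrated with the padded batch from the example below. Assuming, as in that example, that id 0 marks a padding position, the mask is simply 1 wherever a real token sits and 0 elsewhere. This is a plain base R sketch, not package code:

```r
# Two sequences of length 3; the second is padded with a trailing 0
# (treating id 0 as padding is an assumption carried over from the
# example batch, not a rule enforced by the function).
input_ids <- rbind(
  c(31L, 51L, 99L),
  c(15L,  5L,  0L)
)

# Mask: 1 for real tokens, 0 for padding.
input_mask <- ifelse(input_ids == 0L, 0L, 1L)

input_mask
```

The resulting matrix matches the input_mask constant built by hand in the example.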
An object of class BertModel.
## Not run:
with(tensorflow::tf$variable_scope("examples",
  reuse = tensorflow::tf$AUTO_REUSE
), {
  input_ids <- tensorflow::tf$constant(list(
    list(31L, 51L, 99L),
    list(15L, 5L, 0L)
  ))
  input_mask <- tensorflow::tf$constant(list(
    list(1L, 1L, 1L),
    list(1L, 1L, 0L)
  ))
  token_type_ids <- tensorflow::tf$constant(list(
    list(0L, 0L, 1L),
    list(0L, 2L, 0L)
  ))
  config <- BertConfig(
    vocab_size = 32000L,
    hidden_size = 768L,
    num_hidden_layers = 8L,
    num_attention_heads = 12L,
    intermediate_size = 1024L
  )
  model <- BertModel(
    config = config,
    is_training = TRUE,
    input_ids = input_ids,
    input_mask = input_mask,
    token_type_ids = token_type_ids
  )
})
## End(Not run)