View source: R/run_classifier.R
| create_model | R Documentation |
Takes the output from a BERT "spine" and appends a classifier layer to it. The output taken from BERT is the pooled output of the first token (the code could be modified to use token-level outputs instead). The classifier is essentially a single dense layer with softmax.
create_model(
  bert_config,
  is_training,
  input_ids,
  input_mask,
  segment_ids,
  labels,
  num_labels
)
bert_config |
BertConfig instance. |
is_training |
Logical; TRUE for training model, FALSE for eval model. Controls whether dropout will be applied. |
input_ids |
Integer Tensor of shape [batch_size, seq_length]. |
input_mask |
Integer Tensor of shape [batch_size, seq_length]. |
segment_ids |
Integer Tensor of shape [batch_size, seq_length]. |
labels |
Integer Tensor; classification labels for the training examples. Length = batch size. |
num_labels |
Integer; number of classification labels. |
A list including the loss (for training) and the model outputs (softmax probabilities and log probabilities).
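Conceptually, the appended classifier computes logits from the pooled output, converts them to probabilities and log probabilities with softmax, and averages the per-example negative log-likelihood as the training loss. A minimal numeric sketch of that computation (in Python/NumPy purely for illustration; the shapes, weights, and labels below are made up, not taken from the package):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend pooled first-token output from BERT: batch_size = 2, hidden_size = 4.
pooled = rng.normal(size=(2, 4))
num_labels = 2

# Classifier layer: a single dense layer (weights + bias).
W = rng.normal(size=(4, num_labels)) * 0.02
b = np.zeros(num_labels)
logits = pooled @ W + b

# Numerically stable log-softmax over the label dimension.
shifted = logits - logits.max(axis=-1, keepdims=True)
log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
probs = np.exp(log_probs)

# Mean negative log-likelihood of the true labels.
labels = np.array([0, 1])
one_hot = np.eye(num_labels)[labels]
per_example_loss = -np.sum(one_hot * log_probs, axis=-1)
loss = per_example_loss.mean()

print(probs.sum(axis=-1))  # each row of probabilities sums to 1
print(loss)
```

This mirrors the returned list: `loss` for training, plus the softmax probabilities and log probabilities as model outputs.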
## Not run:
with(tensorflow::tf$variable_scope("examples",
  reuse = tensorflow::tf$AUTO_REUSE
), {
  input_ids <- tensorflow::tf$constant(list(
    list(31L, 51L, 99L),
    list(15L, 5L, 0L)
  ))
  input_mask <- tensorflow::tf$constant(list(
    list(1L, 1L, 1L),
    list(1L, 0L, 0L)
  ))
  token_type_ids <- tensorflow::tf$constant(list(
    list(0L, 0L, 1L),
    list(0L, 2L, 0L)
  ))
  config <- BertConfig(
    vocab_size = 32000L,
    hidden_size = 768L,
    num_hidden_layers = 8L,
    num_attention_heads = 12L,
    intermediate_size = 1024L
  )
  class_model <- create_model(
    bert_config = config,
    is_training = TRUE,
    input_ids = input_ids,
    input_mask = input_mask,
    segment_ids = token_type_ids,
    labels = c(1L, 2L),
    num_labels = 2L
  )
})
## End(Not run)