View source: R/1_1_textEmbed.R
textEmbedRawLayers (R Documentation)
Extract layers of hidden states (word embeddings) for all character variables in a given dataframe.
textEmbedRawLayers(
texts,
model = "bert-base-uncased",
layers = -2,
return_tokens = TRUE,
word_type_embeddings = FALSE,
decontextualize = FALSE,
keep_token_embeddings = TRUE,
device = "cpu",
tokenizer_parallelism = FALSE,
model_max_length = NULL,
max_token_to_sentence = 4,
logging_level = "error",
sort = TRUE
)
texts
A character variable or a tibble/dataframe with at least one character variable.
model
Character string specifying the pre-trained language model (default "bert-base-uncased"). For the full list of options, see the pretrained models at HuggingFace; for example, "bert-base-multilingual-cased", "openai-gpt", "gpt2", "ctrl", "transfo-xl-wt103", "xlnet-base-cased", "xlm-mlm-enfr-1024", "distilbert-base-cased", "roberta-base", or "xlm-roberta-base". Only load models that you trust from HuggingFace; loading a malicious model can execute arbitrary code on your computer.
layers
(string or numeric) The layers to extract (default -2, which gives the second-to-last layer). It is more efficient to extract only the layers you need (e.g., 11). You can also extract several layers (e.g., 11:12; see the sketch after this argument list) or all of them by setting this parameter to "all". Layer 0 is the decontextualized input layer (i.e., it does not comprise hidden states) and should normally not be used. The extracted layers can then be aggregated with the textEmbedLayerAggregation function.
return_tokens
(boolean) If TRUE, provide the tokens used in the specified transformer model.
word_type_embeddings
(boolean) Whether to provide embeddings for each word/token type.
decontextualize
(boolean) Whether to decontextualize embeddings (i.e., embed one word at a time).
keep_token_embeddings
(boolean) Whether to keep token-level embeddings in the output (when using word-type aggregation).
device
Name of the device to use: "cpu", "gpu", "gpu:k", or "mps"/"mps:k" for macOS, where k is a specific device number.
tokenizer_parallelism
(boolean) If TRUE, turn on tokenizer parallelism (default FALSE).
model_max_length
The maximum length (in number of tokens) for the inputs to the transformer model (defaults to the value stored for the associated model).
max_token_to_sentence
(numeric) Maximum number of tokens in a string to handle before switching to embedding the text sentence by sentence (default 4).
logging_level
Set the logging level (default "error"). Options, ordered from less to more logging: "critical", "error", "warning", "info", "debug".
sort
(boolean) If TRUE, sort the output into a tidy format.
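A minimal usage sketch tying these arguments together; it assumes the text package and its Python backend have been installed and initialized (e.g., via textrpp_install() and textrpp_initialize()), and the example texts are made up for illustration:

library(text)

example_texts <- c(
  "I am feeling great today.",
  "The weather could be better."
)

# Extract layers 11 and 12 on the CPU, keep the tokens, and also
# request word-type embeddings alongside the token embeddings.
raw_layers <- textEmbedRawLayers(
  example_texts,
  model = "bert-base-uncased",
  layers = 11:12,
  return_tokens = TRUE,
  word_type_embeddings = TRUE,
  device = "cpu"
)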
Returns the hidden states/layers: a tibble with tokens, a column specifying the layer, and the word embeddings. Depending on the settings (e.g., return_tokens, word_type_embeddings, and decontextualize), the output can comprise up to three different components. Note that layer 0 is the input embedding to the transformer and should normally not be used.
See also textEmbedLayerAggregation and textEmbed.
# texts <- Language_based_assessment_data_8[1:2, 1:2]
# word_embeddings_with_layers <- textEmbedRawLayers(texts, layers = 11:12)
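Since the extracted layers are typically aggregated afterwards with textEmbedLayerAggregation, a sketch of that follow-up step is shown below. The element name context_tokens and the aggregation argument names are assumptions about the interface and may differ across package versions; consult ?textEmbedLayerAggregation for the current signature.

# Continuing the example above; element and argument names here are
# assumptions, so check ?textEmbedLayerAggregation before running.
# aggregated <- textEmbedLayerAggregation(
#   word_embeddings_with_layers$context_tokens,
#   layers = 11:12,
#   aggregation_layers = "concatenate",
#   aggregation_tokens = "mean"
# )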