| BaseModelMPNet | R Documentation |
Represents models based on MPNet.
Returns a new object of this class.
aifeducation::AIFEMaster -> aifeducation::AIFEBaseModel -> aifeducation::BaseModelCore -> BaseModelMPNet
Inherited methods:

* aifeducation::AIFEMaster$get_all_fields()
* aifeducation::AIFEMaster$get_documentation_license()
* aifeducation::AIFEMaster$get_ml_framework()
* aifeducation::AIFEMaster$get_model_config()
* aifeducation::AIFEMaster$get_model_description()
* aifeducation::AIFEMaster$get_model_info()
* aifeducation::AIFEMaster$get_model_license()
* aifeducation::AIFEMaster$get_package_versions()
* aifeducation::AIFEMaster$get_private()
* aifeducation::AIFEMaster$get_publication_info()
* aifeducation::AIFEMaster$get_sustainability_data()
* aifeducation::AIFEMaster$is_configured()
* aifeducation::AIFEMaster$is_trained()
* aifeducation::AIFEMaster$set_documentation_license()
* aifeducation::AIFEMaster$set_model_description()
* aifeducation::AIFEMaster$set_model_license()
* aifeducation::BaseModelCore$calc_flops_architecture_based()
* aifeducation::BaseModelCore$count_parameter()
* aifeducation::BaseModelCore$create_from_hf()
* aifeducation::BaseModelCore$estimate_sustainability_inference_fill_mask()
* aifeducation::BaseModelCore$fill_mask()
* aifeducation::BaseModelCore$get_final_size()
* aifeducation::BaseModelCore$get_flops_estimates()
* aifeducation::BaseModelCore$get_model()
* aifeducation::BaseModelCore$get_model_type()
* aifeducation::BaseModelCore$get_n_layers()
* aifeducation::BaseModelCore$get_special_tokens()
* aifeducation::BaseModelCore$get_tokenizer_statistics()
* aifeducation::BaseModelCore$load_from_disk()
* aifeducation::BaseModelCore$plot_training_history()
* aifeducation::BaseModelCore$save()
* aifeducation::BaseModelCore$set_publication_info()

configure()
Configures a new object of this class. Please ensure that your chosen configuration complies with the following guideline:
hidden_size is a multiple of num_attention_heads.
BaseModelMPNet$configure( tokenizer, max_position_embeddings = 512L, hidden_size = 768L, num_hidden_layers = 12L, num_attention_heads = 12L, intermediate_size = 3072L, hidden_act = "GELU", hidden_dropout_prob = 0.1, attention_probs_dropout_prob = 0.1 )
tokenizer (TokenizerBase) Tokenizer for the model.
max_position_embeddings (int) Number of maximum position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model. Allowed values: 10 <= x <= 4048
hidden_size (int) Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x <= 2048
num_hidden_layers (int) Number of hidden layers. Allowed values: 1 <= x
num_attention_heads (int) Number of attention heads for each self-attention layer. Only relevant if attention_type = 'multihead'. Allowed values: 0 <= x
intermediate_size (int) Size of the projection layer within each transformer encoder. Allowed values: 1 <= x
hidden_act (string) Name of the activation function. Allowed values: 'GELU', 'relu', 'silu', 'gelu_new'
hidden_dropout_prob (double) Ratio of dropout. Allowed values: 0 <= x <= 0.6
attention_probs_dropout_prob (double) Ratio of dropout for attention probabilities. Allowed values: 0 <= x <= 0.6
Returns nothing. The method is called for its side effects.
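A minimal usage sketch of configure(), using the default values shown in the signature above. The tokenizer object (here named my_tokenizer) is an assumption: it stands for a TokenizerBase object created beforehand with the package's tokenizer classes.

```r
library(aifeducation)

# Create an unconfigured MPNet base model.
base_model <- BaseModelMPNet$new()

# Configure the architecture. Note the guideline above:
# hidden_size must be a multiple of num_attention_heads.
base_model$configure(
  tokenizer = my_tokenizer,          # hypothetical TokenizerBase object
  max_position_embeddings = 512L,    # also caps the processable sequence length
  hidden_size = 768L,                # 768 / 12 heads = 64 dimensions per head
  num_hidden_layers = 12L,
  num_attention_heads = 12L,
  intermediate_size = 3072L,
  hidden_act = "GELU",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1
)
```

After configuration, is_configured() (inherited from AIFEMaster) can be used to verify the object's state before training.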
train()
Trains a BaseModel.
BaseModelMPNet$train( text_dataset, p_mask = 0.15, p_perm = 0.15, whole_word = TRUE, val_size = 0.1, n_epoch = 1L, batch_size = 12L, max_sequence_length = 250L, full_sequences_only = FALSE, min_seq_len = 50L, learning_rate = 0.003, sustain_track = FALSE, sustain_iso_code = NULL, sustain_region = NULL, sustain_interval = 15L, sustain_log_level = "warning", trace = TRUE, pytorch_trace = 1L, log_dir = NULL, log_write_interval = 2L )
text_dataset (LargeDataSetForText) Object storing textual data.
p_mask (double) Ratio that determines the number of tokens used for masking. Allowed values: 0.05 <= x <= 0.6
p_perm (double) Ratio that determines the number of tokens used for permutation. Allowed values: 0.05 <= x <= 0.6
whole_word (bool)
* TRUE: whole word masking is applied. Only relevant if a WordPieceTokenizer is used.
* FALSE: token masking is used.
val_size (double) Proportion of cases used for the validation sample during the estimation of the model. The remaining cases are part of the training data. Allowed values: 0 < x < 1
n_epoch (int) Number of training epochs. Allowed values: 1 <= x
batch_size (int) Size of the batches for training. Allowed values: 1 <= x
max_sequence_length (int) Maximal number of tokens for every sequence. Allowed values: 20 <= x
full_sequences_only (bool) TRUE to use only chunks with a sequence length equal to chunk_size.
min_seq_len (int) Only relevant if full_sequences_only = FALSE. Determines the minimal sequence length included in the training process. Allowed values: 10 <= x
learning_rate (double) Initial learning rate for the training. Allowed values: 0 < x <= 1
sustain_track (bool) If TRUE, energy consumption is tracked during training via the python library 'codecarbon'.
sustain_iso_code (string) ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability is to be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any
sustain_region (string) Region within a country. Only available for the USA and Canada. See the documentation of codecarbon for more information: https://mlco2.github.io/codecarbon/parameters.html. Allowed values: any
sustain_interval (int) Interval in seconds for measuring power usage. Allowed values: 1 <= x
sustain_log_level (string) Level for printing information to the console. Allowed values: 'debug', 'info', 'warning', 'error', 'critical'
trace (bool) TRUE if information about the estimation phase should be printed to the console.
pytorch_trace (int) pytorch_trace = 0 suppresses any information about the training process from pytorch on the console. Allowed values: 0 <= x <= 1
log_dir (string) Path to the directory where the log files should be saved. If no logging is desired, set this argument to NULL. Allowed values: any
log_write_interval (int) Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL. Allowed values: 1 <= x
Returns nothing. The method is called for its side effects.
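A hedged sketch of a train() call on a configured model, showing a subset of the arguments above (the rest keep their defaults). The objects base_model and text_data are assumptions: a configured BaseModelMPNet and a LargeDataSetForText built from your corpus.

```r
# Train the configured MPNet base model with masked and permuted
# language modeling. Sustainability tracking is disabled here;
# enable it by setting sustain_track = TRUE and sustain_iso_code.
base_model$train(
  text_dataset = text_data,     # hypothetical LargeDataSetForText object
  p_mask = 0.15,                # share of tokens masked
  p_perm = 0.15,                # share of tokens permuted
  whole_word = TRUE,            # whole word masking (WordPieceTokenizer only)
  val_size = 0.1,               # 10% of cases held out for validation
  n_epoch = 3L,
  batch_size = 12L,
  max_sequence_length = 250L,
  learning_rate = 0.003,
  sustain_track = FALSE,
  trace = TRUE
)
```

After training, the inherited plot_training_history() can be used to inspect the loss curves, and save() writes the model to disk.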
clone()The objects of this class are cloneable with this method.
BaseModelMPNet$clone(deep = FALSE)
deep (bool) Whether to make a deep clone.
Song, K., Tan, X., Qin, T., Lu, J. & Liu, T.-Y. (2020). MPNet: Masked and Permuted Pre-training for Language Understanding. doi:10.48550/arXiv.2004.09297
Other Base Model:
BaseModelBert,
BaseModelDebertaV2,
BaseModelFunnel,
BaseModelModernBert,
BaseModelRoberta