BaseModelMPNet: MPNet


MPNet

Description

Represents models based on MPNet.

Value

Returns a new object of this class.

Super classes

aifeducation::AIFEMaster -> aifeducation::AIFEBaseModel -> aifeducation::BaseModelCore -> BaseModelMPNet

Methods

Public methods


Method configure()

Configures a new object of this class. Please ensure that your chosen configuration complies with the following guidelines:

  • hidden_size is a multiple of num_attention_heads.

Usage
BaseModelMPNet$configure(
  tokenizer,
  max_position_embeddings = 512L,
  hidden_size = 768L,
  num_hidden_layers = 12L,
  num_attention_heads = 12L,
  intermediate_size = 3072L,
  hidden_act = "GELU",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1
)
Arguments
tokenizer

TokenizerBase Tokenizer for the model.

max_position_embeddings

int Number of maximum position embeddings. This parameter also determines the maximum length of a sequence which can be processed with the model. Allowed values: 10 <= x <= 4048

hidden_size

int Number of neurons in each layer. This parameter determines the dimensionality of the resulting text embedding. Allowed values: 1 <= x <= 2048

num_hidden_layers

int Number of hidden layers. Allowed values: 1 <= x

num_attention_heads

int Number of attention heads for a self-attention layer. Only relevant if attention_type = 'multihead'. Allowed values: 0 <= x

intermediate_size

int Size of the projection layer within each transformer encoder. Allowed values: 1 <= x

hidden_act

string Name of the activation function. Allowed values: 'GELU', 'relu', 'silu', 'gelu_new'

hidden_dropout_prob

double Ratio of dropout. Allowed values: 0 <= x <= 0.6

attention_probs_dropout_prob

double Ratio of dropout for attention probabilities. Allowed values: 0 <= x <= 0.6

Returns

Returns nothing. The method changes the object itself.
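
Example

A minimal sketch of configuring a new model. The creation of the object via $new() and the tokenizer object my_tokenizer are illustrative assumptions, not part of this documentation:

base_model <- BaseModelMPNet$new()
base_model$configure(
  tokenizer = my_tokenizer,  # an existing TokenizerBase object (assumed)
  max_position_embeddings = 512L,
  hidden_size = 768L,  # must be a multiple of num_attention_heads
  num_hidden_layers = 12L,
  num_attention_heads = 12L,
  intermediate_size = 3072L,
  hidden_act = "GELU",
  hidden_dropout_prob = 0.1,
  attention_probs_dropout_prob = 0.1
)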


Method train()

Trains a base model.

Usage
BaseModelMPNet$train(
  text_dataset,
  p_mask = 0.15,
  p_perm = 0.15,
  whole_word = TRUE,
  val_size = 0.1,
  n_epoch = 1L,
  batch_size = 12L,
  max_sequence_length = 250L,
  full_sequences_only = FALSE,
  min_seq_len = 50L,
  learning_rate = 0.003,
  sustain_track = FALSE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15L,
  sustain_log_level = "warning",
  trace = TRUE,
  pytorch_trace = 1L,
  log_dir = NULL,
  log_write_interval = 2L
)
Arguments
text_dataset

LargeDataSetForText Object storing textual data.

p_mask

double Ratio that determines the number of tokens used for masking. Allowed values: 0.05 <= x <= 0.6

p_perm

double Ratio that determines the number of tokens used for permutation. Allowed values: 0.05 <= x <= 0.6

whole_word

bool

  • TRUE: whole word masking is applied. Only relevant if a WordPieceTokenizer is used.

  • FALSE: token masking is used.

val_size

double between 0 and 1, indicating the proportion of cases which should be used for the validation sample during the estimation of the model. The remaining cases are part of the training data. Allowed values: 0 < x < 1

n_epoch

int Number of training epochs. Allowed values: 1 <= x

batch_size

int Size of the batches for training. Allowed values: 1 <= x

max_sequence_length

int Maximal number of tokens for every sequence. Allowed values: 20 <= x

full_sequences_only

bool If TRUE, only chunks whose sequence length equals max_sequence_length are used for training.

min_seq_len

int Only relevant if full_sequences_only = FALSE. Value determines the minimal sequence length included in the training process. Allowed values: 10 <= x

learning_rate

double Initial learning rate for the training. Allowed values: 0 < x <= 1

sustain_track

bool If TRUE, energy consumption is tracked during training via the Python library 'codecarbon'.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any

sustain_region

string Region within a country. Only available for USA and Canada. See the documentation of 'codecarbon' for more information: https://mlco2.github.io/codecarbon/parameters.html. Allowed values: any

sustain_interval

int Interval in seconds for measuring power usage. Allowed values: 1 <= x

sustain_log_level

string Level for printing information to the console. Allowed values: 'debug', 'info', 'warning', 'error', 'critical'

trace

bool TRUE if information about the estimation phase should be printed to the console.

pytorch_trace

int If set to 0, no information about the training process from pytorch is printed to the console. Allowed values: 0 <= x <= 1

log_dir

string Path to the directory where the log files should be saved. If no logging is desired, set this argument to NULL. Allowed values: any

log_write_interval

int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL. Allowed values: 1 <= x

Returns

Returns nothing. The method changes the object itself.
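
Example

A minimal sketch of training a configured model. The objects base_model and text_data are illustrative assumptions; text_data stands for an existing LargeDataSetForText:

base_model$train(
  text_dataset = text_data,  # an existing LargeDataSetForText (assumed)
  p_mask = 0.15,             # 15% of tokens used for masking
  p_perm = 0.15,             # 15% of tokens used for permutation
  whole_word = TRUE,
  val_size = 0.1,            # 10% of cases reserved for validation
  n_epoch = 1L,
  batch_size = 12L,
  max_sequence_length = 250L,
  learning_rate = 0.003
)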


Method clone()

The objects of this class are cloneable with this method.

Usage
BaseModelMPNet$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.
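
For example, a deep clone creates an independent copy whose nested R6 objects are also copied (standard R6 behavior; base_model is an illustrative name):

model_copy <- base_model$clone(deep = TRUE)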

References

Song, K., Tan, X., Qin, T., Lu, J. & Liu, T.-Y. (2020). MPNet: Masked and Permuted Pre-training for Language Understanding. https://doi.org/10.48550/arXiv.2004.09297

See Also

Other Base Model: BaseModelBert, BaseModelDebertaV2, BaseModelFunnel, BaseModelModernBert, BaseModelRoberta

