BaseModelCore: Abstract class for all BaseModels


Abstract class for all BaseModels

Description

This class contains all methods shared by all BaseModels.

Value

Returns a new object of this class.

Super classes

aifeducation::AIFEMaster -> aifeducation::AIFEBaseModel -> BaseModelCore

Public fields

Tokenizer

('TokenizerBase')
An object of class TokenizerBase.

Methods

Public methods

Inherited methods

Method create_from_hf()

Creates a BaseModel from a pretrained model.

Usage
BaseModelCore$create_from_hf(model_dir = NULL, tokenizer_dir = NULL)
Arguments
model_dir

string Path to the directory where the model is stored. Allowed values: any

tokenizer_dir

string Path to the directory where the tokenizer is saved. Allowed values: any

Returns

Returns a new object of this class.
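A minimal sketch of this call, assuming a concrete subclass of BaseModelCore (here hypothetically named BaseModelBert) and local directories containing a Hugging Face model and tokenizer (the paths are illustrative):

```r
# Hypothetical subclass name and paths; adjust to your setup.
base_model <- BaseModelBert$new()
base_model$create_from_hf(
  model_dir = "models/bert-base-uncased",         # directory with the pretrained model
  tokenizer_dir = "tokenizers/bert-base-uncased"  # directory with the saved tokenizer
)
```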


Method train()

Trains a BaseModel.

Usage
BaseModelCore$train(
  text_dataset,
  p_mask = 0.15,
  whole_word = TRUE,
  val_size = 0.1,
  n_epoch = 1L,
  batch_size = 12L,
  max_sequence_length = 250L,
  full_sequences_only = FALSE,
  min_seq_len = 50L,
  learning_rate = 0.003,
  sustain_track = FALSE,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15L,
  sustain_log_level = "warning",
  trace = TRUE,
  pytorch_trace = 1L,
  log_dir = NULL,
  log_write_interval = 2L
)
Arguments
text_dataset

LargeDataSetForText Object storing textual data.

p_mask

double Ratio that determines the number of tokens used for masking. Allowed values: 0.05 <= x <= 0.6

whole_word

bool

  • TRUE: whole word masking is applied. Only relevant if a WordPieceTokenizer is used.

  • FALSE: token masking is used.

val_size

double between 0 and 1, indicating the proportion of cases which should be used for the validation sample during the estimation of the model. The remaining cases are part of the training data. Allowed values: 0 < x < 1

n_epoch

int Number of training epochs. Allowed values: 1 <= x

batch_size

int Size of the batches for training. Allowed values: 1 <= x

max_sequence_length

int Maximal number of tokens for every sequence. Allowed values: 20 <= x

full_sequences_only

bool TRUE for using only chunks with a sequence length equal to chunk_size.

min_seq_len

int Only relevant if full_sequences_only = FALSE. Value determines the minimal sequence length included in training process. Allowed values: 10 <= x

learning_rate

double Initial learning rate for the training. Allowed values: 0 < x <= 1

sustain_track

bool If TRUE energy consumption is tracked during training via the python library 'codecarbon'.

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any

sustain_region

string Region within a country. Only available for the USA and Canada. See the codecarbon documentation for more information: https://mlco2.github.io/codecarbon/parameters.html Allowed values: any

sustain_interval

int Interval in seconds for measuring power usage. Allowed values: 1 <= x

sustain_log_level

string Level for printing information to the console. Allowed values: 'debug', 'info', 'warning', 'error', 'critical'

trace

bool TRUE if information about the estimation phase should be printed to the console.

pytorch_trace

int pytorch_trace = 0 prints no information about the training process from PyTorch to the console; pytorch_trace = 1 enables this output. Allowed values: 0 <= x <= 1

log_dir

string Path to the directory where the log files should be saved. If no logging is desired set this argument to NULL. Allowed values: any

log_write_interval

int Time in seconds determining the interval in which the logger should try to update the log files. Only relevant if log_dir is not NULL. Allowed values: 1 <= x

Returns

Returns nothing. The model is trained and updated in place.
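A sketch of a training call under the assumption that base_model is an object of a concrete BaseModel subclass and text_data is a LargeDataSetForText holding the raw texts (both names are illustrative):

```r
# Sketch only: 'base_model' and 'text_data' are assumed to exist.
base_model$train(
  text_dataset = text_data,
  p_mask = 0.15,               # mask 15% of the tokens
  whole_word = TRUE,           # whole word masking (WordPiece tokenizers only)
  val_size = 0.1,              # hold out 10% of cases for validation
  n_epoch = 3L,
  batch_size = 12L,
  max_sequence_length = 250L,
  learning_rate = 0.003,
  trace = TRUE                 # print progress to the console
)
```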


Method count_parameter()

Method for counting the trainable parameters of a model.

Usage
BaseModelCore$count_parameter()
Returns

Returns the number of trainable parameters of the model.


Method plot_training_history()

Method for requesting a plot of the training history. This method requires the R package 'ggplot2' to work.

Usage
BaseModelCore$plot_training_history(
  x_min = NULL,
  x_max = NULL,
  y_min = NULL,
  y_max = NULL,
  ind_best_model = TRUE,
  text_size = 10L
)
Arguments
x_min

int Minimal value for the x-axis. Set to NULL for an automatic adjustment.

x_max

int Maximal value for the x-axis. Set to NULL for an automatic adjustment.

y_min

int Minimal value for the y-axis. Set to NULL for an automatic adjustment.

y_max

int Maximal value for the y-axis. Set to NULL for an automatic adjustment.

ind_best_model

bool If TRUE the plot indicates the best states of the model according to the chosen measure.

text_size

int Size of text elements. Allowed values: 1 <= x

Returns

Returns a plot of class ggplot visualizing the training process.
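A usage sketch, assuming base_model holds a trained model and the 'ggplot2' package is installed:

```r
# Request the training-history plot and display it.
p <- base_model$plot_training_history(
  y_min = 0,             # fix the lower bound of the loss axis
  ind_best_model = TRUE  # mark the best model states
)
print(p)
```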


Method get_special_tokens()

Method for retrieving the special tokens of the model.

Usage
BaseModelCore$get_special_tokens()
Returns

Returns a matrix containing the special tokens in the rows and their type, token, and id in the columns.


Method get_tokenizer_statistics()

Tokenizer statistics

Usage
BaseModelCore$get_tokenizer_statistics()
Returns

Returns a data.frame containing the tokenizer's statistics.


Method fill_mask()

Method for predicting the tokens behind mask tokens.

Usage
BaseModelCore$fill_mask(masked_text, n_solutions = 5L)
Arguments
masked_text

string Text with mask tokens. Allowed values: any

n_solutions

int Number of solutions the model should predict. Allowed values: 1 <= x

Returns

Returns a list containing a data.frame for every mask. The data.frame contains the solutions in the rows and reports the score, token id, and token string in the columns.
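A sketch of a fill-mask call, assuming base_model is a trained BaseModel whose tokenizer uses "[MASK]" as its mask token (check get_special_tokens() for the actual token of your model):

```r
# Predict candidate tokens for each mask in the text.
results <- base_model$fill_mask(
  masked_text = "The capital of France is [MASK].",
  n_solutions = 5L
)
# One data.frame per mask; rows are candidate tokens with score,
# token id, and token string in the columns.
results[[1]]
```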


Method save()

Method for saving a model on disk.

Usage
BaseModelCore$save(dir_path, folder_name)
Arguments
dir_path

string Path to the directory where the object should be saved. Allowed values: any

folder_name

string Name of the folder where the model should be saved. Allowed values: any

Returns

Function does not return a value. It is used to save an object on disk.


Method load_from_disk()

Loads an object from disk and updates the object to the current version of the package.

Usage
BaseModelCore$load_from_disk(dir_path)
Arguments
dir_path

string Path to the directory where the object is stored.

Returns

Function does not return a value. It loads an object from disk and updates it in place.
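A save-and-restore sketch; the subclass name BaseModelBert and the paths are hypothetical, and the layout of the saved folder is assumed to be dir_path/folder_name:

```r
# Persist the model to disk.
base_model$save(dir_path = "output", folder_name = "my_base_model")

# Later: create a fresh object and restore the saved state.
restored <- BaseModelBert$new()  # hypothetical concrete subclass
restored$load_from_disk(dir_path = "output/my_base_model")
```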


Method get_model()

Get 'PyTorch' model

Usage
BaseModelCore$get_model()
Returns

Returns the underlying 'PyTorch' model.


Method get_model_type()

Type of the underlying model.

Usage
BaseModelCore$get_model_type()
Returns

Returns a string describing the model's architecture.


Method get_final_size()

Size of the final layer.

Usage
BaseModelCore$get_final_size()
Returns

Returns an int describing the number of dimensions of the last hidden layer.


Method get_n_layers()

Number of layers.

Usage
BaseModelCore$get_n_layers()
Returns

Returns an int describing the number of layers available for embedding.


Method get_flops_estimates()

FLOP estimates

Usage
BaseModelCore$get_flops_estimates()
Returns

Returns a data.frame containing statistics about the FLOPs.


Method set_publication_info()

Method for setting the bibliographic information of the model.

Usage
BaseModelCore$set_publication_info(type, authors, citation, url = NULL)
Arguments
type

string Type of information which should be changed or added. Allowed values: 'developer', 'modifier'

authors

List of people.

citation

string Citation in free text.

url

string Corresponding URL if applicable.

Returns

Function does not return a value. It is used to set the private members for publication information of the model.
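A sketch for recording bibliographic information; the author list uses base R's person() on the assumption that "List of people" refers to person objects, which should be verified against the package:

```r
# Illustrative values only.
base_model$set_publication_info(
  type = "developer",
  authors = list(person(given = "Jane", family = "Doe")),
  citation = "Doe J. (2025). Example BaseModel.",
  url = "https://example.org/model"
)
```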


Method estimate_sustainability_inference_fill_mask()

Calculates the energy consumption for inference of the given task.

Usage
BaseModelCore$estimate_sustainability_inference_fill_mask(
  text_dataset = NULL,
  n_samples = NULL,
  sustain_iso_code = NULL,
  sustain_region = NULL,
  sustain_interval = 15L,
  sustain_log_level = "warning",
  trace = TRUE
)
Arguments
text_dataset

LargeDataSetForText Object storing textual data.

n_samples

int Number of samples. Allowed values: 1 <= x

sustain_iso_code

string ISO code (Alpha-3-Code) for the country. This variable must be set if sustainability should be tracked. A list can be found on Wikipedia: https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes. Allowed values: any

sustain_region

string Region within a country. Only available for the USA and Canada. See the codecarbon documentation for more information: https://mlco2.github.io/codecarbon/parameters.html Allowed values: any

sustain_interval

int Interval in seconds for measuring power usage. Allowed values: 1 <= x

sustain_log_level

string Level for printing information to the console. Allowed values: 'debug', 'info', 'warning', 'error', 'critical'

trace

bool TRUE if information about the estimation phase should be printed to the console.

Returns

Returns nothing. The method saves the statistics internally; they can be accessed with the method get_sustainability_data("inference").


Method calc_flops_architecture_based()

Calculates FLOPs based on the model's architecture.

Usage
BaseModelCore$calc_flops_architecture_based(batch_size, n_batches, n_epoch)
Arguments
batch_size

int Size of the batches for training. Allowed values: 1 <= x

n_batches

int Number of batches. Allowed values: 1 <= x

n_epoch

int Number of training epochs. Allowed values: 1 <= x

Returns

Returns a data.frame storing the estimates.
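A sketch of an architecture-based FLOP estimate; the argument values are illustrative and base_model is assumed to be a concrete BaseModel object:

```r
# Estimate training FLOPs from the architecture alone.
flops <- base_model$calc_flops_architecture_based(
  batch_size = 12L,
  n_batches = 500L,
  n_epoch = 3L
)
flops  # data.frame with the estimates
```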


Method clone()

The objects of this class are cloneable with this method.

Usage
BaseModelCore$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other R6 Classes for Developers: AIFEBaseModel, AIFEMaster, ClassifiersBasedOnTextEmbeddings, DataManagerClassifier, LargeDataSetBase, ModelsBasedOnTextEmbeddings, TEClassifiersBasedOnProtoNet, TEClassifiersBasedOnRegular, TokenizerBase


aifeducation documentation built on Nov. 19, 2025, 5:08 p.m.