eval.stats: Predictive Modelling Evaluation Statistics

View source: R/evalStats.R


Description

Computes a set of evaluation statistics for regression predictions, combining standard metrics with metrics tailored to the imbalanced regression problem. Returns a structure containing the results of each metric.

Usage

eval.stats(formula, train, test, y_pred, phi.parms = NULL, cf = 1.5)

Arguments

formula

A model formula

train

A data.frame object with the training data

test

A data.frame object with the test set

y_pred

A vector with the predictions of a given model

phi.parms

The relevance function, given as the set of control points where the pairs of value-relevance are known (see ?phi.control for more information). If NULL (the default), a relevance function is estimated from the target variable of the data.frame in parameter train

cf

The coefficient used to compute the boxplot whiskers when a relevance function is not supplied via parameter phi.parms (default is 1.5)
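As a sketch of how these two arguments interact (using the accel dataset shipped with IRon), a relevance function can either be built explicitly with phi.control or left for eval.stats to derive from train. The boxplot comparison below is an illustration of the whisker-coefficient idea, not the package's exact internal rule:

```r
library(IRon)

data(accel)

## Explicit relevance function: phi.control() estimates the control points
## (value-relevance pairs) from the target variable.
ph <- phi.control(accel$acceleration)

## Boxplot intuition for cf (assumption: when phi.parms is NULL, relevance
## is derived from a boxplot-style rule where cf is the whisker coefficient).
## With coef = 3, only extreme outliers fall beyond the whiskers.
bs <- grDevices::boxplot.stats(accel$acceleration, coef = 3)
bs$out  ## values beyond the cf = 3 whiskers
```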

Value

A list with four slots for the results of standard and relevance-based evaluation metrics

overall

Results for the standard metrics MAE, MSE and RMSE, along with Pearson's correlation, bias, variance and the Squared Error Relevance Area (SERA) metric.
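A minimal sketch of inspecting the returned structure. The baseline predictor below avoids the earth dependency used in the Examples; slot names other than overall, and the exact metric names, are assumptions based on the description above:

```r
library(IRon)

data(accel)
form <- acceleration ~ .

set.seed(1)
ind <- sample(seq_len(nrow(accel)), floor(0.75 * nrow(accel)))
train <- accel[ind, ]
test  <- accel[-ind, ]

## A trivial mean predictor, just to produce a y_pred vector:
preds <- rep(mean(train$acceleration), nrow(test))

res <- eval.stats(form, train, test, preds)
str(res)       ## list the slots of the returned structure
res$overall    ## standard metrics: MAE, MSE, RMSE, correlation, bias, variance, SERA
```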

Examples

library(IRon)

if(requireNamespace("earth", quietly = TRUE)) {

   data(accel)

   form <- acceleration ~ .

   set.seed(1)

   ind <- sample(seq_len(nrow(accel)), floor(0.75 * nrow(accel)))

   train <- accel[ind,]
   test <- accel[-ind,]

   ph <- phi.control(accel$acceleration)

   m <- earth::earth(form, train)
   preds <- as.vector(predict(m,test))

   eval.stats(form, train, test, preds)
   eval.stats(form, train, test, preds, ph)
   eval.stats(form, train, test, preds, ph, cf=3) # Focusing on extreme outliers

}



nunompmoniz/IRon documentation built on April 24, 2023, 1:20 p.m.