CalculatePref: Calculate Performance Metrics

View source: R/BinomialModel.R

Calculate Performance Metrics

Description

Computes performance metrics such as the true positive rate (TPR) and false positive rate (FPR) for model predictions. The function uses the ROCR package to evaluate the model across classification thresholds, assessing its discriminative ability at the regularization strength(s) specified by 's'.
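
A minimal sketch of the underlying ROCR workflow (the objects 'probs' and 'labels' are illustrative stand-ins, not the function's actual internals):

library(ROCR)
probs  <- c(0.9, 0.2, 0.8, 0.4, 0.7)   # predicted probabilities
labels <- c(1, 0, 1, 0, 1)             # actual binary outcomes
pred <- prediction(probs, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")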

Usage

CalculatePref(model, newx, s, acture.y)

Arguments

model

A model object used to generate predictions.

newx

A matrix or data frame of new data on which the model will predict.

s

A character string (or character vector) specifying the regularization strength(s) at which the model is evaluated. Common values are 'lambda.min' and 'lambda.1se' for cross-validated models fitted with glmnet (see the sketch below the argument list).

acture.y

A vector containing the actual binary outcomes (0 or 1) corresponding to 'newx'.
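
For reference, 'lambda.min' and 'lambda.1se' come from cross-validated glmnet fits; a brief sketch with simulated data:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)   # simulated predictors
y <- rbinom(100, 1, 0.5)              # simulated binary outcomes
cvfit <- cv.glmnet(x, y, family = "binomial")
cvfit$lambda.min                      # lambda that minimizes CV error
cvfit$lambda.1se                      # largest lambda within 1 SE of the minimum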

Value

A performance object from the ROCR package, which includes true positive and false positive rates.

Examples

# Assuming 'fitted_model', 'new_data', and 'actual_outcomes' are predefined:
perf_metrics <- CalculatePref(model = fitted_model, newx = new_data,
                              s = "lambda.min", acture.y = actual_outcomes)
print(perf_metrics)
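
A self-contained sketch with simulated data, assuming CalculatePref accepts a cross-validated glmnet fit (all data and object names here are illustrative):

library(glmnet)
library(ROCR)
set.seed(42)
x_train <- matrix(rnorm(200 * 10), 200, 10)   # simulated training predictors
y_train <- rbinom(200, 1, 0.5)                # simulated binary outcomes
x_test  <- matrix(rnorm(50 * 10), 50, 10)     # simulated test predictors
y_test  <- rbinom(50, 1, 0.5)
fitted_model <- cv.glmnet(x_train, y_train, family = "binomial")
perf_metrics <- CalculatePref(model = fitted_model, newx = x_test,
                              s = "lambda.min", acture.y = y_test)
plot(perf_metrics)   # ROCR performance objects have a plot method (ROC curve)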
