Introduction to TrustworthyMLR

knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

Why TrustworthyMLR?

In modern machine learning, we often optimize for performance metrics such as accuracy or AUC. In sensitive domains like healthcare or finance, however, reliability matters just as much: a model whose predictions shift substantially in response to a small change in the training data or a tiny amount of input noise is not "trustworthy."

TrustworthyMLR provides tools to quantify these dimensions of reliability.

Installation

devtools::install_github("ahamza-msse25mcs/TrustworthyMLR")
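
The call above assumes the devtools package is already available; if it is not, it can be installed from CRAN first:

install.packages("devtools")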

Core Metrics

1. Stability Index

The Stability Index measures how consistent a model's predictions are across different training runs or resamples. An index of 1.0 indicates a perfectly stable model.

library(TrustworthyMLR)

# Simulate 5 runs of predictions
set.seed(42)
base <- rnorm(100)
preds <- matrix(rep(base, 5) + rnorm(500, sd = 0.1), ncol = 5)

# Calculate Stability
stability_index(preds)

# Visualize Stability
plot_stability(preds, main = "Prediction Stability Across Runs")
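
stability_index() summarises how closely the five runs agree. As a rough intuition only, one simple stability measure is the average pairwise correlation between the prediction columns; the snippet below illustrates that idea and is not necessarily the formula the package uses.

# Illustration only: average pairwise Pearson correlation across runs
# (this may differ from how stability_index() is actually defined)
cor_mat <- cor(preds)              # 5 x 5 correlation matrix between runs
mean(cor_mat[lower.tri(cor_mat)])  # average agreement between distinct runs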

2. Robustness Score

The Robustness Score measures how sensitive a model is to input perturbations (noise).

# Define a simple linear model
predict_fn <- function(X) X %*% c(1, -2, 3)

# Simulated input data: 100 observations, 3 features
X <- matrix(rnorm(300), ncol = 3)

# Robustness Score
robustness_score(predict_fn, X, noise_level = 0.05)

# Visualize Robustness Decay
plot_robustness(predict_fn, X, main = "Robustness Decay Curve")
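
Conceptually, a robustness check compares predictions on the original inputs with predictions on inputs perturbed by small Gaussian noise. The snippet below sketches that comparison by hand; it is an illustration only and not necessarily the definition used by robustness_score().

# Illustration only: compare clean vs. noise-perturbed predictions
# (the actual definition used by robustness_score() may differ)
set.seed(1)
X_noisy <- X + matrix(rnorm(length(X), sd = 0.05), nrow = nrow(X))
delta   <- abs(predict_fn(X_noisy) - predict_fn(X))
mean(delta)  # average change in predictions caused by the perturbation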

Conclusion

By incorporating TrustworthyMLR into your validation pipeline, you can assess whether your models are not only accurate but also stable and robust enough for real-world deployment.


