
metrics

Why another package for evaluating machine learning models?

Because I believe there’s still a niche for an R package that has all of the following traits in one place:

Why do I think so? While doing evaluation work on a machine learning project, I found that no single R package is on a par with scikit-learn’s metrics module in terms of coverage, ease of use, thorough testing, and documentation richness. I’m not saying that the existing packages are terrible; rather, each is designed for a specific use case or problem, with varying quality.

Overview of metrics

Installation

Install the stable version of metrics from CRAN:

install.packages("metrics")

Or install the development version from GitHub with:

devtools::install_github("chuvanan/metrics")

Getting started

All metrics functions share the same interface, mtr_fun(actual, predicted), which applies to both classification and regression settings.
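To illustrate the actual-first, predicted-second convention in a regression setting, here is a base-R sketch that computes root mean squared error by hand (it uses no package functions, so the variable and metric names here are not part of the package’s API):

```r
## simulate a simple regression setting
set.seed(42)
actual <- rnorm(100)
predicted <- actual + rnorm(100, sd = 0.5)   # predictions with some error

## root mean squared error, computed in base R to show the
## actual-first, predicted-second argument order
rmse <- sqrt(mean((actual - predicted)^2))
rmse
```

A package regression metric would be called the same way, with the ground-truth vector first and the predictions second.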

Here’s a quick example:

library(metrics)

## simulate sample data set
set.seed(123)
preds <- runif(1000)
truth <- round(preds)
preds[sample(1000, 300)] <- runif(300) # noise

## overall accuracy
mtr_accuracy(truth, preds)              # default threshold is 0.5

## [1] 0.838

## precision
mtr_precision(truth, preds)

## [1] 0.82643

## recall
mtr_recall(truth, preds)

## [1] 0.8498986

## AUROC
mtr_auc_roc(truth, preds)

## [1] 0.8260939
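As a sanity check, the default 0.5 threshold mentioned above can be reproduced in base R by thresholding the predictions yourself (this snippet rebuilds the simulated data and uses only base functions, so it runs without the package):

```r
## rebuild the simulated data from above
set.seed(123)
preds <- runif(1000)
truth <- round(preds)
preds[sample(1000, 300)] <- runif(300)   # noise

## accuracy at the default 0.5 threshold, computed by hand;
## this should agree with mtr_accuracy(truth, preds)
mean(truth == as.integer(preds >= 0.5))
```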


chuvanan/metrics documentation built on Nov. 4, 2019, 8:52 a.m.