modelValidity: Return discriminative and calibration measures for predictive...

Description Usage Arguments Details Value Author(s) Examples

Description

This function returns common statistical validity metrics used for testing the discriminative power and calibration of a predictive model.

Usage

modelValidity(data, model, class, train = FALSE, calib.graph = FALSE)

Arguments

data

a data frame containing the data to be tested on the model

model

the model to be tested

class

the name (as a string) of the outcome variable

train

a logical value indicating whether the data come from the same dataset used to train the model. Default is FALSE

calib.graph

a logical value indicating whether a calibration graph should be generated. Default is FALSE

Details

The modelValidity function returns a summary table with the validity metrics most commonly used in predictive modeling.

Value

A character matrix containing the following statistical metrics:

auc

the area under the ROC curve (AUC) of the model

cimin

the lower bound of the 95% confidence interval of the AUC

cimax

the upper bound of the 95% confidence interval of the AUC

SRME

the square root of the mean squared error (RMSE)

precision

the precision of the model, also known in epidemiology as the positive predictive value (PPV)

recall

the recall of the model, also known in epidemiology as the sensitivity

fscore

the harmonic mean of precision and recall (F1-score)

NPV

the negative predictive value

D

Tjur's discriminative measure (coefficient of discrimination)

TN

the true negative value

mmce

the mean misclassification error

Hosmer_Lemeshow

the Hosmer-Lemeshow index
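The classification metrics above follow their standard confusion-matrix definitions. As a minimal sketch in base R of how precision, recall, fscore, NPV, and mmce could be computed by hand (the 0.5 classification cutoff and the simulated data are assumptions for illustration; modelValidity computes these internally):

```r
set.seed(123)
n <- 100
x <- rnorm(n)
y <- 1 * (runif(n) < plogis(x))      # simulated binary outcome
mod <- glm(y ~ x, family = "binomial")
pred <- as.integer(predict(mod, type = "response") >= 0.5)

TP <- sum(pred == 1 & y == 1)        # true positives
FP <- sum(pred == 1 & y == 0)        # false positives
TN <- sum(pred == 0 & y == 0)        # true negatives
FN <- sum(pred == 0 & y == 1)        # false negatives

precision <- TP / (TP + FP)          # positive predictive value (PPV)
recall    <- TP / (TP + FN)          # sensitivity
fscore    <- 2 * precision * recall / (precision + recall)  # harmonic mean
NPV       <- TN / (TN + FN)          # negative predictive value
mmce      <- mean(pred != y)         # mean misclassification error
```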

Author(s)

Tomas Karpati M.D.

Examples

set.seed(123)
n <- 100
x <- rnorm(n)                          # single continuous predictor
xb <- x                                # linear predictor
pr <- exp(xb) / (1 + exp(xb))          # logistic probabilities
y <- 1 * (runif(n) < pr)               # simulated binary outcome
mod <- glm(y ~ x, family = "binomial")
vt <- modelValidity(data.frame(x = x, y = y), mod, "y")
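The D value reported by modelValidity is Tjur's coefficient of discrimination: the mean predicted probability among observed events minus the mean among non-events. A self-contained sketch of that definition (illustrative only, not the package's internal code):

```r
set.seed(123)
n <- 100
x <- rnorm(n)
y <- 1 * (runif(n) < plogis(x))        # simulated binary outcome
mod <- glm(y ~ x, family = "binomial")

p <- predict(mod, type = "response")   # fitted probabilities
D <- mean(p[y == 1]) - mean(p[y == 0]) # Tjur's discriminative measure
```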

mechkar documentation built on March 13, 2020, 2:30 a.m.