Description

This function returns common statistical validity metrics used for testing the discriminative power and calibration of a predictive model.
Usage

modelValidity(data, model, class, train = FALSE, calib.graph = FALSE)
Arguments

data: a data frame containing the data to be tested on the model
model: the model to be tested
class: the name (as a string) of the outcome variable
train: a logical value indicating whether the data come from the same dataset used to train the model. Default is FALSE.
calib.graph: a logical value indicating whether the calibration graph should be generated. Default is FALSE.
Details

The modelValidity function returns a summary table with the validity metrics most commonly used in predictive modeling.
Value

A character matrix containing the following statistical metrics:

auc: the area under the ROC curve of the model
cimin: the lower bound of the 95% confidence interval of the AUC
cimax: the upper bound of the 95% confidence interval of the AUC
SRME: the square root mean error
precision: the precision of the model, also known in epidemiology as the positive predictive value (PPV)
recall: the recall of the model, also known in epidemiology as the sensitivity
fscore: the harmonic mean of precision and recall (F1-score)
NPV: the negative predictive value
D: Tjur's discriminative measure
TN: the true negative value
mmce: the mean misclassification error
Hosmer_Lemeshow: the Hosmer-Lemeshow index
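
Several of the classification metrics in this table follow directly from a 2x2 confusion matrix. The sketch below uses hypothetical counts (not output from modelValidity, which may apply its own internal probability cutoff) to show the conventional definitions:

```r
# Hypothetical confusion-matrix counts at an assumed 0.5 cutoff
TP <- 40; FP <- 10; FN <- 5; TN <- 45

precision <- TP / (TP + FP)                  # positive predictive value (PPV)
recall    <- TP / (TP + FN)                  # sensitivity
fscore    <- 2 * precision * recall / (precision + recall)  # harmonic mean (F1)
NPV       <- TN / (TN + FN)                  # negative predictive value
mmce      <- (FP + FN) / (TP + FP + FN + TN) # mean misclassification error
```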
Author(s)

Tomas Karpati M.D.
Examples

set.seed(123)
n <- 100
x <- rnorm(n)
xb <- x
pr <- exp(xb) / (1 + exp(xb))
y <- 1 * (runif(n) < pr)
mod <- glm(y ~ x, family = "binomial")
vt <- modelValidity(data.frame(x = x, y = y), mod, "y")
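
Building on the simulated data above, the D entry of the returned table (Tjur's discriminative measure) can be reproduced by hand. This sketch uses the conventional definition (mean fitted probability among events minus that among non-events) and is illustrative; modelValidity's exact internal computation is not shown in this page:

```r
# Same simulation as in the example above
set.seed(123)
n <- 100
x <- rnorm(n)
pr <- exp(x) / (1 + exp(x))
y <- 1 * (runif(n) < pr)

mod <- glm(y ~ x, family = "binomial")
p <- fitted(mod)                        # in-sample predicted probabilities

# Tjur's D: separation of predicted probabilities between outcome classes
D <- mean(p[y == 1]) - mean(p[y == 0])
```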