cl.perf: Assess Classification Performances


View source: R/mt_accest.R

Description

Assess classification performances.

Usage

  cl.rate(obs, pre)
  cl.perf(obs, pre, pos=levels(as.factor(obs))[2])
  cl.roc(stat, label, pos=levels(as.factor(label))[2], plot=TRUE, ...)
  cl.auc(stat, label, pos=levels(as.factor(label))[2])

Arguments

obs

Factor or vector of observed class labels.

pre

Factor or vector of predicted class labels.

stat

Numeric vector of statistics (e.g. classifier scores or posterior probabilities) for the positives/cases.

label

Factor or vector of class labels.

pos

Character string indicating the positive level.

plot

Logical flag indicating whether ROC should be plotted.

...

Further arguments for plotting.

Details

cl.rate computes overall classification performance, including the accuracy and error rates, the confusion matrix and the kappa statistic. cl.perf computes classification performance measures such as the accuracy rate and false positive rate with respect to a chosen positive level. cl.roc computes the receiver operating characteristic (ROC) curve. cl.auc calculates the area under the ROC curve (AUC). cl.perf, cl.roc and cl.auc are for binary classification problems only.
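
For instance, a minimal sketch (assuming the mt package is attached; the toy labels below are illustrative only):

  obs <- factor(c("neg", "pos", "pos", "neg", "pos", "neg"))
  pre <- factor(c("neg", "pos", "neg", "neg", "pos", "pos"))
  cl.rate(obs, pre)                 ## accuracy, error rate, confusion matrix, kappa
  cl.perf(obs, pre, pos = "pos")    ## adds tpr, fpr, sens and spec for level "pos"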

Value

cl.rate returns a list with components:

acc

Accuracy rate of classification.

err

Error rate of classification.

con.mat

Confusion matrix.

kappa

Kappa statistic.

cl.perf returns a list with components:

acc

Accuracy rate.

tpr

True positive rate.

fpr

False positive rate.

sens

Sensitivity.

spec

Specificity.

con.mat

Confusion matrix.

kappa

Kappa statistic.

positive

Positive level.
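
For a 2 x 2 confusion matrix, these components follow the standard definitions (TP, FP, TN and FN below are illustrative names for the counts of true/false positives/negatives, not objects returned by cl.perf):

  acc  = (TP + TN) / (TP + TN + FP + FN)
  tpr  = sens = TP / (TP + FN)
  fpr  = FP / (FP + TN)
  spec = TN / (TN + FP) = 1 - fpr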

cl.roc returns a list with components:

perf

A data frame of acc, tpr, fpr, sens, spec and cutoff (thresholds).

auc

Area under the ROC curve.

positive

Positive level.

cl.auc returns a scalar value of AUC.
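
A minimal sketch (assuming the mt package is attached; the data are simulated for illustration) showing how the cl.roc components relate to cl.auc:

  set.seed(1)
  lab <- factor(rep(c("ctl", "case"), each = 10))
  sc  <- c(rnorm(10, mean = 0), rnorm(10, mean = 1))   ## higher scores for "case"
  roc <- cl.roc(sc, lab, pos = "case", plot = FALSE)
  head(roc$perf)        ## acc, tpr, fpr, sens, spec and cutoff
  roc$auc               ## should match cl.auc(sc, lab, pos = "case")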

Note

For a sensible model, AUC varies between 0.5 and 1.0; the higher, the better. If it is less than 0.5, it should be corrected as 1 - AUC, or the calculation re-run using 1 - stat.
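
A small sketch of this correction (assuming the mt package is attached and the usual rank-based AUC, so that reversing the statistic reverses the AUC; see also the end of the Examples):

  lab <- rep(0:1, each = 15)
  sc  <- rnorm(30)
  cl.auc(sc, lab, pos = levels(factor(lab))[2])        ## may fall below 0.5
  cl.auc(1 - sc, lab, pos = levels(factor(lab))[2])    ## the two values sum to 1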

Author(s)

Wanchang Lin

References

Fawcett, T. (2006) An introduction to ROC analysis. Pattern Recognition Letters, vol. 27, 861-874.

Examples

## Measurements of Forensic Glass Fragments
library(MASS)
data(fgl, package = "MASS")    # in MASS package
dat <- subset(fgl, grepl("WinF|WinNF",type))
## dat <- subset(fgl, type %in% c("WinF", "WinNF"))
x   <- subset(dat, select = -type)
y   <- factor(dat$type)

## construct train and test data 
idx   <- sample(1:nrow(x), round((2/3)*nrow(x)), replace = FALSE) 
tr.x  <- x[idx,]
tr.y  <- y[idx]
te.x  <- x[-idx,]        
te.y  <- y[-idx] 

model <- lda(tr.x, tr.y)

## predict the test data results
pred  <- predict(model, te.x)

## classification performances
obs <- te.y
pre <- pred$class   
cl.rate(obs, pre)
cl.perf(obs, pre, pos="WinNF")
## change positive as "WinF"
cl.perf(obs, pre, pos="WinF")

## ROC and AUC
pos  <- "WinNF"            ## or "WinF"
stat <- pred$posterior[,pos]
## levels(obs) <- c(0,1)

cl.auc(stat, obs, pos=pos)
cl.roc(stat, obs, pos=pos)

## test examples for ROC and AUC
label <- rbinom(30,size=1,prob=0.2)
stat  <- rnorm(30)
cl.roc(stat, label, pos=levels(factor(label))[2], plot=TRUE)
cl.auc(stat, label, pos=levels(factor(label))[2])

## if auc is less than 0.5, it should be adjusted by 1 - auc. 
## Or re-run them:
cl.roc(1 - stat, label, pos=levels(factor(label))[2], plot=TRUE)
cl.auc(1 - stat, label, pos=levels(factor(label))[2])
