Description
mtr_cohen_kappa computes Cohen's Kappa statistic, which measures inter-rater agreement for categorical items.
The Kappa statistic has the form Kappa = (O - E) / (1 - E), where O is the observed accuracy and E is the expected accuracy based on the marginal totals of the confusion matrix. The statistic can take values between -1 and 1: a value of 0 means the agreement between the actual and predicted classes is no better than chance, while a value of 1 indicates perfect concordance between the model predictions and the observed classes.
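To make the formula concrete, the following is a minimal sketch of how such a statistic can be computed for 0/1 labels; the function name kappa_sketch, and the choice of >= for the cutoff comparison, are illustrative assumptions rather than the package's actual implementation.

kappa_sketch <- function(actual, predicted, cutoff = 0.5) {
  # Threshold probabilities into 0/1 class labels (>= is an assumption)
  pred_class <- as.integer(predicted >= cutoff)
  # 2 x 2 confusion matrix of actual vs predicted classes
  cm <- table(factor(actual, levels = 0:1),
              factor(pred_class, levels = 0:1))
  n <- sum(cm)
  O <- sum(diag(cm)) / n                     # observed accuracy
  E <- sum(rowSums(cm) * colSums(cm)) / n^2  # expected accuracy from marginals
  (O - E) / (1 - E)
}

kappa_sketch(c(1, 0, 1, 0, 1), c(0.1, 0.9, 0.3, 0.5, 0.2))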
Usage

mtr_cohen_kappa(actual, predicted, cutoff = 0.5)
Arguments

actual      A vector of actual (observed) class labels.

predicted   A vector of predicted probabilities.

cutoff      A numeric threshold used to convert predicted probabilities into class labels; defaults to 0.5.
Value

A numeric scalar: the value of the Kappa statistic.
Author(s)

An Chu
References

Max Kuhn and Kjell Johnson, Applied Predictive Modeling (New York: Springer-Verlag, 2013).

"Classification - Cohen’s Kappa in Plain English", Cross Validated.
Examples

act <- c(1, 0, 1, 0, 1)
pred <- c(0.1, 0.9, 0.3, 0.5, 0.2)
mtr_cohen_kappa(act, pred)

set.seed(2093)
pred <- runif(1000)
act <- round(pred)
pred[sample(1000, 300)] <- runif(300)  # add noise to 300 predictions
mtr_cohen_kappa(act, pred)