accuracy | R Documentation
Description:

accuracy() counts the number of true positive, false positive, true negative, and false negative cases for each predicted class and calculates precision, recall, and F1 score based on these counts. summary() calculates micro-average precision and recall, and macro-average precision and recall, based on the output of accuracy().
Usage:

accuracy(x, y)

## S3 method for class 'textmodel_wordmap_accuracy'
summary(object, ...)
Arguments:

x: a vector of predicted classes.
y: a vector of true classes.
object: output of accuracy().
...: not used.
Value:

accuracy() returns a data.frame with the following columns:

tp: the number of true positive cases.
fp: the number of false positive cases.
tn: the number of true negative cases.
fn: the number of false negative cases.
precision: tp / (tp + fp).
recall: tp / (tp + fn).
f1: the harmonic mean of precision and recall.
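As a sketch of how these columns relate, precision, recall, and F1 can be recomputed by hand from the counts using the standard definitions (the helper below is illustrative, not part of the package):

```r
# Illustrative only: recompute precision, recall, and F1 from the
# per-class counts using the standard definitions.
prf <- function(tp, fp, fn) {
  precision <- tp / (tp + fp)
  recall <- tp / (tp + fn)
  f1 <- 2 * precision * recall / (precision + recall)
  c(precision = precision, recall = recall, f1 = f1)
}
prf(tp = 2, fp = 1, fn = 1)  # precision 2/3, recall 2/3, f1 2/3
```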
summary() returns a named numeric vector with the following elements:

p: micro-average precision.
r: micro-average recall.
P: macro-average precision.
R: macro-average recall.
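The difference between the two averages can be sketched from the per-class counts: micro-averaging pools the counts across all classes before computing precision and recall, while macro-averaging computes them per class and then takes the unweighted mean (standard definitions; the helper below is illustrative, not part of the package):

```r
# Illustrative only: micro- vs macro-average precision and recall
# from a data.frame of per-class counts (tp, fp, fn columns).
micro_macro <- function(counts) {
  c(p = sum(counts$tp) / sum(counts$tp + counts$fp),  # micro-average precision
    r = sum(counts$tp) / sum(counts$tp + counts$fn),  # micro-average recall
    P = mean(counts$tp / (counts$tp + counts$fp)),    # macro-average precision
    R = mean(counts$tp / (counts$tp + counts$fn)))    # macro-average recall
}
counts <- data.frame(tp = c(2, 1), fp = c(1, 0), fn = c(0, 2))
micro_macro(counts)
```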
Examples:

class_pred <- c('US', 'GB', 'US', 'CN', 'JP', 'FR', 'CN') # predicted classes
class_true <- c('US', 'FR', 'US', 'CN', 'KP', 'EG', 'US') # true classes
acc <- accuracy(class_pred, class_true)
print(acc)
summary(acc)