Description Usage Arguments Details Value See Also
Evaluate the performance of the prediction using different criteria.
evalPred(object, test.Y, cutoff = 0.5, type = c("Binary", "Continuous")[1],
  output = ifelse(type == "Continuous", list("mse"), list(c("acc", "sens",
  "spec", "auc", "ppv", "npv", "w.acc", "a.acc", "val.f")))[[1]],
  weight = rep(1, length(test.Y)), inclusion = rep(TRUE, length(test.Y)),
  Y.val = test.Y, Y.resp = rep(1, length(test.Y)))

## S4 method for signature 'PredObj'
evalPred(object, test.Y, cutoff = 0.5,
  type = c("Binary", "Continuous")[1], output = ifelse(type == "Continuous",
  list("mse"), list(c("acc", "sens", "spec", "auc", "ppv", "npv", "w.acc",
  "a.acc", "val.f")))[[1]], weight = rep(1, length(test.Y)),
  inclusion = rep(TRUE, length(test.Y)), Y.val = test.Y, Y.resp = rep(1,
  length(test.Y)))

## S4 method for signature 'ListPredObj'
evalPred(object, test.Y, cutoff, type, output, weight, inclusion, Y.val)
object
  A PredObj or ListPredObj object containing the predictions to evaluate.
test.Y
  The true response.
cutoff
  The cutoff applied to predicted scores for a binary response.
type
  The type of the response, either "Binary" or "Continuous".
output
  The performance measures to compute. See Details.
weight
  Sample weights, used for weighted accuracy (w.acc) and the value function (val.f).
inclusion
  Logical vector of samples to include when computing thresholded accuracy (a.acc).
Y.val
  Response labels used in the value function (val.f).
acc: accuracy
sens: sensitivity
spec: specificity
auc: area under the ROC curve
ppv: positive predictive value
npv: negative predictive value
w.acc: weighted accuracy, w.acc = sum((Y == pred.Y) * weight) / sum(weight)
a.acc: thresholded accuracy, a.acc = sum((Y == pred.Y) * inclusion) / sum(inclusion)
val.f: value function, val.f = sum((Y.val == pred.Y) * weight) / sum(weight)
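The weighted, thresholded, and value-function accuracies above can be sketched directly in base R. The vectors below (Y, pred.Y, weight, inclusion, Y.val) are illustrative toy data, not objects produced by the package:

```r
# Toy binary data: true labels and predicted labels
Y         <- c(1, 0, 1, 1, 0)
pred.Y    <- c(1, 0, 0, 1, 1)
weight    <- c(1, 1, 2, 1, 1)                  # sample weights for w.acc / val.f
inclusion <- c(TRUE, TRUE, FALSE, TRUE, TRUE)  # samples counted in a.acc
Y.val     <- Y                                 # value labels for val.f

acc   <- mean(Y == pred.Y)                               # plain accuracy
w.acc <- sum((Y == pred.Y) * weight) / sum(weight)       # weighted accuracy
a.acc <- sum((Y == pred.Y) * inclusion) / sum(inclusion) # thresholded accuracy
val.f <- sum((Y.val == pred.Y) * weight) / sum(weight)   # value function
```

With these toy vectors, acc is 0.6, w.acc is 0.5, a.acc is 0.75, and val.f is 0.5; note that val.f reduces to w.acc whenever Y.val equals Y.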
A data.frame of performance measures will be returned.