Description
Generates a confusion matrix for a specified probability threshold and computes the following metrics: Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and Precision. Multiple models can be passed as arguments to this function.
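For reference, these metrics follow the standard confusion matrix definitions. The short sketch below (plain R, with invented 2x2 counts) only illustrates those definitions; it is not the package's internal code.

# Hypothetical confusion matrix counts for a binary classifier
TP <- 40; FP <- 10; TN <- 45; FN <- 5

accuracy  <- (TP + TN) / (TP + TN + FP + FN)  # proportion of correct predictions
tpr       <- TP / (TP + FN)                   # True Positive Rate (sensitivity/recall)
fpr       <- FP / (FP + TN)                   # False Positive Rate
precision <- TP / (TP + FP)                   # proportion of predicted positives that are correct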
Usage

staticConfMatrix(list_models, t, reps = NULL, reps.all.unique = F)
Arguments

list_models: A list of one (or more) data frames, one for each model whose performance is to be evaluated. Each data frame should comprise two columns, the first column giving the class labels (0 or 1) and the second column giving the raw predicted probabilities.

t: Probability threshold value.

reps: Performance measures derived from the confusion matrix (Accuracy, TPR, FPR and Precision) are computed for a range of probability thresholds. The reps argument controls the number of thresholds considered; the threshold grid is given by the sequence seq(0, 1, 1/reps). See the illustration following this list.

reps.all.unique: Logical; if set to TRUE, performance measures are computed at every unique predicted probability value.
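To illustrate how reps determines the thresholds (based only on the seq(0, 1, 1/reps) expression quoted above):

# With reps = 10, metrics are evaluated at 11 equally spaced thresholds
seq(0, 1, 1/10)
# 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0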
Value

If reps = NULL, the output is a list with two components: a confusion matrix data frame and a data frame with the values of the computed metrics (Accuracy, TPR, FPR, Precision). If the reps argument is supplied, an additional data frame containing the metric values at the different probability thresholds is included in the output.
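Examples

The sketch below is a minimal illustration, not the package's original example code: the two model data frames use simulated labels and probabilities, and it assumes the package providing staticConfMatrix has been installed and loaded.

set.seed(1)

# Simulated scores for two models on the same 100 observations
labels  <- sample(c(0, 1), 100, replace = TRUE)
model_1 <- data.frame(obs = labels, prob = runif(100))
model_2 <- data.frame(obs = labels, prob = runif(100))

# Confusion matrix and metrics at a single threshold of 0.5
out <- staticConfMatrix(list(model_1, model_2), t = 0.5)
str(out)  # list with a confusion matrix data frame and a metrics data frame

# Additionally evaluate the metrics on a grid of thresholds, seq(0, 1, 1/100)
out_grid <- staticConfMatrix(list(model_1, model_2), t = 0.5, reps = 100)
str(out_grid)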