perf_eval: Compute performance of diffusion scores on a single case

Description Usage Arguments Value Examples

View source: R/perf_eval.R

Description

Function perf_eval directly compares a desired output with the scores obtained from diffusion. It handles the possible shapes of the scores (named vector, matrix, list of matrices) and computes the desired metrics.

Usage

perf_eval(
    prediction,
    validation,
    metric = list(auc = metric_fun(curve = "ROC"))
)

Arguments

prediction

smoothed scores: either a named numeric vector, a matrix whose rownames are node names and whose colnames label the score sets, or a named list of such matrices.

validation

target scores against which the smoothed scores will be compared. Must have the same format as the input scores; the number of rows may differ, and only the matching rows contribute to the performance measure.

metric

named list of metrics to apply. Each metric should be a function of the form f(actual, predicted).
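As a sketch, any base-R function of the form f(actual, predicted) that returns a single number can serve as a metric; the names below (spearman, rmse) are hypothetical examples, while metric_fun(curve = "ROC") is the package's default AUROC builder:

```r
# Hypothetical custom metrics: each is a function f(actual, predicted)
# returning a single numeric value.
my_metrics <- list(
    spearman = function(actual, predicted) {
        # Rank correlation between validation and smoothed scores
        cor(actual, predicted, method = "spearman")
    },
    rmse = function(actual, predicted) {
        # Root mean squared error
        sqrt(mean((actual - predicted)^2))
    }
)
```

Such a list can then be passed to perf_eval through the metric argument.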

Value

A data frame containing the metrics for each comparable prediction-validation pair.

Examples

# Using a matrix with four sets of scores,
# named Single, Row, Small_sample, Large_sample
data(graph_toy)
diff <- diffuse(
    graph = graph_toy,
    scores = graph_toy$input_mat,
    method = "raw")
df_perf <- perf_eval(
    prediction = diff,
    validation = graph_toy$input_mat)
df_perf
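A hedged extension of the example above: since metric accepts any function of the form f(actual, predicted), the same predictions can be scored with a user-defined metric, here an illustrative Pearson correlation:

```r
# Score the same predictions with a custom Pearson correlation metric
df_perf_cor <- perf_eval(
    prediction = diff,
    validation = graph_toy$input_mat,
    metric = list(
        pearson = function(actual, predicted) cor(actual, predicted)))
df_perf_cor
```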

b2slab/diffuStats documentation built on Feb. 26, 2021, 2 p.m.