| evaluate | R Documentation |
Description

Computes various metrics to evaluate the difference between an estimated and a true causal graph. Designed primarily for assessing the performance of causal discovery algorithms.

Metrics are supplied as a list with three slots: $adj, $dir, and $other.

$adj: metrics applied to the adjacency confusion matrix (see confusion()).

$dir: metrics applied to the conditional orientation confusion matrix (see confusion()).

$other: metrics applied directly to the adjacency matrices, without computing confusion matrices.

The adjacency confusion matrix and the conditional orientation confusion matrix only support caugi::caugi objects whose edges are restricted to -->, <->, ---, or absence of an edge.
Usage

evaluate(truth, est, metrics = "all")
Arguments

truth: True caugi::caugi object.

est: Estimated caugi::caugi object.

metrics: List of metrics; see Details. If "all" (the default), all available metrics are computed.
Value

A data.frame with one column for each computed metric. Adjacency metrics are prefixed with "adj_", orientation metrics are prefixed with "dir_"; other metrics get no prefix.
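As a sketch of the naming scheme, the following call would produce one column per requested metric. The column names shown in the comment are assumed from the prefix rules above, not verified output:

```r
res <- evaluate(
  caugi::caugi(A %-->% B + C),
  caugi::caugi(B %-->% A + C),
  metrics = list(adj = "precision", dir = "f1_score", other = "shd")
)
# Per the prefix rules, the columns would be named:
#   adj_precision, dir_f1_score, shd
names(res)
```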
See Also

Other metrics:
confusion(),
f1_score(),
false_omission_rate(),
fdr(),
g1_score(),
npv(),
precision(),
recall(),
reexports,
specificity()
Examples

cg1 <- caugi::caugi(A %-->% B + C)
cg2 <- caugi::caugi(B %-->% A + C)
evaluate(cg1, cg2)
evaluate(
cg1,
cg2,
metrics = list(
adj = c("precision", "recall"),
dir = c("f1_score"),
other = c("shd")
)
)