evaluate: Evaluate Causal Graph Estimates

View source: R/metrics.R

evaluate: R Documentation

Evaluate Causal Graph Estimates

Description

Computes various metrics to evaluate the difference between an estimated causal graph and the true causal graph. Designed primarily for assessing the performance of causal discovery algorithms.

Metrics are supplied as a list with three slots: $adj, $dir, and $other.

$adj

Metrics applied to the adjacency confusion matrix (see confusion()).

$dir

Metrics applied to the conditional orientation confusion matrix (see confusion()).

$other

Metrics applied directly to the adjacency matrices without computing confusion matrices.

The adjacency confusion matrix and the conditional orientation confusion matrix only support caugi::caugi objects whose edges are restricted to -->, <->, ---, or the absence of an edge.

Usage

evaluate(truth, est, metrics = "all")

Arguments

truth

True caugi::caugi object.

est

Estimated caugi::caugi object.

metrics

List of metrics; see Details. If metrics = "all", all available metrics are computed.

Value

A data.frame with one column for each computed metric. Adjacency metrics are prefixed with "adj_", orientation metrics with "dir_", and other metrics are left unprefixed.

See Also

Other metrics: confusion(), f1_score(), false_omission_rate(), fdr(), g1_score(), npv(), precision(), recall(), reexports, specificity()

Examples

cg1 <- caugi::caugi(A %-->% B + C)
cg2 <- caugi::caugi(B %-->% A + C)
evaluate(cg1, cg2)
evaluate(
  cg1,
  cg2,
  metrics = list(
    adj = c("precision", "recall"),
    dir = c("f1_score"),
    other = c("shd")
  )
)


causalDisco documentation built on April 13, 2026, 5:06 p.m.