policytree-package                                        R Documentation

policytree: Policy Learning via Doubly Robust Empirical Welfare Maximization over Trees

Description

A package for learning simple rule-based policies, where the rule takes the form of a shallow decision tree. Applications include settings that require interpretable predictions, such as prescribing a medical treatment. The package uses doubly robust reward estimates from grf to find a shallow but globally optimal decision tree.

Some helpful links for getting started:

  • The online documentation at https://grf-labs.github.io/policytree/ contains usage examples and method references.

  • For questions and bug reports, see the GitHub issues page at https://github.com/grf-labs/policytree/issues.

Author(s)

Maintainer: Erik Sverdrup <erikcs@stanford.edu>

Authors:

  • Ayush Kanodia

  • Zhengyuan Zhou

  • Susan Athey

  • Stefan Wager

See Also

Useful links:

  • https://github.com/grf-labs/policytree

  • Report bugs at https://github.com/grf-labs/policytree/issues

Examples


# Multi-action policy learning example.
library(policytree)
n <- 250
p <- 10
X <- matrix(rnorm(n * p), n, p)
W <- as.factor(sample(c("A", "B", "C"), n, replace = TRUE))
Y <- X[, 1] + X[, 2] * (W == "B") + X[, 3] * (W == "C") + runif(n)
multi.forest <- grf::multi_arm_causal_forest(X, Y, W)

# Compute doubly robust reward estimates.
Gamma.matrix <- double_robust_scores(multi.forest)
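# Gamma.matrix is an n x 3 matrix: one doubly robust reward estimate
# per observation for each arm ("A", "B", "C").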

# Fit a depth 2 tree on a random training subset.
train <- sample(1:n, 200)
opt.tree <- policy_tree(X[train, ], Gamma.matrix[train, ], depth = 2)
opt.tree
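# Optionally visualize the fitted tree with the package's plot method
# for policy_tree objects (assumes the DiagrammeR package is installed).
# plot(opt.tree)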

# Predict treatment on held-out data.
predict(opt.tree, X[-train, ])
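# A hedged sketch: estimate the learned policy's value on the held-out
# data by averaging, for each unit, the doubly robust score of the arm
# the tree recommends.
pi.hat <- predict(opt.tree, X[-train, ])
mean(Gamma.matrix[-train, ][cbind(seq_along(pi.hat), pi.hat)])

# The leaf node each held-out unit falls into:
predict(opt.tree, X[-train, ], type = "node.id")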


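# A second, hedged sketch for the common binary-treatment case, using
# grf::causal_forest and this package's double_robust_scores method for
# it (which returns an n x 2 matrix: one reward column per arm).
n <- 250
p <- 10
X <- matrix(rnorm(n * p), n, p)
W <- rbinom(n, 1, 0.5)
Y <- pmax(X[, 1], 0) * W + X[, 2] + rnorm(n)
c.forest <- grf::causal_forest(X, Y, W)
gamma.matrix <- double_robust_scores(c.forest)
binary.tree <- policy_tree(X, gamma.matrix, depth = 1)
predict(binary.tree, X[1:5, ])  # 1 = control, 2 = treated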
