tboost: T-learner, implemented via xgboost (gradient boosting)

Description Usage Arguments Examples

View source: R/tboost.R

Description

The T-learner estimates the expected outcomes under treatment and under control by fitting two separate models, one on the treated units and one on the control units; the treatment effect estimate is the difference between the two models' predictions.
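The T-learner idea can be sketched with plain linear models in base R (a minimal illustration of the two-model construction, not the xgboost-based implementation in tboost; the data-generating process is made up for the example):

```r
# T-learner sketch: fit one outcome model on the treated units (mu1) and one
# on the controls (mu0), then take the difference of their predictions.
set.seed(1)
n = 100; p = 3
x = matrix(rnorm(n * p), n, p)
w = rbinom(n, 1, 0.5)
y = x[, 1] * w + x[, 2] + rnorm(n)

d = data.frame(y = y, x)
fit1 = lm(y ~ ., data = d[w == 1, ])  # mu1: outcome model for treated units
fit0 = lm(y ~ ., data = d[w == 0, ])  # mu0: outcome model for control units

# Estimated treatment effect for each unit: mu1(x) - mu0(x)
tau_hat = predict(fit1, d) - predict(fit0, d)
```

tboost follows the same two-model recipe but fits each outcome model with cross-validated gradient boosting (xgboost) instead of a linear model.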

Usage

tboost(
  x,
  w,
  y,
  k_folds_mu1 = NULL,
  k_folds_mu0 = NULL,
  ntrees_max = 1000,
  num_search_rounds = 10,
  print_every_n = 100,
  early_stopping_rounds = 10,
  nthread = NULL,
  verbose = FALSE
)

Arguments

x

the input features

w

the treatment variable (0 or 1)

y

the observed response (real valued)

k_folds_mu1

number of cross validation folds used when fitting the outcome model for the treated (w = 1) group

k_folds_mu0

number of cross validation folds used when fitting the outcome model for the control (w = 0) group

ntrees_max

the maximum number of trees to grow for xgboost

num_search_rounds

the number of randomly sampled hyperparameter combinations evaluated during cross validation of the xgboost models

print_every_n

print progress information every this many iterations (one tree is grown per iteration)

early_stopping_rounds

the cross validation search for the optimal number of trees stops once the held-out error has not decreased for this many rounds

nthread

the number of threads to use. The default is NULL, which uses all available threads

verbose

boolean; whether to print progress statistics

Examples

## Not run: 
n = 100; p = 10

x = matrix(rnorm(n*p), n, p)
w = rbinom(n, 1, 0.5)
y = pmax(x[,1], 0) * w + x[,2] + pmin(x[,3], 0) + rnorm(n)

tboost_fit = tboost(x, w, y)
tboost_est = predict(tboost_fit, x)

## End(Not run)

xnie/rlearner documentation built on April 11, 2021, 12:49 a.m.