X_RF_autotune_gpp: Gaussian Process optimization for the X-Learner with honest...

Description Usage Arguments Details Value See Also Examples

View source: R/Xhrf_autotune_gpp.R

Description

X_RF_autotune_gpp first evaluates 11 example setups whose parameters have proven to work well in settings we have studied before. It then evaluates 'init_points' many settings chosen completely at random. Finally, it uses all of these observations to initialize a Gaussian process prior and performs n_iter many updates of this GP optimization.
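A minimal base-R sketch of the three-stage search described above. The toy objective, preset values, and variable names here are purely illustrative stand-ins, not the tuner's real settings; the GP stage itself is delegated to rBayesianOptimization in the real function.

```r
# Stage 1: score a handful of preset settings (analogous to the 11 example setups).
# Stage 2: score settings drawn at random (analogous to init_points).
# Stage 3 (not shown): seed a Gaussian process with these observations and iterate.
toy_loss <- function(mtry) (mtry - 7)^2   # illustrative stand-in tuning objective
presets <- c(2, 4, 8)                     # illustrative preset settings
set.seed(1)
random_pts <- sample(1:20, 5)             # illustrative random settings
candidates <- c(presets, random_pts)
scores <- sapply(candidates, toy_loss)
best <- candidates[which.min(scores)]     # best observation so far seeds the GP stage
```

Because the GP stage starts from the best settings found in the first two stages, increasing init_points widens the initial exploration at the cost of extra evaluations.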

Usage

X_RF_autotune_gpp(feat, tr, yobs, ntree = 2000, init_points = 20,
  n_iter = 100, nthread = 0, verbose = TRUE, ...)

Arguments

feat

A data frame of all the features.

tr

A numeric vector containing 0 for control units and 1 for treated units.

yobs

A numeric vector containing the observed outcomes.

ntree

Number of trees for each of the base learners.

init_points

Number of completely randomly selected tuning settings.

n_iter

Number of updates to optimize the GP.

nthread

Number of threads used. Set it to 0 to automatically select the maximum number of available threads; set it to 1 for the slowest, but fully deterministic, behavior.

Details

This function uses the rBayesianOptimization package to perform the Bayesian optimization.
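A hedged sketch of how rBayesianOptimization::BayesianOptimization() is typically invoked. The search space (a single 'mtry' bound) and the score function are illustrative only, not the hyperparameters that X_RF_autotune_gpp actually tunes.

```r
# Illustrative objective: the package maximizes Score, so a loss is negated.
# Pred is required by the package's interface (can be 0 when unused).
obj <- function(mtry) list(Score = -(mtry - 7)^2, Pred = 0)

if (requireNamespace("rBayesianOptimization", quietly = TRUE)) {
  res <- rBayesianOptimization::BayesianOptimization(
    obj,
    bounds = list(mtry = c(1L, 20L)),  # illustrative search space
    init_points = 5,                   # random settings before the GP stage
    n_iter = 2,                        # GP-guided updates
    acq = "ucb", verbose = FALSE
  )
  # res$Best_Par holds the best setting found
}
```

The init_points and n_iter arguments of X_RF_autotune_gpp are passed through to this same two-phase random-then-GP scheme.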

Value

A tuned X learner object.

See Also

X_RF_autotune_simple, X_RF_autotune_hyperband

Examples

set.seed(14236142)
feat <- iris[, -1]
tr <- rbinom(nrow(iris), 1, .5)
yobs <- iris[, 1]
# train a tuned X-learner
xl_gpp <- X_RF_autotune_gpp(feat, tr, yobs, ntree = 100, nthread = 0,
                            verbose = FALSE, init_points = 5, n_iter = 1)
# compute CATE estimates and confidence intervals
EstimateCate(xl_gpp, feat)
CateCI(xl_gpp, feat, B = 5, verbose = FALSE)

soerenkuenzel/hte documentation built on June 12, 2018, 4:26 p.m.