Description:
Having the out-of-bag prediction results in a tidy, tabular format makes visualization much easier.
Arguments:

  fit: A trained causal forest object from grf::causal_forest().

  preds: Out-of-bag training predictions from predict(fit).
Details:

debiased.error and excess.error partition the overall prediction error into two parts. debiased.error is "irreducible" in the sense that it cannot be made smaller by increasing the number of trees in the forest; excess.error, by contrast, can. The grf authors recommend growing enough trees that excess.error becomes negligible.
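One rough check (a sketch, assuming preds was produced as above; the ratio threshold is a judgment call, not a grf recommendation):

    # Compare the excess error to the debiased error; if the ratio is
    # not close to zero, refit with more trees (num.trees defaults to
    # 2000 in grf).
    mean(preds$excess.error) / mean(preds$debiased.error)

    fit <- causal_forest(X, Y, W, num.trees = 5000)
    preds <- predict(fit, estimate.variance = TRUE)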
Value:

A tibble containing the following columns:

  W: The original treatment assignments.

  W.hat: The estimated treatment propensities: W.hat = E[W | X].

  Y: The original outcome variable.

  Y.hat: The expected response estimates, marginalized over treatment: Y.hat = E[Y | X].

  treatment: The treatment assignments as a factor, "Control" or "Treated". This looks better in plots than W does.

  cate: The conditional average treatment effect (CATE) estimates.

  cate.se: The standard errors of the CATE estimates.

  debiased.error: An estimate of the error the forest would attain with an infinite number of trees.

  excess.error: A jackknife estimate of how unstable the estimates would be if forests of the same size were repeatedly grown on the same data set.

  IPW: The inverse propensity weights: 1 / W.hat if W = 1, and 1 / (1 - W.hat) otherwise.

  bias: A measure of each observation's contribution to the overall bias of the model, relative to a simple difference in means. See https://grf-labs.github.io/grf/articles/diagnostics.html#assessing-fit for a discussion of the bias measure and how it is calculated.
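For concreteness, a sketch of how the treatment and IPW columns follow from W and the forest's stored propensities, per the definitions above:

    # Treatment assignments as a factor, for nicer plot labels.
    treatment <- factor(W, levels = c(0, 1),
                        labels = c("Control", "Treated"))

    # Inverse propensity weights: 1 / W.hat for treated units,
    # 1 / (1 - W.hat) for controls.
    IPW <- ifelse(W == 1, 1 / fit$W.hat, 1 / (1 - fit$W.hat))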
Examples:
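A minimal sketch of the intended workflow, reusing fit, preds, treatment, and IPW from the sketches above. The tidy table is assembled by hand here to mirror the columns described under Value (the tibble and ggplot2 packages are assumed; cate.se is taken to be the square root of the variance estimates):

    library(tibble)
    library(ggplot2)

    # Assemble the tidy table described under Value. The bias column is
    # omitted here; see the grf diagnostics article linked above for how
    # it is calculated.
    tidy_preds <- tibble(
      W = W,
      W.hat = fit$W.hat,
      Y = Y,
      Y.hat = fit$Y.hat,
      treatment = treatment,
      cate = preds$predictions,
      cate.se = sqrt(preds$variance.estimates),
      debiased.error = preds$debiased.error,
      excess.error = preds$excess.error,
      IPW = ifelse(W == 1, 1 / fit$W.hat, 1 / (1 - fit$W.hat))
    )

    # The tidy format makes plots easy, e.g. the distribution of CATE
    # estimates by treatment group.
    ggplot(tidy_preds, aes(cate, fill = treatment)) +
      geom_histogram(bins = 40, alpha = 0.6, position = "identity")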