softtbart1: Type I Tobit Soft Bayesian Additive Regression Trees with sparsity-inducing hyperprior implemented using MCMC

View source: R/softtbart1.R

softtbart1 R Documentation

Type I Tobit Soft Bayesian Additive Regression Trees with a sparsity-inducing hyperprior, implemented using MCMC

Description

Type I Tobit Soft Bayesian Additive Regression Trees with a sparsity-inducing hyperprior, implemented using MCMC

Usage

softtbart1(
  x.train,
  x.test,
  y,
  n.iter = 1000,
  n.burnin = 100,
  below_cens = 0,
  above_cens = Inf,
  n.trees = 50L,
  SB_group = NULL,
  SB_alpha = 1,
  SB_beta = 2,
  SB_gamma = 0.95,
  SB_k = 2,
  SB_sigma_hat = NULL,
  SB_shape = 1,
  SB_width = 0.1,
  SB_alpha_scale = NULL,
  SB_alpha_shape_1 = 0.5,
  SB_alpha_shape_2 = 1,
  SB_tau_rate = 10,
  SB_num_tree_prob = NULL,
  SB_temperature = 1,
  SB_weights = NULL,
  SB_normalize_Y = TRUE,
  print.opt = 100,
  fast = TRUE,
  censsigprior = FALSE,
  lambda0 = NA,
  sigest = NA,
  nolinregforsigest = FALSE
)

Arguments

x.train

The training covariate data. The number of rows equals the number of training observations; the number of columns equals the number of covariates.

x.test

The test covariate data. The number of rows equals the number of test observations; the number of columns equals the number of covariates.

y

The vector of training outcomes: a continuous, possibly censored outcome variable.

n.iter

Number of MCMC iterations, excluding burn-in.

n.burnin

Number of burn-in iterations.

below_cens

The value at or below which observations are censored from below (the left-censoring point).

above_cens

The value at or above which observations are censored from above (the right-censoring point).

n.trees

A positive integer giving the number of trees used in the sum-of-trees formulation.

print.opt

Print progress every print.opt Gibbs iterations.

fast

If TRUE, implements faster truncated normal draws and approximates the normal pdf.
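The censoring convention controlled by below_cens and above_cens can be illustrated with simulated data. This is a hypothetical sketch, not package code: a latent outcome is generated, then observations at or below the lower bound or at or above the upper bound are recorded at the bound, as in a Type I Tobit model.

```r
# Simulated illustration of Type I Tobit censoring, as controlled by
# the below_cens and above_cens arguments (not package code).
set.seed(1)

n <- 500
x <- runif(n)
ystar <- 2 + 3 * x + rnorm(n)   # latent (uncensored) outcome
below_cens <- 2                  # left-censoring point
above_cens <- 5                  # right-censoring point

# Observed outcome: latent values are clipped at the censoring points.
y <- pmin(pmax(ystar, below_cens), above_cens)

# Censoring indicators follow the "at or below" / "at or above" convention.
cens_below <- y <= below_cens
cens_above <- y >= above_cens
mean(cens_below)  # share of observations censored from below
mean(cens_above)  # share of observations censored from above
```

Only the clipped outcome y and the two bounds would be passed to the sampler; the latent ystar is what the model's Z and ystar draw matrices aim to recover.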

Value

The following objects are returned:

Z.matcens

Matrix of draws of latent (censored) outcomes for censored observations. Number of rows equals the number of censored training observations; number of columns equals n.iter. Rows follow the order of the censored observations in the training data.

Z.matcensbelow

Matrix of draws of latent (censored) outcomes for observations censored from below. Number of rows equals the number of training observations censored from below; number of columns equals n.iter. Rows follow the order of the censored observations in the training data.

Z.matcensabove

Matrix of draws of latent (censored) outcomes for observations censored from above. Number of rows equals the number of training observations censored from above; number of columns equals n.iter. Rows follow the order of the censored observations in the training data.

mu

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all training observations. Number of rows equals the number of training observations; number of columns equals n.iter.

mucens

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all censored training observations. Number of rows equals the number of censored training observations; number of columns equals n.iter.

muuncens

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all uncensored training observations. Number of rows equals the number of uncensored training observations; number of columns equals n.iter.

mucensbelow

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all training observations censored from below. Number of rows equals the number of training observations censored from below; number of columns equals n.iter.

mucensabove

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all training observations censored from above. Number of rows equals the number of training observations censored from above; number of columns equals n.iter.

ystar

Matrix of training sample draws of the outcome assuming no censoring (can take values below below_cens and above above_cens). Number of rows equals the number of training observations; number of columns equals n.iter.

ystarcens

Matrix of censored training sample draws of the outcome assuming no censoring (can take values below below_cens and above above_cens). Number of rows equals the number of censored training observations; number of columns equals n.iter.

ystaruncens

Matrix of uncensored training sample draws of the outcome assuming no censoring (can take values below below_cens and above above_cens). Number of rows equals the number of uncensored training observations; number of columns equals n.iter.

ystarcensbelow

Matrix of draws of the outcome, assuming no censoring, for training observations censored from below (can take values below below_cens and above above_cens). Number of rows equals the number of training observations censored from below; number of columns equals n.iter.

ystarcensabove

Matrix of draws of the outcome, assuming no censoring, for training observations censored from above (can take values below below_cens and above above_cens). Number of rows equals the number of training observations censored from above; number of columns equals n.iter.

test.mu

Matrix of draws of the sum of terminal nodes, i.e. f(x_i), for all test observations. Number of rows equals the number of test observations; number of columns equals n.iter.

test.y_nocensoring

Matrix of test sample draws of the outcome assuming no censoring (can take values below below_cens and above above_cens). Number of rows equals the number of test observations; number of columns equals n.iter.

test.y_withcensoring

Matrix of test sample draws of the outcome with censoring applied (cannot take values below below_cens or above above_cens). Number of rows equals the number of test observations; number of columns equals n.iter.

test.probcensbelow

Matrix of draws of the probability that each test observation is censored from below. Number of rows equals the number of test observations; number of columns equals n.iter.

test.probcensabove

Matrix of draws of the probability that each test observation is censored from above. Number of rows equals the number of test observations; number of columns equals n.iter.

sigma

Vector of draws of the standard deviation of the error term. Number of elements equals n.iter.
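All draw matrices above share one layout: rows index observations and columns index the n.iter retained draws. A common post-processing step is to summarize each row across draws. The sketch below uses simulated stand-in matrices with that layout (mu_draws and sigma_draws are placeholders, not package output); it also shows how a per-draw probability of censoring from below arises under the Type I Tobit model, as Phi((below_cens - f(x_i)) / sigma).

```r
# Post-processing sketch for draw matrices with the documented layout:
# rows = observations, columns = n.iter retained draws.
# mu_draws and sigma_draws are simulated stand-ins, not package output.
set.seed(2)
n_obs <- 10
n_iter <- 1000
mu_draws <- matrix(rnorm(n_obs * n_iter, mean = 1), n_obs, n_iter)
sigma_draws <- sqrt(rchisq(n_iter, df = 5) / 5)  # stand-in for sigma draws
below_cens <- 0

# Posterior mean and 95% credible interval of f(x_i) for each observation.
mu_hat <- rowMeans(mu_draws)
mu_ci  <- apply(mu_draws, 1, quantile, probs = c(0.025, 0.975))

# Per-draw probability of censoring from below, Phi((c - f(x_i)) / sigma),
# averaged over draws; rep(..., each = n_obs) recycles sigma_draws[j]
# down column j of the matrix.
prob_below <- rowMeans(
  pnorm((below_cens - mu_draws) / rep(sigma_draws, each = n_obs))
)
```

With a fitted object, the same rowMeans/apply pattern applies directly to components such as test.mu or test.probcensbelow.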

Examples


#example taken from https://stats.idre.ucla.edu/r/dae/tobit-models/

dat <- read.csv("https://stats.idre.ucla.edu/stat/data/tobit.csv")

train_inds <- sample(1:200,190)
test_inds <- (1:200)[-train_inds]

ytrain <- dat$apt[train_inds]
ytest <- dat$apt[test_inds]

xtrain <- cbind(dat$read, dat$math)[train_inds,]
xtest <- cbind(dat$read, dat$math)[test_inds,]

tobart_res <- softtbart1(xtrain, xtest, ytrain,
                         below_cens = -Inf,
                         above_cens = 800,
                         n.iter = 400,
                         n.burnin = 100)


EoghanONeill/TobitBART documentation built on Feb. 6, 2025, 6:52 a.m.