knitr::opts_chunk$set(cache = TRUE, error = TRUE, fig.width = 6, fig.asp = 1)

Installing R Package Dependencies

The growthcurveSL R package for SuperLearning of child growth trajectories depends on a number of open-source R packages. Some of these packages are not on CRAN and need to be installed directly from their respective GitHub repositories. Below we provide the code for installing the required R packages.

We start by installing the xgboost and h2o machine learning toolkits, both of which will be used by our SuperLearner.

# installing xgboost:
install.packages('xgboost')
# installing h2o
if ("package:h2o" %in% search()) detach("package:h2o", unload=TRUE)
if ("h2o" %in% rownames(installed.packages())) remove.packages("h2o")
pkgs <- c("methods","statmod","stats","graphics","RCurl","jsonlite","tools","utils")
new.pkgs <- setdiff(pkgs, rownames(installed.packages()))
if (length(new.pkgs)) install.packages(new.pkgs)
install.packages("h2o", type="source", repos=(c("http://h2o-release.s3.amazonaws.com/h2o/rel-tutte/2/R")))

Next, one needs to install the brokenstick and hbgd R packages. We also need to install the trelliscopejs R package, which will be used for visualizing the imputed growth trajectories.

options(repos = c(
  CRAN = "http://cran.rstudio.com/",
  tessera = "http://packages.tessera.io"))
install.packages("brokenstick")
devtools::install_github('hafen/hbgd', ref = "tidy")
devtools::install_github("hafen/trelliscopejs")

Finally, we need to install the packages gridisl and growthcurveSL, which implement the actual SuperLearner.

devtools::install_github('osofr/gridisl', build_vignettes = FALSE)
devtools::install_github('osofr/growthcurveSL', build_vignettes = FALSE)

Loading R Packages and CPP Dataset

library("magrittr")
library("dplyr")
library("tibble")
library("data.table")
library("purrr")
library("h2o")
library("xgboost")
library("gridisl")
library("growthcurveSL")
options(growthcurveSL.verbose = FALSE)
options(gridisl.verbose = FALSE)
options(width = 100)
options(tibble.print_max = 50, tibble.width = 200)
data(cpp)
cpp <- cpp[!is.na(cpp[, "haz"]), ]
covars <- c("apgar1", "apgar5", "parity", "gagebrth", "mage", "meducyrs", "sexn")
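For reference, here is a quick look at the columns of cpp that are used for modeling below: the subject ID, the age variable, the outcome haz, and the baseline covariates (all of these columns are referenced later in this vignette).

# peek at the ID, time, outcome and baseline covariate columns:
head(cpp[, c("subjid", "agedays", "haz", covars)])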

We also add an indicator of a random holdout growth measurement for each subject. These holdout growth measurements will be used to assess the quality of the model fits in our holdout SuperLearner (as described later).

cpp_holdout <- add_holdout_ind(data = cpp, ID = "subjid", hold_column = "hold", random = TRUE, seed = 54321)
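One can verify that add_holdout_ind flagged exactly one random growth measurement per subject; a minimal sketch, assuming the hold column is a logical (or 0/1) indicator:

# count the number of holdout rows per subject and tabulate the result:
cpp_holdout %>%
  dplyr::group_by(subjid) %>%
  dplyr::summarize(n_holdout = sum(hold)) %>%
  dplyr::count(n_holdout)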

Similarly, we define a column with indicators of the random validation folds (in this example we define 5 folds in total), where each validation fold contains approximately one fifth of all subjects. The cross-validated SuperLearner (CV SuperLearner) will then use each of these validation folds for model comparison (as described later).

cpp_folds <- add_CVfolds_ind(cpp, ID = "subjid", nfolds = 5, seed = 23)
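To confirm that the subjects are split roughly evenly across the validation folds, one can tabulate the number of unique subjects in each fold (assuming the fold indicators are stored in a column named fold, as referenced later via fold_column = "fold"):

# number of unique subjects assigned to each validation fold:
cpp_folds %>%
  dplyr::distinct(subjid, fold) %>%
  dplyr::count(fold)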

Fitting Growth Trajectories with Random Holdout SuperLearner

We start by defining the grids of hyperparameters for the h2o and xgboost GBMs.

h2o_GBM_hyper <- list(
  ntrees = c(20, 50, 100),
  learn_rate = c(.05, .1, .2),
  max_depth = c(3, 6, 10, 15),
  sample_rate = c(.5, .75, .9, 1),
  col_sample_rate = seq(0.5, 1, by = 0.1),
  col_sample_rate_per_tree = c(.3, .4, .8, 1)
)

xgb_GBM_hyper <- list(
  nrounds = c(20, 50, 100),
  learning_rate = c(.05, .1, .2),
  max_depth = c(3, 6, 10, 15),
  subsample = c(.5, .75, .9, 1),
  colsample_bytree = c(.3, .4, .8, 1),
  min_child_weight = c(1, 5, 7),
  gamma = c(.0, .05, seq(.1, .9, by=.2), 1),
  lambda = c(.1, .5, 1, 2, 5),
  alpha = c(0, .1, .5, .8, 1)
)
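The grids above define a very large number of candidate models. For example, an exhaustive search over the xgboost grid would require fitting every combination of the grid values, which is why a random (rather than exhaustive) grid search is used below:

# total number of xgboost hyperparameter combinations implied by the grid:
prod(sapply(xgb_GBM_hyper, length))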

Below, we define an ensemble of learners (SuperLearner) with h2o, xgboost, and brokenstick using the novel gridisl R package syntax. Note that any number of learners can be added with the "+" syntax (an example of appending an additional learner is sketched after the code below). By setting strategy = "RandomDiscrete", we specify that models should be drawn at random from the hyperparameter grids defined above. By setting max_models = 10, we specify that at most 10 such models should be considered for the h2o grid (similarly, max_models = 30 allows up to 30 randomly drawn parameter combinations for xgboost). For best results, max_models should be set substantially higher than 10 or 30 (computational resources permitting): exploring a larger space of tuning parameters increases the chance of finding the best-performing (i.e., most generalizable) model in the grid. As part of our ensemble we also include the brokenstick model (added as the first learner).

grid_holdSL <-
  defModel(estimator = "brokenstick__brokenstick", predict.w.Y = FALSE) +

  defModel(estimator = "h2o__gbm", family = "gaussian",
           search_criteria = list(strategy = "RandomDiscrete", max_models = 10),
           param_grid = h2o_GBM_hyper,
           stopping_rounds = 4, stopping_metric = "MSE",
           seed = 123456) +

  defModel(estimator = "xgboost__gbm", family = "gaussian",
           search_criteria = list(strategy = "RandomDiscrete", max_models = 30),
           param_grid = xgb_GBM_hyper,
           early_stopping_rounds = 4,
           seed = 123456)
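Additional learners can be appended to the ensemble with the same "+" syntax. The sketch below is for illustration only: it assumes that an h2o GLM learner is exposed under the estimator name "h2o__glm", which is a hypothetical example and is not verified here.

# hypothetical example: appending one more learner to the existing ensemble
# (the estimator name "h2o__glm" is assumed, not verified)
grid_holdSL_extended <- grid_holdSL +
  defModel(estimator = "h2o__glm", family = "gaussian", seed = 123456)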

Prior to training the models with the SuperLearner, we need to initialize the h2o cluster. This step is necessary for modeling with the h2o machine learning toolkit.

h2o::h2o.init(nthreads = -1)
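The call above uses all available cores (nthreads = -1). If memory is limited, the size of the h2o cluster can also be capped explicitly, and the cluster can be shut down once all h2o-based model fits are complete; a minimal sketch (the memory value is an arbitrary example):

# optionally cap the amount of memory used by the h2o cluster:
# h2o::h2o.init(nthreads = -1, max_mem_size = "4G")
# shut down the h2o cluster after all h2o-based model fits are complete:
# h2o::h2o.shutdown(prompt = FALSE)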

Below, we fit the discrete holdout SuperLearner, which uses the random holdout observations for selecting the best model. The holdout SuperLearner is enabled by specifying the argument method = "holdout". By setting use_new_features = TRUE, we allow the fitting procedures to use additional summaries of the growth trajectories as predictors.

We also obtain the imputed growth trajectories by calling the function predict_all. The resulting dataset consists of a single row per subject, and the column "fit" contains the subject-specific predictions of the growth trajectories. Finally, by calling the function convert_to_hbgd we convert the imputed growth trajectories into an object that can be visualized with the trelliscopejs R package, as we show next.

mfit_holdSL <- fit_growth(grid_holdSL,
                          ID = "subjid",
                          t_name = "agedays",
                          x = c("agedays", covars),
                          y = "haz",
                          data = cpp_holdout,
                          hold_column = "hold",
                          method = "holdout",
                          use_new_features = TRUE)

all_preds_holdSL <- predict_all(mfit_holdSL, cpp_holdout) %>%
                    convert_to_hbgd(cpp_holdout, "sex", "holdSuperLearner")

Here is a quick look at the resulting dataset containing the imputed growth trajectories:

all_preds_holdSL

Visualizing Imputed Growth Trajectories

One can create the individual growth trajectory plots and visualize them with the trelliscopejs R package. The imputed growth curves (black lines and black dots) are presented in an interactive trelliscope panel, as shown below. The panel also includes the model predictions for each holdout observation (red circle), which can be used to visually inspect the quality of the model fit. Finally, the panel contains the original growth measurements observed on each subject, depicted as wide gray circles. Here is an example of the trelliscope panel for the above holdout SuperLearner fit.

all_preds_holdSL %>%
  hbgd::add_trajectory_plot() %>%
  dplyr::select_("subjid", "panel") %>%
  trelliscopejs::trelliscope(name = "holdSuperLearner", self_contained = TRUE)

Fitting Growth Trajectories with Cross-Validation SuperLearner

In this section we show how to model the growth trajectories with the novel cross-validated SuperLearner (CV SuperLearner). The CV SuperLearner is enabled by calling the function fit_growth with the argument method = "cv". Note that in this case the cross-validation routine utilizes all of the subjects' growth measurements for selecting the best model. That is, every single growth measurement on each child is used as a validation data point when assessing the performance of each model in the ensemble. This is in contrast to the previously shown random holdout SuperLearner (method = "holdout"), which selects the best model based on a single (random) holdout growth measurement per subject.

Note that it is currently not possible to include the brokenstick model as part of this cross-validated SuperLearner ensemble. This is due to the restriction that each included model must be able to make predictions for new subjects (i.e., validation subjects that were not used for fitting the original model).

As in the previous section, we also create a single dataset that contains the imputed growth trajectories for each subject. This is accomplished by calling the functions predict_all and convert_to_hbgd.

grid_cvSL <-
  defModel(estimator = "h2o__gbm", family = "gaussian",
           search_criteria = list(strategy = "RandomDiscrete", max_models = 10),
           param_grid = h2o_GBM_hyper,
           seed = 123456) +

  defModel(estimator = "xgboost__gbm", family = "gaussian",
           search_criteria = list(strategy = "RandomDiscrete", max_models = 30),
           param_grid = xgb_GBM_hyper,
           seed = 123456)

mfit_SLcv <- fit_growth(grid_cvSL,
                       data = cpp_folds,
                       method = "cv",
                       ID = "subjid",
                       t_name = "agedays",
                       x = c("agedays", covars),
                       y = "haz",
                       fold_column = "fold",
                       use_new_features = TRUE)

all_preds_cvSL <- predict_all(mfit_SLcv, cpp_holdout) %>%
                  convert_to_hbgd(cpp_holdout, "sex", "cvSuperLearner")

Visualizing Imputed Growth Trajectories

As before, we can visualize the CV SuperLearner imputed subject-specific growth trajectories with the trelliscopejs R package. These plots will include the predictions for all holdout observations, which can be used to visually inspect the quality of the model fit (the trelliscopejs output is not shown here).

all_preds_cvSL %>%
  hbgd::add_trajectory_plot() %>%
  dplyr::select_("subjid", "panel") %>%
  trelliscopejs::trelliscope(name = "cvSuperLearner", self_contained = TRUE)

Assessing Model Performance

One can use the function make_model_report to generate an in-depth summary of the individual models used by the SuperLearner. This report can be generated in HTML, PDF, or Word format. In particular, the report contains an assessment of the performance of each model used in the SuperLearner ensemble, including a plot of the validation mean squared errors (MSE). For example, for the above random holdout SuperLearner, the plot that describes the performance of each model is shown below (lower MSE implies a better model fit). Note that this plot also contains 95% confidence intervals (CIs) around each estimated MSE.

make_model_report(mfit_holdSL, K = 10, data = cpp_folds,
                   title = paste0("Performance of the holdout SuperLearner for Growth Curve Trajectories with CPP Data"),
                   format = "html", keep_md = FALSE,
                   openFile = TRUE)
plotMSEs(mfit_holdSL, K = 10, interactive = TRUE)

Similarly, the model performance report for the CV SuperLearner will contain a plot comparing the cross-validated mean squared error (CV-MSE) for each individual model, along with the corresponding 95% CIs, as shown below.

make_model_report(mfit_SLcv, K = 10, data = cpp_folds,
                   title = paste0("Performance of CV SuperLearner for Growth Curve Trajectories with CPP Data"),
                   format = "html", keep_md = FALSE,
                   openFile = TRUE)
plotMSEs(mfit_SLcv, K = 10, interactive = TRUE)

