
SL wrapper for biglasso

Usage

```
SL.biglasso(Y, X, newX, family, obsWeights, penalty = "lasso",
  alg.logistic = "Newton", screen = "SSR", alpha = 1, nlambda = 100,
  eval.metric = "default", ncores = 1, nfolds = 5, ...)
```

Arguments

`Y`: Outcome variable.

`X`: Training dataframe.

`newX`: Test dataframe.

`family`: Gaussian or binomial.

`obsWeights`: Observation-level weights.

`penalty`: The penalty to be applied to the model: either "lasso" (default), "ridge", or "enet" (elastic net).

`alg.logistic`: The algorithm used in logistic regression. If "Newton", the exact Hessian is used (default); if "MM", a majorization-minimization algorithm is used to set an upper bound on the Hessian matrix. The latter can be faster, particularly when the data are larger than available RAM.

`screen`: The screening rule. "SSR" (default) is the sequential strong rule; "SEDPP" is the sequential EDPP rule. "SSR-BEDPP", "SSR-Dome", and "SSR-Slores" are newly proposed rules that combine the strong rule with a safe rule (the BEDPP, Dome test, or Slores rule, respectively). Of these three, the first two are for lasso-penalized linear regression and the last is for lasso-penalized logistic regression. "None" applies no screening rule.

`alpha`: The elastic-net mixing parameter, which controls the relative contribution of the lasso (l1) and ridge (l2) penalties; alpha = 1 corresponds to the pure lasso and alpha = 0 to pure ridge.

`nlambda`: The number of lambda values to check. Default is 100.

`eval.metric`: The evaluation metric for the cross-validated error and for choosing the optimal `lambda`.

`ncores`: The number of cores to use for parallel execution across a cluster created by the `parallel` package.

`nfolds`: The number of cross-validation folds. Default is 5.

`...`: Any additional arguments; not currently used.
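To see how the penalty-related arguments interact, the wrapper can also be called directly, outside of `SuperLearner()`. The following is a sketch (assuming the SuperLearner, biglasso, and MASS packages are installed); the elastic-net settings shown are arbitrary illustrations, not package defaults:

```r
library(SuperLearner)  # provides SL.biglasso (requires the biglasso package)

data(Boston, package = "MASS")
Y <- Boston$medv
X <- Boston[, -14]  # drop the outcome column

# Direct call with an elastic-net penalty: "enet" activates the mixed
# penalty, and alpha = 0.5 weights the l1 and l2 terms equally.
fit <- SL.biglasso(Y = Y, X = X, newX = X,
                   family = gaussian(),
                   obsWeights = rep(1, nrow(X)),
                   penalty = "enet", alpha = 0.5,
                   nlambda = 50, nfolds = 5)

# fit$pred holds predictions for newX; fit$fit wraps the biglasso object.
length(fit$pred)
```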

References

Zeng Y, Breheny P (2017). biglasso: Extending Lasso Model Fitting to Big Data. https://CRAN.R-project.org/package=biglasso.

See Also

`predict.SL.biglasso`

`biglasso`

`cv.biglasso`

`predict.biglasso`

`SL.glmnet`

Examples

```
data(Boston, package = "MASS")
Y = Boston$medv
# Remove outcome from covariate dataframe.
X = Boston[, -14]
set.seed(1)
# Sample rows to speed up example.
row_subset = sample(nrow(X), 30)
# Subset rows and columns & use only 2 folds to speed up example.
sl = SuperLearner(Y[row_subset], X[row_subset, 1:2, drop = FALSE],
family = gaussian(), cvControl = list(V = 2),
SL.library = "SL.biglasso")
sl
# Predict using the same two covariates the learner was trained on.
pred = predict(sl, X[, 1:2])
summary(pred$pred)
```
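To compare several hyperparameter settings inside `SuperLearner()` itself, one option is `create.Learner()` from the SuperLearner package, which generates wrapper variants for a grid of arguments. A sketch, using an arbitrary alpha grid:

```r
library(SuperLearner)

# Generate SL.biglasso variants over two (arbitrary) elastic-net mixes.
learners <- create.Learner("SL.biglasso",
                           params = list(penalty = "enet"),
                           tune = list(alpha = c(0.25, 0.75)))
learners$names  # names of the generated wrapper functions

data(Boston, package = "MASS")
set.seed(1)
sl2 <- SuperLearner(Boston$medv, Boston[, -14], family = gaussian(),
                    cvControl = list(V = 2),
                    SL.library = learners$names)
sl2  # cross-validated risk and weight for each alpha variant
```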
