DidacticBoost: A Simple Implementation and Demonstration of Gradient Boosting

A basic, clear implementation of tree-based gradient boosting designed to illustrate the core operation of boosting models. Tuning parameters (such as stochastic subsampling, a modified learning rate, or regularization) are not implemented; the only adjustable parameter is the number of training rounds. If you are looking for a high-performance boosting implementation with tuning parameters, consider the 'xgboost' package.
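A minimal usage sketch. The function names fitBoosted(), is.boosted(), and predict.boosted() come from the package's man/ files, but the "rounds" argument name, the "newdata" argument, and the use of the kyphosis data from rpart are assumptions for illustration; check ?fitBoosted for the exact signature:

    library(DidacticBoost)
    library(rpart)  # provides the kyphosis example data (an assumption, any data frame should work)

    # Fit a boosted model; the number of training rounds is the only tuning parameter
    fit <- fitBoosted(Kyphosis ~ Age + Number + Start, data = kyphosis, rounds = 50)

    is.boosted(fit)                            # check the object class
    preds <- predict(fit, newdata = kyphosis)  # predictions via predict.boosted

For intuition, the core operation the package illustrates is: start from a constant prediction, repeatedly fit a small tree to the current residuals, and add each tree's predictions to the ensemble. A self-contained sketch for a numeric response (illustrative only, not the package's actual source):

    library(rpart)

    boost_sketch <- function(formula, data, rounds = 50) {
      y <- data[[all.vars(formula)[1]]]
      pred <- rep(mean(y), nrow(data))     # initial prediction: the mean of the response
      trees <- vector("list", rounds)
      for (i in seq_len(rounds)) {
        data$.residual <- y - pred         # residuals = negative gradient of squared loss
        # refit on the residuals (assumes an explicit right-hand side, not y ~ .)
        fml <- update(formula, .residual ~ .)
        trees[[i]] <- rpart(fml, data = data)
        pred <- pred + predict(trees[[i]], newdata = data)  # move predictions toward y
      }
      list(init = mean(y), trees = trees)  # ensemble = init + sum of tree predictions
    }

Each round fits a tree to what the ensemble still gets wrong, so more rounds steadily reduce training error; without regularization, enough rounds will eventually overfit.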

Author: David Shaub [aut, cre]
Date of publication: 2016-04-19 08:11:59
Maintainer: David Shaub <davidshaub@gmx.com>
License: GPL-3
Version: 0.1.1
URL: https://github.com/dashaub/DidacticBoost

Files

DidacticBoost
DidacticBoost/tests
DidacticBoost/tests/testthat.R
DidacticBoost/tests/testthat
DidacticBoost/tests/testthat/test-Main.R
DidacticBoost/NAMESPACE
DidacticBoost/R
DidacticBoost/R/Main.R
DidacticBoost/MD5
DidacticBoost/DESCRIPTION
DidacticBoost/man
DidacticBoost/man/fitBoosted.Rd
DidacticBoost/man/predict.boosted.Rd
DidacticBoost/man/is.boosted.Rd

Questions? Problems? Suggestions? Email ian@mutexlabs.com.

Please suggest features or report bugs via the GitHub issue tracker.

All documentation is copyright of its authors; we did not write any of it.