DidacticBoost: A Simple Implementation and Demonstration of Gradient Boosting
A basic, clear implementation of tree-based gradient boosting
    designed to illustrate the core operation of boosting models. Tuning
    parameters (such as stochastic subsampling, a modified learning rate, or
    regularization) are not implemented; the only adjustable parameter is the
    number of training rounds. If you are looking for a high-performance
    boosting implementation with tuning parameters, consider the 'xgboost'
    package.
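To make the core operation concrete, here is a minimal sketch in R of the kind of boosting loop the package demonstrates: each round fits a regression tree to the current residuals, and the final prediction is the sum of all the trees' predictions. This sketch is not the package's actual API; the function names (boost_fit, boost_predict), the n_rounds argument, and the choice of 'rpart' as the base learner are illustrative assumptions.

    # Minimal tree-based gradient boosting for squared-error loss.
    # Illustrative sketch only; not the DidacticBoost API.
    library(rpart)

    boost_fit <- function(formula, data, n_rounds = 10) {
      response <- all.vars(formula)[1]
      residual <- data[[response]]
      trees <- vector("list", n_rounds)
      work <- data
      for (i in seq_len(n_rounds)) {
        work[[response]] <- residual   # fit the next tree to the current residuals
        trees[[i]] <- rpart(formula, data = work)
        residual <- residual - predict(trees[[i]], work)
      }
      structure(list(trees = trees), class = "boosted_sketch")
    }

    boost_predict <- function(model, newdata) {
      # The boosted prediction is the sum of every tree's prediction.
      Reduce(`+`, lapply(model$trees, predict, newdata = newdata))
    }

    # Example: more rounds fit the training data more closely.
    fit <- boost_fit(mpg ~ ., data = mtcars, n_rounds = 25)
    head(boost_predict(fit, mtcars))

With squared-error loss the negative gradient is exactly the residual, so fitting each new tree to the residuals is gradient boosting with an implicit learning rate of 1, consistent with the description above, which leaves the learning rate unmodified.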
Linking:
Please use the canonical form https://CRAN.R-project.org/package=DidacticBoost to link to this page.