Gradient Boosting

Gradient Boosting is an iterative model-training process in which, at each step, the current model is updated with a new model trained to predict the residual, i.e. the difference between the target labels and the current model's predictions (the error). The update is modulated by a small value called the "shrinkage", which can be thought of as a "learning rate" similar to the one used in the gradient descent algorithm. The resulting model is often referred to as a Gradient Boosting Machine, or GBM.
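The loop described above can be sketched in plain NumPy. This is a minimal illustration for squared-error regression, using hand-rolled one-split decision stumps as the weak learners; the function names and the choice of stump are assumptions for the sake of the example, not part of any particular library.

```python
import numpy as np

def fit_stump(x, y):
    # Weak learner: a single threshold split on a 1-D feature,
    # chosen to minimize the total squared error of the two leaf means.
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, left_val, right_val = best
    return lambda z: np.where(z <= t, left_val, right_val)

def gradient_boost(x, y, n_rounds=100, shrinkage=0.1):
    # Start from a constant model (the mean of the targets).
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        residual = y - pred            # error of the current model
        h = fit_stump(x, residual)     # new model fit to the residual
        pred += shrinkage * h(x)       # shrinkage modulates the update
        stumps.append(h)
    base = y.mean()
    return lambda z: base + shrinkage * sum(h(z) for h in stumps)
```

Each round shrinks the remaining residual slightly; a smaller shrinkage generally needs more rounds but tends to generalize better.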
Related concepts:
Gradient Descent, Boosting
External reference:
https://jerryfriedman.su.domains/ftp/trebst.pdf