Shiny dashboard "Statistical foundations of machine learning"

Regression and model selection

Goal: show the impact of different kinds of hyper-parameters (the degree of the polynomial model, the number of neighbors in locally constant and locally linear fitting, and the number of trees in a Random Forest) on the bias, the variance and the generalization error.
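The quantities plotted by the dashboard follow the standard pointwise decomposition of the mean squared error; it is not spelled out in this page, so the identity below is stated for reference (notation is assumed: $f$ is the regression function, $\hat y(x)$ the prediction, $\sigma_w^2$ the noise variance):

$$\text{MSE}(x) = \sigma_w^2 + \underbrace{\big(E[\hat y(x)] - f(x)\big)^2}_{B^2(x)} + \underbrace{E\big[(\hat y(x) - E[\hat y(x)])^2\big]}_{\text{Var}(x)}$$

The FPE criterion reported in the plot titles is presumably the classical Akaike Final Prediction Error, which corrects the empirical MSE for the number of parameters $p$ and the sample size $N$:

$$\text{FPE} = \frac{1 + p/N}{1 - p/N}\,\widehat{\text{MSE}}_{\text{emp}}$$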

Common left panel:

Polynomial fitting

Top slider:

Center left: regression function (in green), sampling distribution of the polynomial prediction, and the query value $x$. The title reports the average squared bias (B2), the variance, the MSE, the empirical MSE and the FPE criterion.

Center right: sampling distribution of the prediction for the input $x$

Bottom left: Squared bias vs. polynomial degree

Bottom center: Variance vs. polynomial degree

Bottom right: MSE vs. polynomial degree. The title shows the polynomial order $p$ associated with the minimum MSE and the corresponding minimum MSE value.

Suggested manipulations:
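The bias/variance behaviour that the degree slider exposes can be reproduced offline with a Monte Carlo simulation. The sketch below is a minimal stand-alone illustration, not the dashboard's code: the regression function (a sine), the noise level and the sampling ranges are all assumptions, since this page does not specify the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Assumed regression function (the dashboard's true function is not given here).
    return np.sin(2 * np.pi * x)

def bias_var_mse(degree, x_query=0.25, N=50, sigma=0.3, R=500):
    """Monte Carlo estimate of squared bias, variance and MSE of a
    degree-`degree` polynomial prediction at the query point x_query."""
    preds = np.empty(R)
    for r in range(R):
        x = rng.uniform(0, 1, N)                      # fresh training set
        y = f(x) + rng.normal(0, sigma, N)            # noisy observations
        poly = np.polynomial.Polynomial.fit(x, y, degree)
        preds[r] = poly(x_query)
    bias2 = (preds.mean() - f(x_query)) ** 2          # squared bias B2
    var = preds.var()                                 # prediction variance
    mse = bias2 + var + sigma ** 2                    # MSE includes noise variance
    return bias2, var, mse

for d in (1, 3, 9):
    b2, v, m = bias_var_mse(d)
    print(f"degree={d}: B2={b2:.4f} Var={v:.4f} MSE={m:.4f}")
```

Increasing the degree should drive the squared bias down and the variance up, mirroring the Bottom left / Bottom center plots.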

Local constant fitting

Top slider:

Bottom left: regression function (in green), sampling distribution of the local constant prediction, and the query value $x$. The title reports the average squared bias (B2), the variance, the MSE, the empirical MSE and the FPE criterion.

Bottom center: sampling distribution of the prediction for the input $x$

Suggested manipulations:
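The effect of the number of neighbors can likewise be sketched offline. Locally constant fitting is taken here to mean k-nearest-neighbor averaging; as above, the sine regression function and the noise level are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Assumed regression function.
    return np.sin(2 * np.pi * x)

def knn_constant(x_train, y_train, x_query, k):
    """Locally constant prediction: the average of the k training
    responses whose inputs are closest to x_query."""
    idx = np.argsort(np.abs(x_train - x_query))[:k]
    return y_train[idx].mean()

def bias_var(k, x_query=0.25, N=100, sigma=0.3, R=400):
    """Monte Carlo estimate of squared bias and variance at x_query."""
    preds = np.empty(R)
    for r in range(R):
        x = rng.uniform(0, 1, N)
        y = f(x) + rng.normal(0, sigma, N)
        preds[r] = knn_constant(x, y, x_query, k)
    return (preds.mean() - f(x_query)) ** 2, preds.var()

for k in (1, 10, 50):
    b2, v = bias_var(k)
    print(f"k={k}: B2={b2:.4f} Var={v:.4f}")
```

A small k gives a noisy, low-bias prediction; a large k averages over a wide window and trades variance for bias.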

Local linear fitting

Top slider:

Bottom left: regression function (in green), sampling distribution of the local linear prediction, and the query value $x$. The title reports the average squared bias (B2), the variance, the MSE, the empirical MSE and the FPE criterion.

Bottom center: sampling distribution of the prediction for the input $x$

Suggested manipulations:
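The same experiment can be run for local linear fitting, here taken to mean an ordinary least-squares line fitted to the k nearest neighbors and evaluated at the query point. The data-generating process is again an assumption made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Assumed regression function.
    return np.sin(2 * np.pi * x)

def knn_linear(x_train, y_train, x_query, k):
    """Locally linear prediction: a least-squares line fitted to the
    k nearest neighbours of x_query, evaluated at x_query."""
    idx = np.argsort(np.abs(x_train - x_query))[:k]
    X = np.column_stack([np.ones(k), x_train[idx]])
    beta, *_ = np.linalg.lstsq(X, y_train[idx], rcond=None)
    return beta[0] + beta[1] * x_query

def bias_var(k, x_query=0.25, N=100, sigma=0.3, R=400):
    """Monte Carlo estimate of squared bias and variance at x_query."""
    preds = np.empty(R)
    for r in range(R):
        x = rng.uniform(0, 1, N)
        y = f(x) + rng.normal(0, sigma, N)
        preds[r] = knn_linear(x, y, x_query, k)
    return (preds.mean() - f(x_query)) ** 2, preds.var()

for k in (10, 30, 90):
    b2, v = bias_var(k)
    print(f"k={k}: B2={b2:.4f} Var={v:.4f}")
```

As with the locally constant model, growing k reduces the variance but, once the window is wide relative to the curvature of $f$, the linear approximation breaks down and the bias grows.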

Random forest fitting

Top slider:

Bottom left: regression function (in green), sampling distribution of the Random Forest prediction, and the query value $x$. The title reports the average squared bias (B2), the variance, the MSE, the empirical MSE and the FPE criterion.

Bottom center: sampling distribution of the prediction for the input $x$

Suggested manipulations:
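The role of the number of trees can be illustrated without re-implementing a Random Forest: what matters for the slider is that averaging more bootstrap-trained base learners reduces variance while leaving the bias roughly unchanged. The sketch below bags 1-nearest-neighbor regressors instead of trees (an assumption made purely to keep the code short); the data-generating process is assumed as before.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # Assumed regression function.
    return np.sin(2 * np.pi * x)

def nn1(x_train, y_train, x_query):
    """High-variance base learner: 1-nearest-neighbour prediction."""
    return y_train[np.argmin(np.abs(x_train - x_query))]

def bagged_predict(x_train, y_train, x_query, B):
    """Bagging: average of B base predictions, each trained on a
    bootstrap resample of the training set."""
    N = len(x_train)
    preds = [nn1(x_train[b], y_train[b], x_query)
             for b in (rng.integers(0, N, N) for _ in range(B))]
    return np.mean(preds)

def bias_var(B, x_query=0.25, N=100, sigma=0.3, R=300):
    """Monte Carlo estimate of squared bias and variance at x_query."""
    preds = np.empty(R)
    for r in range(R):
        x = rng.uniform(0, 1, N)
        y = f(x) + rng.normal(0, sigma, N)
        preds[r] = bagged_predict(x, y, x_query, B)
    return (preds.mean() - f(x_query)) ** 2, preds.var()

for B in (1, 10, 100):
    b2, v = bias_var(B)
    print(f"B={B}: B2={b2:.4f} Var={v:.4f}")
```

The variance drops steeply as B grows and then flattens out (the correlation between base predictions sets a floor), which is the behaviour the number-of-trees slider should exhibit.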



gbonte/gbcode documentation built on Feb. 27, 2024, 7:38 a.m.