Description

Linear least squares with l2 regularization. This model solves a regression problem where the loss function is the linear least squares function and regularization is given by the l2-norm. Also known as Ridge Regression or Tikhonov regularization. This estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)).
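The objective minimized is ||y - Xw||^2_2 + alpha * ||w||^2_2, which has a closed-form solution w = (X^T X + alpha I)^-1 X^T y. A minimal NumPy sketch of that solution (the helper name `ridge_fit` is hypothetical, not part of this package):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge solution: w = (X^T X + alpha I)^-1 X^T y.

    y may be 1-D (single target) or 2-D of shape (n_samples, n_targets);
    in the 2-D case one coefficient column is returned per target.
    """
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Larger alpha shrinks the coefficients toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)
w_small = ridge_fit(X, y, alpha=0.01)
w_large = ridge_fit(X, y, alpha=100.0)
assert np.linalg.norm(w_large) < np.linalg.norm(w_small)
```

Adding alpha * I to X^T X is what "improves the conditioning of the problem": the matrix being inverted stays well-conditioned even when X has (nearly) collinear columns.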
Arguments

x
matrix. Training data.

y
matrix. Target values.

alpha
float or array-like of shape (n_targets). Regularization strength; must be a positive float. Regularization improves the conditioning of the problem and reduces the variance of the estimates. Larger values specify stronger regularization. Alpha corresponds to C^-1 in other linear models such as LogisticRegression or LinearSVC.

fit_intercept
boolean. Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. the data is expected to be already centered).

normalize
boolean, optional, default False. If True, the regressors X will be normalized before regression. This parameter is ignored when fit_intercept is set to False.

copy_X
boolean, optional, default True. If True, X will be copied; else, it may be overwritten.

max_iter
int, optional. Maximum number of iterations for the conjugate gradient solver. For the 'sparse_cg' and 'lsqr' solvers, the default value is determined by scipy.sparse.linalg. For the 'sag' solver, the default value is 1000.

tol
float. Precision of the solution.

solver
one of 'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag'. Solver to use in the computational routines:
- 'auto' chooses the solver automatically based on the type of data.
- 'svd' uses a Singular Value Decomposition of X to compute the Ridge coefficients; it is more stable for singular matrices than 'cholesky'.
- 'cholesky' uses the standard scipy.linalg.solve function to obtain a closed-form solution.
- 'sparse_cg' uses the conjugate gradient solver from scipy.sparse.linalg.cg; it is more appropriate than 'cholesky' for large-scale data.
- 'lsqr' uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr.
- 'sag' uses Stochastic Average Gradient descent; it is fast when both n_samples and n_features are large.

random_state
int seed, RandomState instance, or None (default). The seed of the pseudo-random number generator to use when shuffling the data. Used only in the 'sag' solver.
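The direct solvers all compute the same ridge estimate, just along different numerical paths. A sketch, assuming nothing beyond NumPy, of the 'cholesky'-style normal-equations path versus the 'svd' path (with X = U S V^T, the coefficients are w = V diag(s / (s^2 + alpha)) U^T y); the helper names are hypothetical:

```python
import numpy as np

def ridge_cholesky(X, y, alpha):
    # Normal-equations path: solve (X^T X + alpha I) w = X^T y directly.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

def ridge_svd(X, y, alpha):
    # SVD path: with X = U S V^T, w = V diag(s / (s^2 + alpha)) U^T y.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s**2 + alpha)
    return Vt.T @ (d * (U.T @ y))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))
y = rng.normal(size=40)
w1 = ridge_cholesky(X, y, alpha=2.0)
w2 = ridge_svd(X, y, alpha=2.0)
assert np.allclose(w1, w2)
```

The SVD path avoids forming X^T X, which is why it stays stable when X is singular or nearly so; the normal-equations path is cheaper on small, well-conditioned problems.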
Format

An object of class R6ClassGenerator of length 24.