Elastic Net model with iterative fitting along a regularization path.
See glossary entry for cross-validation estimator.
Read more in the User Guide.
Float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a combination of L1 and L2. This parameter can be a list, in which case the different values are tested by cross-validation and the one giving the best prediction score is used. Note that a good choice of list of values for l1_ratio is often to put more values close to 1 (i.e. Lasso) and fewer close to 0 (i.e. Ridge), as in [.1, .5, .7, .9, .95, .99, 1].
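For illustration, a minimal sketch (arbitrary synthetic data) passing a list of l1_ratio values and reading back the ratio selected by cross-validation:

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> X, y = make_regression(n_features=10, n_informative=5, random_state=0)
>>> # More candidate values near 1 (Lasso-like), fewer near 0 (Ridge-like).
>>> ratios = [.1, .5, .7, .9, .95, .99, 1]
>>> regr = ElasticNetCV(l1_ratio=ratios, cv=5, random_state=0).fit(X, y)
>>> regr.l1_ratio_ in ratios  # mixing ratio selected by cross-validation
True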
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
Number of alphas along the regularization path, used for each l1_ratio.
List of alphas where to compute the models. If None, alphas are set automatically.
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
The maximum number of iterations.
The tolerance for the optimization: if the updates are smaller than tol, the optimization code checks the dual gap for optimality and continues until it is smaller than tol.
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- int, to specify the number of folds,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.
For int/None inputs, KFold is used.
Refer to the User Guide for the various cross-validation strategies that can be used here.
Changed in version 0.22: cv default value if None changed from 3-fold to 5-fold.
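For illustration, a minimal sketch (arbitrary synthetic data) passing a KFold splitter object instead of an integer:

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.model_selection import KFold
>>> X, y = make_regression(n_features=5, random_state=0)
>>> # An int selects KFold with that many folds; a splitter object gives full control.
>>> cv = KFold(n_splits=4, shuffle=True, random_state=0)
>>> regr = ElasticNetCV(cv=cv, random_state=0).fit(X, y)
>>> regr.coef_.shape
(5,)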
If True, X will be copied; else, it may be overwritten.
Amount of verbosity.
Number of CPUs to use during the cross validation. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
When set to True, forces the coefficients to be positive.
The seed of the pseudo random number generator that selects a random feature to update. Used when selection == 'random'. Pass an int for reproducible output across multiple function calls. See Glossary.
If set to 'random', a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to 'random') often leads to significantly faster convergence, especially when tol is higher than 1e-4.
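For illustration, a minimal sketch using random coefficient selection with a fixed seed:

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> X, y = make_regression(n_features=20, n_informative=5, random_state=0)
>>> # With selection='random', random_state fixes the order of coefficient updates.
>>> regr = ElasticNetCV(selection='random', random_state=0).fit(X, y)
>>> regr.coef_.shape
(20,)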
The amount of penalization chosen by cross validation.
The compromise between l1 and l2 penalization chosen by cross validation.
Parameter vector (w in the cost function formula).
Independent term in the decision function.
Mean square error for the test set on each fold, varying l1_ratio and alpha.
The grid of alphas used for fitting, for each l1_ratio.
The dual gaps at the end of the optimization for the optimal alpha.
Number of iterations run by the coordinate descent solver to reach the specified tolerance for the optimal alpha.
Number of features seen during fit.
Added in version 0.24.
Names of features seen during fit (ndarray of shape (n_features_in_,)). Defined only when X has feature names that are all strings.
Added in version 1.0.
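For illustration, a minimal sketch inspecting a few of these fitted attributes; the shapes shown assume the usual layout when l1_ratio is a list (one alpha grid per ratio, and an MSE path indexed by l1_ratio, alpha and fold):

>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> X, y = make_regression(n_features=4, random_state=0)
>>> regr = ElasticNetCV(l1_ratio=[.5, .9], n_alphas=10, cv=3).fit(X, y)
>>> regr.coef_.shape     # parameter vector w, one entry per feature
(4,)
>>> regr.alphas_.shape   # one alpha grid per candidate l1_ratio
(2, 10)
>>> regr.mse_path_.ndim  # indexed by (l1_ratio, alpha, fold) when l1_ratio is a list
3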
See also
enet_path : Compute elastic net path with coordinate descent.
ElasticNet : Linear regression with combined L1 and L2 priors as regularizer.
In fit, once the best parameters l1_ratio and alpha are found through cross-validation, the model is fit again using the entire training set.
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array.
The parameter l1_ratio corresponds to alpha in the glmnet R package while alpha corresponds to the lambda parameter in glmnet. More specifically, the optimization objective is:
1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to:
a * L1 + b * L2
for:
alpha = a + b and l1_ratio = a / (a + b).
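As a small numeric illustration of this reparametrization, the helper below (hypothetical, defined here only for the example) converts separate L1 and L2 strengths a and b into (alpha, l1_ratio):

>>> # Hypothetical helper translating separate L1 (a) and L2 (b) strengths
>>> # into the (alpha, l1_ratio) parametrization used here.
>>> def l1_l2_to_enet(a, b):
...     return a + b, a / (a + b)
...
>>> l1_l2_to_enet(3.0, 1.0)  # alpha = a + b, l1_ratio = a / (a + b)
(4.0, 0.75)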
For an example, see examples/linear_model/plot_lasso_model_selection.py.
>>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNetCV(cv=5, random_state=0)
>>> regr.fit(X, y)
ElasticNetCV(cv=5, random_state=0)
>>> print(regr.alpha_)
0.199...
>>> print(regr.intercept_)
0.398...
>>> print(regr.predict([[0, 0]]))
[0.398...]
Fit ElasticNet model with coordinate descent.
Fit is on grid of alphas and best alpha estimated by cross-validation.
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. Note that large sparse matrices and arrays requiring int64 indices are not accepted.
Target values.
Sample weights used for fitting and evaluation of the weighted mean squared error of each cv-fold. Note that the cross validated MSE that is finally used to find the best model is the unweighted mean over the (weighted) MSEs of each test fold.
Parameters to be passed to the CV splitter.
Added in version 1.4: Only available if enable_metadata_routing=True, which can be set by using sklearn.set_config(enable_metadata_routing=True). See Metadata Routing User Guide for more details.
Returns an instance of fitted model.
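A minimal sketch of a call to fit, passing Fortran-contiguous data and per-sample weights (synthetic data, arbitrary weights):

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> X, y = make_regression(n_samples=200, n_features=5, random_state=0)
>>> # Fortran-contiguous input lets the coordinate descent solver avoid a copy.
>>> X = np.asfortranarray(X)
>>> # Per-sample weights are used for fitting and for each fold's weighted MSE.
>>> w = np.full(len(y), 2.0)
>>> regr = ElasticNetCV(cv=5, random_state=0).fit(X, y, sample_weight=w)
>>> regr.coef_.shape
(5,)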
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Added in version 1.4.
A MetadataRouter encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Compute elastic net path with coordinate descent.
The elastic net optimization function varies for mono and multi-outputs.
For mono-output tasks it is:
1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2
For multi-output tasks it is:
(1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2
Where:
||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2}
i.e. the sum of the norms of each row.
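A quick NumPy check of this mixed norm on an arbitrary matrix:

>>> import numpy as np
>>> W = np.array([[3.0, 4.0], [0.0, 0.0], [6.0, 8.0]])
>>> # ||W||_21: Euclidean norm of each row, summed over rows.
>>> float(np.sum(np.sqrt(np.sum(W ** 2, axis=1))))
15.0
>>> float(np.linalg.norm(W, axis=1).sum())  # same quantity
15.0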
Read more in the User Guide.
Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output then X can be sparse.
Target values.
Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). l1_ratio=1 corresponds to the Lasso.
Length of the path. eps=1e-3 means that alpha_min / alpha_max = 1e-3.
Number of alphas along the regularization path.
List of alphas where to compute the models. If None, alphas are set automatically.
Whether to use a precomputed Gram matrix to speed up calculations. If set to 'auto', let us decide. The Gram matrix can also be passed as argument.
Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed.
If True, X will be copied; else, it may be overwritten.
The initial values of the coefficients.
Amount of verbosity.
Whether to return the number of iterations or not.
If set to True, forces coefficients to be positive. (Only allowed when y.ndim == 1).
If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller.
Keyword arguments passed to the coordinate descent solver.
The alphas along the path where models are computed.
Coefficients along the path.
The dual gaps at the end of the optimization for each alpha.
The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Returned only when return_n_iter is set to True.)
See also
MultiTaskElasticNet : Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
MultiTaskElasticNetCV : Multi-task L1/L2 ElasticNet with built-in cross-validation.
ElasticNet : Linear regression with combined L1 and L2 priors as regularizer.
ElasticNetCV : Elastic Net model with iterative fitting along a regularization path.
For an example, see examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py.
>>> from sklearn.linear_model import enet_path
>>> from sklearn.datasets import make_regression
>>> X, y, true_coef = make_regression(
... n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0
... )
>>> true_coef
array([ 0. , 0. , 0. , 97.9..., 45.7...])
>>> alphas, estimated_coef, _ = enet_path(X, y, n_alphas=3)
>>> alphas.shape
(3,)
>>> estimated_coef
array([[ 0. , 0.78..., 0.56...],
[ 0. , 1.12..., 0.61...],
[-0. , -2.12..., -1.12...],
[ 0. , 23.04..., 88.93...],
[ 0. , 10.63..., 41.56...]])
Predict using the linear model.
Samples.
Returns predicted values.
Return the coefficient of determination of the prediction.
The coefficient of determination \(R^2\) is defined as \((1 - \frac{u}{v})\), where \(u\) is the residual sum of squares ((y_true - y_pred) ** 2).sum() and \(v\) is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an \(R^2\) score of 0.0.
Test samples. For some estimators this may be a precomputed kernel matrix or a list of generic objects instead with shape (n_samples, n_samples_fitted), where n_samples_fitted is the number of samples used in the fitting for the estimator.
True values for X.
Sample weights.
\(R^2\) of self.predict(X) w.r.t. y.
The \(R^2\) score used when calling score on a regressor uses multioutput='uniform_average' from version 0.23 to keep consistent with default value of r2_score. This influences the score method of all the multioutput regressors (except for MultiOutputRegressor).
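For illustration, the score value can be reproduced from the definition above (synthetic data, arbitrary settings):

>>> import numpy as np
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> X, y = make_regression(n_features=2, noise=5.0, random_state=0)
>>> regr = ElasticNetCV(cv=5, random_state=0).fit(X, y)
>>> y_pred = regr.predict(X)
>>> u = ((y - y_pred) ** 2).sum()    # residual sum of squares
>>> v = ((y - y.mean()) ** 2).sum()  # total sum of squares
>>> bool(np.isclose(1 - u / v, regr.score(X, y)))
True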
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for sample_weight parameter in fit.
The updated object.
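A minimal sketch of the intended use inside a meta-estimator, assuming scikit-learn >= 1.4 with metadata routing enabled; once routing is on, the scaler must also state whether it wants the weights:

>>> import numpy as np
>>> from sklearn import set_config
>>> from sklearn.datasets import make_regression
>>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> set_config(enable_metadata_routing=True)
>>> X, y = make_regression(n_features=3, random_state=0)
>>> w = np.full(len(y), 2.0)
>>> # The regressor asks for sample_weight; the scaler explicitly declines it.
>>> enet = ElasticNetCV(cv=5, random_state=0).set_fit_request(sample_weight=True)
>>> scaler = StandardScaler().set_fit_request(sample_weight=False)
>>> pipe = make_pipeline(scaler, enet).fit(X, y, sample_weight=w)
>>> set_config(enable_metadata_routing=False)  # restore the default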
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
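For example, nested parameters of an ElasticNetCV inside a Pipeline can be updated as follows (sketch only):

>>> from sklearn.linear_model import ElasticNetCV
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> pipe = make_pipeline(StandardScaler(), ElasticNetCV(cv=5))
>>> # Nested parameters use the <component>__<parameter> syntax.
>>> pipe = pipe.set_params(elasticnetcv__max_iter=5000, elasticnetcv__l1_ratio=0.9)
>>> pipe.get_params()['elasticnetcv__max_iter']
5000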
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for sample_weight parameter in score.
The updated object.