
# 3.2.4.1.5. sklearn.linear_model.LogisticRegressionCV

`class sklearn.linear_model.LogisticRegressionCV(Cs=10, fit_intercept=True, cv='warn', dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='warn', random_state=None)` [source]

Logistic Regression CV (aka logit, MaxEnt) classifier.

This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizers. The newton-cg, sag and lbfgs solvers support only L2 regularization with a primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty.

For the grid of Cs values (set by default to ten values on a logarithmic scale between 1e-4 and 1e4), the best hyperparameter is selected by the cross-validator StratifiedKFold, but this can be changed using the cv parameter. In the case of the newton-cg and lbfgs solvers, we warm start along the path, i.e. the initial coefficients of the present fit are guessed to be the coefficients obtained after convergence in the previous fit, so it is expected to be faster for high-dimensional dense data.
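For illustration, a minimal sketch that passes the grid and the cross-validator explicitly; `np.logspace(-4, 4, 10)` mirrors the default ten-point grid described above:

```
# Minimal sketch: an explicit Cs grid and cross-validator, mirroring the
# defaults described above (ten log-spaced values between 1e-4 and 1e4).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(
    Cs=np.logspace(-4, 4, 10),       # explicit grid instead of the int shorthand Cs=10
    cv=StratifiedKFold(n_splits=5),  # the default strategy for classification tasks
    solver='lbfgs',                  # warm-starts along the Cs path
    max_iter=1000,                   # raised so lbfgs converges on this data
    multi_class='multinomial',
)
clf.fit(X, y)
print(clf.C_)                        # the chosen C per class
```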

For a multiclass problem, the hyperparameters for each class are computed using the best scores obtained by doing a one-vs-rest in parallel across all folds and classes. Hence this is not the true multinomial loss.

Read more in the User Guide.

Parameters:

`Cs : list of floats | int`
Each of the values in Cs describes the inverse of regularization strength. If Cs is an int, then a grid of Cs values is chosen on a logarithmic scale between 1e-4 and 1e4. As in support vector machines, smaller values specify stronger regularization.

`fit_intercept : bool, default: True`
Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function.

`cv : integer or cross-validation generator, default: None`
The default cross-validation generator used is Stratified K-Folds. If an integer is provided, then it is the number of folds used. See the `sklearn.model_selection` module for the list of possible cross-validation objects. Changed in version 0.20: `cv` default value if None will change from 3-fold to 5-fold in v0.22.

`dual : bool`
Dual or primal formulation. Dual formulation is only implemented for the l2 penalty with the liblinear solver. Prefer dual=False when n_samples > n_features.

`penalty : str, 'l1' or 'l2'`
Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties.

`scoring : string, callable, or None`
A string (see model evaluation documentation) or a scorer callable object / function with signature `scorer(estimator, X, y)`. For a list of scoring functions that can be used, look at `sklearn.metrics`. The default scoring option used is 'accuracy'.

`solver : str, {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, default: 'lbfgs'`
Algorithm to use in the optimization problem. For small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones. For multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle the multinomial loss; 'liblinear' is limited to one-versus-rest schemes. 'newton-cg', 'lbfgs' and 'sag' only handle the L2 penalty, whereas 'liblinear' and 'saga' handle the L1 penalty. 'liblinear' might be slower in LogisticRegressionCV because it does not handle warm-starting. Note that fast convergence of 'sag' and 'saga' is only guaranteed on features with approximately the same scale; you can preprocess the data with a scaler from `sklearn.preprocessing`. New in version 0.17: Stochastic Average Gradient descent solver. New in version 0.19: SAGA solver.

`tol : float, optional`
Tolerance for stopping criteria.

`max_iter : int, optional`
Maximum number of iterations of the optimization algorithm.

`class_weight : dict or 'balanced', optional`
Weights associated with classes in the form `{class_label: weight}`. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as `n_samples / (n_classes * np.bincount(y))`. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. New in version 0.17: class_weight == 'balanced'.

`n_jobs : int or None, optional (default=None)`
Number of CPU cores used during the cross-validation loop. `None` means 1 unless in a `joblib.parallel_backend` context. `-1` means using all processors. See Glossary for more details.

`verbose : int`
For the 'liblinear', 'sag' and 'lbfgs' solvers, set verbose to any positive number for verbosity.

`refit : bool`
If set to True, the scores are averaged across all folds, the coefs and the C that correspond to the best score are taken, and a final refit is done using these parameters. Otherwise the coefs, intercepts and C that correspond to the best scores across folds are averaged.

`intercept_scaling : float, default 1.`
Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes `intercept_scaling * synthetic_feature_weight`. Note that the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.

`multi_class : str, {'ovr', 'multinomial', 'auto'}, default: 'ovr'`
If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, even when the data is binary. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'. New in version 0.18: Stochastic Average Gradient descent solver for the 'multinomial' case. Changed in version 0.20: Default will change from 'ovr' to 'auto' in 0.22.

`random_state : int, RandomState instance or None, optional, default None`
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`.

Attributes:

`coef_ : array, shape (1, n_features) or (n_classes, n_features)`
Coefficient of the features in the decision function. `coef_` is of shape (1, n_features) when the given problem is binary.

`intercept_ : array, shape (1,) or (n_classes,)`
Intercept (a.k.a. bias) added to the decision function. If `fit_intercept` is set to False, the intercept is set to zero. `intercept_` is of shape (1,) when the problem is binary.

`Cs_ : array`
Array of C, i.e. the inverse of regularization parameter values used for cross-validation.

`coefs_paths_ : array, shape (n_folds, len(Cs_), n_features) or (n_folds, len(Cs_), n_features + 1)`
dict with classes as the keys, and the path of coefficients obtained during cross-validating across each fold and then across each Cs after doing an OvR for the corresponding class as values. If the 'multi_class' option is set to 'multinomial', then the coefs_paths are the coefficients corresponding to each class. Each dict value has shape `(n_folds, len(Cs_), n_features)` or `(n_folds, len(Cs_), n_features + 1)` depending on whether the intercept is fit or not.

`scores_ : dict`
dict with classes as the keys, and the values as the grid of scores obtained during cross-validating each fold, after doing an OvR for the corresponding class. If the 'multi_class' option given is 'multinomial', then the same scores are repeated across all classes, since this is the multinomial loss. Each dict value has shape (n_folds, len(Cs)).

`C_ : array, shape (n_classes,) or (n_classes - 1,)`
Array of C that maps to the best scores across every class. If refit is set to False, then for each class, the best C is the average of the C's that correspond to the best scores for each fold. `C_` is of shape (n_classes,) when the problem is binary.

`n_iter_ : array, shape (n_classes, n_folds, n_cs) or (1, n_folds, n_cs)`
Actual number of iterations for all classes, folds and Cs. In the binary or multinomial cases, the first dimension is equal to 1.
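To make several of the knobs above concrete, here is a hedged sketch combining `class_weight`, `scoring` and `n_jobs` on a synthetic, imbalanced binary problem (dataset and values are purely illustrative):

```
# Illustrative sketch: cross-validated C selection scored by ROC AUC on an
# imbalanced binary problem, with class weights adjusted inversely to frequency.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
clf = LogisticRegressionCV(
    Cs=5,
    cv=5,
    scoring='roc_auc',        # any scorer string from sklearn.metrics works here
    class_weight='balanced',  # n_samples / (n_classes * np.bincount(y))
    max_iter=1000,
    n_jobs=-1,                # parallelize the cross-validation loop
)
clf.fit(X, y)
for label, grid in clf.scores_.items():
    print(label, grid.shape)  # grid of AUC scores, shape (n_folds, len(Cs_))
```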

#### Examples

```
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=5, random_state=0,
...                            multi_class='multinomial').fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :]).shape
(2, 3)
>>> clf.score(X, y)
0.98...
```
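Continuing in the same vein, a short sketch inspecting the fitted attributes documented above (`Cs_`, `C_`, `scores_`); the shapes in the comments follow the attribute descriptions:

```
# Sketch: inspecting the cross-validation attributes after fitting.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, random_state=0, max_iter=1000,
                           multi_class='multinomial').fit(X, y)

print(clf.Cs_.shape)           # (10,): the grid of inverse-regularization values
print(clf.C_)                  # best C per class
for label, grid in clf.scores_.items():
    print(label, grid.shape)   # (n_folds, len(Cs_)) score grid per class
```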

#### Methods

| Method | Description |
| --- | --- |
| `decision_function(X)` | Predict confidence scores for samples. |
| `densify()` | Convert coefficient matrix to dense array format. |
| `fit(X, y[, sample_weight])` | Fit the model according to the given training data. |
| `get_params([deep])` | Get parameters for this estimator. |
| `predict(X)` | Predict class labels for samples in X. |
| `predict_log_proba(X)` | Log of probability estimates. |
| `predict_proba(X)` | Probability estimates. |
| `score(X, y[, sample_weight])` | Returns the score using the `scoring` option on the given test data and labels. |
| `set_params(**params)` | Set the parameters of this estimator. |
| `sparsify()` | Convert coefficient matrix to sparse format. |
`__init__(Cs=10, fit_intercept=True, cv='warn', dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='warn', random_state=None)` [source]
`decision_function(X)` [source]

Predict confidence scores for samples.

The confidence score for a sample is the signed distance of that sample to the hyperplane.

Parameters: `X : array_like or sparse matrix, shape (n_samples, n_features)` Samples.

Returns: `array, shape (n_samples,) if n_classes == 2 else (n_samples, n_classes)` Confidence scores per (sample, class) combination. In the binary case, the confidence score for self.classes_[1], where > 0 means this class would be predicted.
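As a minimal sketch, the score is the affine quantity `X @ coef_.T + intercept_` (the geometric signed distance would additionally divide by the coefficient norm), which can be checked numerically:

```
# Sketch: decision_function computes the affine score w^T x + b.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)

scores = clf.decision_function(X[:2])
manual = X[:2] @ clf.coef_.T + clf.intercept_
print(np.allclose(scores, manual))  # True
```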
`densify()` [source]

Convert coefficient matrix to dense array format.

Converts the `coef_` member (back) to a numpy.ndarray. This is the default format of `coef_` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.

Returns: `self : estimator`
`fit(X, y, sample_weight=None)` [source]

Fit the model according to the given training data.

Parameters: `X : {array-like, sparse matrix}, shape (n_samples, n_features)` Training vector, where n_samples is the number of samples and n_features is the number of features. `y : array-like, shape (n_samples,) or (n_samples, n_targets)` Target vector relative to X. `sample_weight : array-like, shape (n_samples,) optional` Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight.

Returns: `self : object`
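A minimal sketch of fitting with per-sample weights; the weights here are arbitrary and purely illustrative:

```
# Sketch: up-weighting samples increases their influence on the loss.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
w = np.where(y == 2, 2.0, 1.0)   # arbitrary: give class-2 samples double weight
clf = LogisticRegressionCV(cv=5, max_iter=1000)
clf.fit(X, y, sample_weight=w)
print(clf.score(X, y))
```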
`get_params(deep=True)` [source]

Get parameters for this estimator.

Parameters: `deep : boolean, optional` If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: `params : mapping of string to any` Parameter names mapped to their values.
`predict(X)` [source]

Predict class labels for samples in X.

Parameters: `X : array_like or sparse matrix, shape (n_samples, n_features)` Samples.

Returns: `C : array, shape [n_samples]` Predicted class label per sample.
`predict_log_proba(X)` [source]

Log of probability estimates.

The returned estimates for all classes are ordered by the label of classes.

Parameters: `X : array-like, shape = [n_samples, n_features]`

Returns: `T : array-like, shape = [n_samples, n_classes]` Returns the log-probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
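Since this method is defined as the log of the probability estimates, it should agree with `np.log(predict_proba(X))` up to floating-point error; a quick sketch:

```
# Sketch: predict_log_proba agrees with the log of predict_proba.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)
print(np.allclose(clf.predict_log_proba(X[:5]),
                  np.log(clf.predict_proba(X[:5]))))  # True
```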
`predict_proba(X)` [source]

Probability estimates.

The returned estimates for all classes are ordered by the label of classes.

For a multi_class problem, if multi_class is set to "multinomial" the softmax function is used to find the predicted probability of each class. Otherwise a one-vs-rest approach is used, i.e. the probability of each class is calculated assuming it to be positive, using the logistic function, and these values are normalized across all classes.

Parameters: `X : array-like, shape = [n_samples, n_features]`

Returns: `T : array-like, shape = [n_samples, n_classes]` Returns the probability of the sample for each class in the model, where classes are ordered as they are in `self.classes_`.
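A minimal sketch of the multinomial case: the probabilities should equal the softmax of the decision scores, as described above:

```
# Sketch: with multi_class='multinomial', predict_proba is the softmax of the
# decision_function scores; checked numerically here.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, max_iter=1000,
                           multi_class='multinomial').fit(X, y)

scores = clf.decision_function(X[:3])
manual = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print(np.allclose(clf.predict_proba(X[:3]), manual))  # True
```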
`score(X, y, sample_weight=None)` [source]

Returns the score using the `scoring` option on the given test data and labels.

Parameters: `X : array-like, shape = (n_samples, n_features)` Test samples. `y : array-like, shape = (n_samples,)` True labels for X. `sample_weight : array-like, shape = [n_samples], optional` Sample weights.

Returns: `score : float` Score of self.predict(X) w.r.t. y.
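With the default scoring option (accuracy, per the parameter description above), this should coincide with `sklearn.metrics.accuracy_score`; a hedged sketch:

```
# Sketch: under the default scoring (accuracy), score() matches accuracy_score.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)
print(np.isclose(clf.score(X, y), accuracy_score(y, clf.predict(X))))  # True
```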
`set_params(**params)` [source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.

Returns: self
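A minimal sketch of the nested `<component>__<parameter>` convention; the pipeline step name `logregcv` is arbitrary:

```
# Sketch: updating a nested estimator's parameters through a Pipeline.
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([('scale', StandardScaler()),
                 ('logregcv', LogisticRegressionCV())])
pipe.set_params(logregcv__Cs=5, logregcv__cv=3)  # <step>__<parameter>
print(pipe.get_params()['logregcv__Cs'])         # 5
```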
`sparsify()` [source]

Convert coefficient matrix to sparse format.

Converts the `coef_` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.

The `intercept_` member is not converted.

Returns: `self : estimator`

#### Notes

For non-sparse models, i.e. when there are not many zeros in `coef_`, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with `(coef_ == 0).sum()`, must be more than 50% for this to provide significant benefits.

After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
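A minimal sketch of the rule of thumb above, together with a sparsify/densify round trip; with the default L2 penalty the coefficients are rarely sparse, so this mostly illustrates the API:

```
# Sketch: check the zero fraction of coef_, then round-trip sparsify/densify.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegressionCV

X, y = load_iris(return_X_y=True)
clf = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)

print(np.mean(clf.coef_ == 0))  # sparsify pays off only above ~0.5
clf.sparsify()                  # coef_ becomes a scipy.sparse matrix
print(type(clf.coef_))
clf.densify()                   # back to a dense ndarray (needed for refitting)
print(type(clf.coef_))
```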
