class sklearn.linear_model.LogisticRegressionCV(Cs=10, fit_intercept=True, cv='warn', dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='warn', random_state=None)
Logistic Regression CV (aka logit, MaxEnt) classifier.
This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizers. The newton-cg, sag and lbfgs solvers support only L2 regularization with primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty.
For the grid of Cs values (set by default to ten values on a logarithmic scale between 1e-4 and 1e4), the best hyperparameter is selected by the cross-validator StratifiedKFold, but this can be changed using the cv parameter. In the case of the newton-cg and lbfgs solvers, we warm start along the path, i.e. the initial coefficients of the present fit are set to the coefficients obtained after convergence in the previous fit, which is expected to be faster for high-dimensional dense data.
For a multiclass problem, the hyperparameters for each class are computed using the best scores obtained by doing one-vs-rest in parallel across all folds and classes. Hence this is not the true multinomial loss.
Read more in the User Guide.
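A short illustrative sketch (not part of the original reference; the iris data, the explicit StratifiedKFold, and the max_iter value are assumptions of this example) showing the default grid and the cv parameter:

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> from sklearn.model_selection import StratifiedKFold
>>> X, y = load_iris(return_X_y=True)
>>> # Cs=10 is equivalent to passing np.logspace(-4, 4, 10) explicitly;
>>> # an integer or a cross-validator object can be supplied via ``cv``.
>>> clf = LogisticRegressionCV(Cs=np.logspace(-4, 4, 10),
...                            cv=StratifiedKFold(n_splits=5),
...                            multi_class='ovr', max_iter=500).fit(X, y)
>>> clf.C_.shape  # one selected C per class under one-vs-rest
(3,)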
See also
LogisticRegression
Examples

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=5, random_state=0,
...                            multi_class='multinomial').fit(X, y)
>>> clf.predict(X[:2, :])
array([0, 0])
>>> clf.predict_proba(X[:2, :]).shape
(2, 3)
>>> clf.score(X, y)
0.98...
Methods

decision_function(X) | Predict confidence scores for samples.
densify() | Convert coefficient matrix to dense array format.
fit(X, y[, sample_weight]) | Fit the model according to the given training data.
get_params([deep]) | Get parameters for this estimator.
predict(X) | Predict class labels for samples in X.
predict_log_proba(X) | Log of probability estimates.
predict_proba(X) | Probability estimates.
score(X, y[, sample_weight]) | Returns the score using the scoring option on the given test data and labels.
set_params(**params) | Set the parameters of this estimator.
sparsify() | Convert coefficient matrix to sparse format.
__init__(Cs=10, fit_intercept=True, cv='warn', dual=False, penalty='l2', scoring=None, solver='lbfgs', tol=0.0001, max_iter=100, class_weight=None, n_jobs=None, verbose=0, refit=True, intercept_scaling=1.0, multi_class='warn', random_state=None)
decision_function(X)
Predict confidence scores for samples.
The confidence score for a sample is the signed distance of that sample to the hyperplane.
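An illustrative check (not from the original docs; the iris data and max_iter setting are assumptions of this sketch) that predict agrees with the argmax of the confidence scores:

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, multi_class='ovr', max_iter=200).fit(X, y)
>>> scores = clf.decision_function(X[:5])  # shape (5, n_classes)
>>> # predict() returns the class with the highest confidence score:
>>> bool((clf.classes_[scores.argmax(axis=1)] == clf.predict(X[:5])).all())
True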
densify()
Convert coefficient matrix to dense array format.
Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
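A minimal round-trip sketch, assuming the same iris setup used elsewhere on this page:

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, multi_class='ovr', max_iter=200).fit(X, y)
>>> _ = clf.sparsify()   # coef_ becomes a scipy.sparse matrix
>>> _ = clf.densify()    # back to the default numpy.ndarray format
>>> _ = clf.densify()    # already dense, so this call is a no-op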
fit(X, y, sample_weight=None)
Fit the model according to the given training data.
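A hedged sketch of passing the optional sample_weight (the uniform weights here are purely illustrative, not from the original docs):

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> w = np.ones_like(y, dtype=float)  # illustrative per-sample weights
>>> clf = LogisticRegressionCV(cv=3, multi_class='ovr',
...                            max_iter=200).fit(X, y, sample_weight=w)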
get_params(deep=True)
Get parameters for this estimator.
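For example, reading back a constructor default:

>>> from sklearn.linear_model import LogisticRegressionCV
>>> LogisticRegressionCV().get_params()['Cs']
10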
predict(X)
Predict class labels for samples in X.
predict_log_proba(X)
Log of probability estimates.
The returned estimates for all classes are ordered by class label.
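An illustrative consistency check (the iris setup and max_iter value are assumptions of this sketch):

>>> import numpy as np
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, multi_class='multinomial',
...                            max_iter=200).fit(X, y)
>>> bool(np.allclose(clf.predict_log_proba(X[:5]),
...                  np.log(clf.predict_proba(X[:5]))))
True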
predict_proba(X)
Probability estimates.
The returned estimates for all classes are ordered by class label.
For a multiclass problem, if multi_class is set to "multinomial" the softmax function is used to find the predicted probability of each class. Otherwise a one-vs-rest approach is used, i.e. the probability of each class is calculated with the logistic function, assuming that class to be positive, and these values are normalized across all classes.
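An illustrative reconstruction of the one-vs-rest rule described above (the iris data and max_iter setting are assumptions of this sketch; for a multinomial model the softmax of the decision scores would be used instead):

>>> import numpy as np
>>> from scipy.special import expit
>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> ovr = LogisticRegressionCV(cv=3, multi_class='ovr',
...                            max_iter=200).fit(X, y)
>>> p = expit(ovr.decision_function(X[:5]))  # one logistic score per class
>>> p /= p.sum(axis=1, keepdims=True)        # normalize across classes
>>> bool(np.allclose(p, ovr.predict_proba(X[:5])))
True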
score(X, y, sample_weight=None)
Returns the score using the scoring option on the given test data and labels.
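With the default scoring option this is plain accuracy; an illustrative check under the usual iris setup (the data and max_iter value are assumptions of this sketch):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> from sklearn.metrics import accuracy_score
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, multi_class='ovr',
...                            max_iter=200).fit(X, y)
>>> bool(clf.score(X, y) == accuracy_score(y, clf.predict(X)))
True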
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
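An illustrative nested-parameter example using a pipeline (the pipeline composition here is an assumption of this sketch):

>>> from sklearn.linear_model import LogisticRegressionCV
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> pipe = Pipeline([('scale', StandardScaler()),
...                  ('clf', LogisticRegressionCV())])
>>> # Nested parameters follow the <component>__<parameter> convention:
>>> _ = pipe.set_params(clf__Cs=5)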
sparsify()
Convert coefficient matrix to sparse format.
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The intercept_ member is not converted.
Notes

For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
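A hedged sketch applying the 50% rule of thumb before sparsifying (the iris setup is an assumption; an L2-penalized model like this one typically has few zero coefficients, so the guard is usually not taken here):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegressionCV
>>> X, y = load_iris(return_X_y=True)
>>> clf = LogisticRegressionCV(cv=3, multi_class='ovr', max_iter=200).fit(X, y)
>>> if (clf.coef_ == 0).mean() > 0.5:  # rule of thumb from the Notes above
...     _ = clf.sparsify()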
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html