class sklearn.linear_model.SGDRegressor(loss='squared_loss', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False, n_iter=None)
Linear model fitted by minimizing a regularized empirical loss with SGD.
SGD stands for Stochastic Gradient Descent: the gradient of the loss is estimated one sample at a time, and the model is updated along the way with a decreasing strength schedule (aka learning rate).
The regularizer is a penalty added to the loss function that shrinks model parameters towards the zero vector using either the squared Euclidean norm (L2), the absolute norm (L1), or a combination of both (Elastic Net). If a parameter update crosses the 0.0 value because of the regularizer, the update is truncated to 0.0, which allows for learning sparse models and achieving online feature selection.
This implementation works with data represented as dense numpy arrays of floating point values for the features.
Read more in the User Guide.
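As an illustration of the update rule (this sketch is not scikit-learn's actual implementation; the function sgd_epoch and its variable names are made up), one SGD pass for the squared loss with an L2 penalty and the default 'invscaling' schedule eta = eta0 / t**power_t looks roughly like:

>>> import numpy as np
>>> def sgd_epoch(X, y, w, b, alpha=0.0001, eta0=0.01, power_t=0.25, t=1):
...     # one pass of plain SGD: squared loss + L2 penalty (illustrative only)
...     for xi, yi in zip(X, y):
...         eta = eta0 / t ** power_t             # 'invscaling' schedule
...         err = np.dot(w, xi) + b - yi          # gradient of 0.5*(pred - y)**2
...         w = w - eta * (err * xi + alpha * w)  # loss + L2 penalty gradients
...         b = b - eta * err                     # the intercept is not penalized
...         t += 1
...     return w, b, t
>>> w, b, t = sgd_epoch(np.random.randn(10, 5), np.random.randn(10), np.zeros(5), 0.0)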
See also

Ridge, ElasticNet, Lasso, sklearn.svm.SVR
Examples

>>> import numpy as np
>>> from sklearn import linear_model
>>> n_samples, n_features = 10, 5
>>> np.random.seed(0)
>>> y = np.random.randn(n_samples)
>>> X = np.random.randn(n_samples, n_features)
>>> clf = linear_model.SGDRegressor(max_iter=1000)
>>> clf.fit(X, y)
SGDRegressor(alpha=0.0001, average=False, early_stopping=False, epsilon=0.1,
       eta0=0.01, fit_intercept=True, l1_ratio=0.15,
       learning_rate='invscaling', loss='squared_loss', max_iter=1000,
       n_iter=None, n_iter_no_change=5, penalty='l2', power_t=0.25,
       random_state=None, shuffle=True, tol=None, validation_fraction=0.1,
       verbose=0, warm_start=False)
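As a practical aside not found on this page: SGD is sensitive to feature scaling, so the features are commonly standardized first, for example with a Pipeline (synthetic data for illustration):

>>> import numpy as np
>>> from sklearn import linear_model
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> X, y = np.random.randn(10, 5), np.random.randn(10)
>>> reg = make_pipeline(StandardScaler(), linear_model.SGDRegressor(max_iter=1000))
>>> reg = reg.fit(X, y)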
Methods

densify() | Convert coefficient matrix to dense array format.
fit(X, y[, coef_init, intercept_init, …]) | Fit linear model with Stochastic Gradient Descent.
get_params([deep]) | Get parameters for this estimator.
partial_fit(X, y[, sample_weight]) | Perform one epoch of stochastic gradient descent on the given samples.
predict(X) | Predict using the linear model.
score(X, y[, sample_weight]) | Returns the coefficient of determination R^2 of the prediction.
set_params(*args, **kwargs) | Set the parameters of this estimator.
sparsify() | Convert coefficient matrix to sparse format.
__init__(loss='squared_loss', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False, n_iter=None)
densify()
Convert coefficient matrix to dense array format.
Converts the coef_ member (back) to a numpy.ndarray. This is the default format of coef_ and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
    self : estimator
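A rough sketch of the intended round-trip, on made-up data: sparsify a fitted model for compact storage, then densify again before any further fitting:

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> X, y = np.random.randn(20, 5), np.random.randn(20)
>>> clf = SGDRegressor(penalty='l1', max_iter=1000).fit(X, y)
>>> clf = clf.sparsify()   # coef_ becomes a scipy.sparse matrix
>>> clf = clf.densify()    # back to a dense ndarray, required for fitting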
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None)
Fit linear model with Stochastic Gradient Descent.
Parameters:
    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        Training data.
    y : numpy array, shape (n_samples,)
        Target values.
    coef_init : array, shape (n_features,)
        The initial coefficients to warm-start the optimization.
    intercept_init : array, shape (1,)
        The initial intercept to warm-start the optimization.
    sample_weight : array-like, shape (n_samples,), optional
        Weights applied to individual samples (1. for unweighted).
Returns:
    self : returns an instance of self.
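For illustration (synthetic data; the zero initializations are arbitrary), coef_init and intercept_init can seed the optimization, e.g. with coefficients carried over from a previous model:

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> X, y = np.random.randn(20, 3), np.random.randn(20)
>>> est = SGDRegressor(max_iter=1000)
>>> est = est.fit(X, y, coef_init=np.zeros(3), intercept_init=np.zeros(1))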
get_params(deep=True)
Get parameters for this estimator.
Parameters:
    deep : boolean, optional
        If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
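A short illustration (hypothetical values) of reading parameters back, together with its counterpart set_params:

>>> from sklearn.linear_model import SGDRegressor
>>> est = SGDRegressor(alpha=0.001)
>>> est.get_params()['alpha']
0.001
>>> est = est.set_params(alpha=0.01)   # the counterpart setter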
partial_fit(X, y, sample_weight=None)
Perform one epoch of stochastic gradient descent on the given samples. Internally, this method uses max_iter = 1, so it is not guaranteed that a minimum of the cost function is reached after calling it once.
Parameters:
    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        Subset of training data.
    y : numpy array, shape (n_samples,)
        Subset of target values.
    sample_weight : array-like, shape (n_samples,), optional
        Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns:
    self : returns an instance of self.
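A sketch of the typical out-of-core pattern, using a simulated stream of mini-batches (all data here is made up):

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> rng = np.random.RandomState(0)
>>> est = SGDRegressor()
>>> for _ in range(10):                      # simulate a stream of mini-batches
...     Xb, yb = rng.randn(32, 5), rng.randn(32)
...     est = est.partial_fit(Xb, yb)        # one epoch over this batch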
predict(X)
Predict using the linear model.
Parameters:
    X : {array-like, sparse matrix}, shape (n_samples, n_features)
        Samples.
Returns:
    array, shape (n_samples,)
        Predicted target values per element in X.
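A minimal illustration on synthetic data:

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> X, y = np.random.randn(10, 4), np.random.randn(10)
>>> est = SGDRegressor(max_iter=1000).fit(X, y)
>>> est.predict(X[:3]).shape
(3,)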
score(X, y, sample_weight=None)
Returns the coefficient of determination R^2 of the prediction.
The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.
Parameters:
    X : array-like, shape (n_samples, n_features)
        Test samples.
    y : array-like, shape (n_samples,) or (n_samples, n_outputs)
        True values for X.
    sample_weight : array-like, shape (n_samples,), optional
        Sample weights.
Returns:
    score : float
        R^2 of self.predict(X) w.r.t. y.
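The definition can be checked directly; the following sketch (synthetic data, arbitrary coefficients) recomputes u and v by hand and compares the result with score:

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> rng = np.random.RandomState(0)
>>> X = rng.randn(50, 3)
>>> y = X.dot(np.array([1.0, -2.0, 0.5])) + 0.1 * rng.randn(50)
>>> est = SGDRegressor(max_iter=1000).fit(X, y)
>>> y_pred = est.predict(X)
>>> u = ((y - y_pred) ** 2).sum()            # residual sum of squares
>>> v = ((y - y.mean()) ** 2).sum()          # total sum of squares
>>> bool(np.isclose(est.score(X, y), 1 - u / v))
True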
sparsify()
Convert coefficient matrix to sparse format.
Converts the coef_ member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The intercept_ member is not converted.
Returns:
    self : estimator
Notes

For non-sparse models, i.e. when there are not many zeros in coef_, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum(), must be more than 50% for this to provide significant benefits.

After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
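A sketch of that rule of thumb in code (the L1 model and data are made up for illustration):

>>> import numpy as np
>>> from sklearn.linear_model import SGDRegressor
>>> rng = np.random.RandomState(0)
>>> X, y = rng.randn(100, 20), rng.randn(100)
>>> est = SGDRegressor(penalty='l1', alpha=0.1, max_iter=1000).fit(X, y)
>>> if (est.coef_ == 0).sum() > 0.5 * est.coef_.size:   # rule of thumb above
...     est = est.sparsify()                            # coef_ -> scipy.sparse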
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html