class sklearn.linear_model.Perceptron(penalty=None, alpha=0.0001, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, n_iter=None)
Read more in the User Guide.
Parameters:
penalty : None, 'l2', 'l1', or 'elasticnet'
    The penalty (aka regularization term) to be used. Defaults to None.
alpha : float
    Constant that multiplies the regularization term if regularization is used. Defaults to 0.0001.
fit_intercept : bool
    Whether the intercept should be estimated or not. If False, the data is assumed to be already centered. Defaults to True.
max_iter : int, optional
    The maximum number of passes over the training data (aka epochs).
tol : float or None, optional
    The stopping criterion. If it is not None, the iterations will stop when (loss > previous_loss - tol).
shuffle : bool, optional, default True
    Whether or not the training data should be shuffled after each epoch.
verbose : integer, optional
    The verbosity level.
eta0 : double
    Constant by which the updates are multiplied. Defaults to 1.
n_jobs : int or None, optional (default=None)
    The number of CPUs to use for the OVA (One Versus All, for multi-class problems) computation.
random_state : int, RandomState instance or None, optional, default 0
    The seed of the pseudo random number generator to use when shuffling the data.
early_stopping : bool, default=False
    Whether to use early stopping to terminate training when the validation score is not improving.
validation_fraction : float, default=0.1
    The proportion of training data to set aside as a validation set for early stopping. Only used if early_stopping is True.
n_iter_no_change : int, default=5
    Number of iterations with no improvement to wait before early stopping.
class_weight : dict {class_label: weight}, "balanced", or None, optional
    Preset for the class_weight fit parameter. Weights associated with classes. If not given, all classes are supposed to have weight one.
warm_start : bool, optional
    When set to True, reuse the solution of the previous call to fit as initialization, otherwise just erase the previous solution.
n_iter : int, optional
    The number of passes over the training data (aka epochs). Deprecated; will be removed in 0.21. Use max_iter instead.
Attributes:
coef_ : array, shape (1, n_features) if n_classes == 2 else (n_classes, n_features)
    Weights assigned to the features.
intercept_ : array, shape (1,) if n_classes == 2 else (n_classes,)
    Constants in the decision function.
n_iter_ : int
    The actual number of iterations to reach the stopping criterion.
See also
SGDClassifier

Notes
Perceptron is a classification algorithm which shares the same underlying implementation with SGDClassifier. In fact, Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
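As an illustrative sketch (assuming the digits dataset; the exact agreement relies on both estimators sharing every remaining default, including the seed), the two models learn identical coefficients:

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron, SGDClassifier
>>> X, y = load_digits(return_X_y=True)
>>> p = Perceptron(max_iter=5, tol=None, random_state=0).fit(X, y)
>>> s = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
...                   penalty=None, max_iter=5, tol=None,
...                   random_state=0).fit(X, y)
>>> np.allclose(p.coef_, s.coef_)
True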
References
https://en.wikipedia.org/wiki/Perceptron and references therein.
Examples

>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(tol=1e-3, random_state=0)
>>> clf.fit(X, y)
Perceptron(alpha=0.0001, class_weight=None, early_stopping=False, eta0=1.0,
      fit_intercept=True, max_iter=None, n_iter=None, n_iter_no_change=5,
      n_jobs=None, penalty=None, random_state=0, shuffle=True, tol=0.001,
      validation_fraction=0.1, verbose=0, warm_start=False)
>>> clf.score(X, y)
0.946...
decision_function (X) | Predict confidence scores for samples. |
densify () | Convert coefficient matrix to dense array format. |
fit (X, y[, coef_init, intercept_init, …]) | Fit linear model with Stochastic Gradient Descent. |
get_params ([deep]) | Get parameters for this estimator. |
partial_fit (X, y[, classes, sample_weight]) | Fit linear model with Stochastic Gradient Descent. |
predict (X) | Predict class labels for samples in X. |
score (X, y[, sample_weight]) | Returns the mean accuracy on the given test data and labels. |
set_params (*args, **kwargs) | Set the parameters of this estimator. |
sparsify () | Convert coefficient matrix to sparse format. |
__init__(penalty=None, alpha=0.0001, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, eta0=1.0, n_jobs=None, random_state=0, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, class_weight=None, warm_start=False, n_iter=None)
decision_function(X)
Predict confidence scores for samples.
The confidence score for a sample is the signed distance of that sample to the hyperplane.
Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Samples.
Returns:
array, shape (n_samples,) if n_classes == 2 else (n_samples, n_classes)
    Confidence scores per (sample, class) combination. In the binary case, the confidence score for self.classes_[1], where > 0 means this class would be predicted.
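A quick sketch (reusing the digits setup from the class-level example): the scores have one column per class, and the predicted label is the class with the highest score:

>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(tol=1e-3, random_state=0).fit(X, y)
>>> scores = clf.decision_function(X[:3])
>>> scores.shape
(3, 10)
>>> (clf.classes_[scores.argmax(axis=1)] == clf.predict(X[:3])).all()
True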
densify()
Convert coefficient matrix to dense array format.
Converts the coef_
member (back) to a numpy.ndarray. This is the default format of coef_
and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op.
Returns:
self : estimator
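A minimal round-trip sketch, assuming clf is the fitted model from the example above:

>>> from scipy import sparse
>>> _ = clf.sparsify()   # coef_ becomes a scipy.sparse matrix
>>> sparse.issparse(clf.coef_)
True
>>> _ = clf.densify()    # back to the default dense ndarray
>>> sparse.issparse(clf.coef_)
False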
fit(X, y, coef_init=None, intercept_init=None, sample_weight=None)
Fit linear model with Stochastic Gradient Descent.
Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training data.
y : numpy array, shape (n_samples,)
    Target values.
coef_init : array, shape (n_classes, n_features)
    The initial coefficients to warm-start the optimization.
intercept_init : array, shape (n_classes,)
    The initial intercept to warm-start the optimization.
sample_weight : array-like, shape (n_samples,), optional
    Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns:
self : returns an instance of self.
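As a sketch (reusing X and y from the digits example above), coef_init and intercept_init can seed a new fit with the weights of a previously fitted model:

>>> clf = Perceptron(max_iter=5, tol=None, random_state=0).fit(X, y)
>>> clf2 = Perceptron(max_iter=5, tol=None, random_state=0)
>>> clf2 = clf2.fit(X, y, coef_init=clf.coef_, intercept_init=clf.intercept_)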
get_params(deep=True)
Get parameters for this estimator.
Parameters:
deep : boolean, optional
    If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
params : mapping of string to any
    Parameter names mapped to their values.
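A brief sketch: get_params returns the constructor arguments as a dict, and set_params (documented in the methods table above) updates them in place:

>>> clf = Perceptron()
>>> clf.get_params()["eta0"]
1.0
>>> _ = clf.set_params(eta0=0.5)
>>> clf.get_params()["eta0"]
0.5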
loss_function
DEPRECATED: Attribute loss_function was deprecated in version 0.19 and will be removed in 0.21. Use loss_function_ instead.
partial_fit(X, y, classes=None, sample_weight=None)
Fit linear model with Stochastic Gradient Descent.
Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Subset of the training data.
y : numpy array, shape (n_samples,)
    Subset of the target values.
classes : array, shape (n_classes,)
    Classes across all calls to partial_fit. Can be obtained via np.unique(y_all), where y_all is the target vector of the entire dataset. This argument is required for the first call to partial_fit and can be omitted in subsequent calls.
sample_weight : array-like, shape (n_samples,), optional
    Weights applied to individual samples. If not provided, uniform weights are assumed.
Returns:
self : returns an instance of self.
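A minimal out-of-core sketch, assuming the digits data arrives in batches; classes must be supplied on the first call because later batches may not contain every label:

>>> import numpy as np
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(random_state=0)
>>> classes = np.unique(y)
>>> for start in range(0, X.shape[0], 500):
...     _ = clf.partial_fit(X[start:start + 500], y[start:start + 500],
...                         classes=classes)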
predict(X)
Predict class labels for samples in X.
Parameters:
X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Samples.
Returns:
C : array, shape (n_samples,)
    Predicted class label per sample.
score(X, y, sample_weight=None)
Returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:
X : array-like, shape (n_samples, n_features)
    Test samples.
y : array-like, shape (n_samples,) or (n_samples, n_outputs)
    True labels for X.
sample_weight : array-like, shape (n_samples,), optional
    Sample weights.
Returns:
score : float
    Mean accuracy of self.predict(X) with respect to y.
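In other words (a sketch reusing the fitted digits model from above), score is simply the accuracy of predict:

>>> clf = Perceptron(tol=1e-3, random_state=0).fit(X, y)
>>> clf.score(X, y) == (clf.predict(X) == y).mean()
True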
sparsify()
Convert coefficient matrix to sparse format.
Converts the coef_
member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation.
The intercept_
member is not converted.
Returns:
self : estimator
For non-sparse models, i.e. when there are not many zeros in coef_
, this may actually increase memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with (coef_ == 0).sum()
, must be more than 50% for this to provide significant benefits.
After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
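A sketch of checking the rule of thumb before sparsifying, assuming an L1-penalized model on the digits data (the alpha value here is an illustrative choice; L1 tends to drive many weights to exactly zero):

>>> from scipy import sparse
>>> from sklearn.datasets import load_digits
>>> from sklearn.linear_model import Perceptron
>>> X, y = load_digits(return_X_y=True)
>>> clf = Perceptron(penalty="l1", alpha=0.001, tol=1e-3, random_state=0).fit(X, y)
>>> zero_frac = (clf.coef_ == 0).mean()   # fraction of zero weights
>>> if zero_frac > 0.5:                   # rule of thumb from above
...     _ = clf.sparsify()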
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Perceptron.html