class sklearn.svm.NuSVC(nu=0.5, kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None)
[source]
Nu-Support Vector Classification.
Similar to SVC but uses a parameter to control the number of support vectors.
The implementation is based on libsvm.
Read more in the User Guide.
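The nu parameter is an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors. A minimal sketch (synthetic data, not from this page) showing that a larger nu generally retains more support vectors:

```python
import numpy as np
from sklearn.svm import NuSVC

# Two well-separated Gaussian blobs, 50 samples per class.
rng = np.random.RandomState(0)
X = np.r_[rng.randn(50, 2) + 2, rng.randn(50, 2) - 2]
y = np.r_[np.ones(50), np.zeros(50)]

n_sv = {}
for nu in (0.1, 0.5):
    clf = NuSVC(nu=nu, gamma='scale').fit(X, y)
    # nu is a lower bound on the fraction of support vectors,
    # so nu=0.5 keeps at least half the training samples as SVs.
    n_sv[nu] = clf.support_vectors_.shape[0]
    print(nu, n_sv[nu])
```

With nu=0.5 at least 50 of the 100 samples become support vectors; with nu=0.1 the bound drops to 10, so the fit is typically much sparser.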
Parameters: 


Attributes: 

See also
SVC
Support Vector Machine for classification using libsvm, parameterized by C rather than nu.
Examples

>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
>>> y = np.array([1, 1, 2, 2])
>>> from sklearn.svm import NuSVC
>>> clf = NuSVC(gamma='scale')
>>> clf.fit(X, y)
NuSVC(cache_size=200, class_weight=None, coef0=0.0,
      decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
      max_iter=-1, nu=0.5, probability=False, random_state=None,
      shrinking=True, tol=0.001, verbose=False)
>>> print(clf.predict([[-0.8, -1]]))
[1]

Methods
decision_function (X)  Distance of the samples X to the separating hyperplane. 
fit (X, y[, sample_weight])  Fit the SVM model according to the given training data. 
get_params ([deep])  Get parameters for this estimator. 
predict (X)  Perform classification on samples in X. 
score (X, y[, sample_weight])  Returns the mean accuracy on the given test data and labels. 
set_params (**params)  Set the parameters of this estimator. 
__init__(nu=0.5, kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None)
[source]
decision_function(X)
[source]
Distance of the samples X to the separating hyperplane.
Parameters: 


Returns: 

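A short sketch (reusing the two-class data from the example above) of what decision_function returns — for a binary problem with the default 'ovr' shape, it is a 1-D array of signed distances whose sign determines the predicted class:

```python
import numpy as np
from sklearn.svm import NuSVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
clf = NuSVC(gamma='scale').fit(X, y)

pts = np.array([[-0.8, -1], [2, 1]])
scores = clf.decision_function(pts)
# shape (2,): one signed distance per sample for a binary problem;
# a positive score means the sample falls on the clf.classes_[1] side.
print(scores.shape)
```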
fit(X, y, sample_weight=None)
[source]
Fit the SVM model according to the given training data.
Parameters: 


Returns: 

If X and y are not C-ordered and contiguous arrays of np.float64 and X is not a scipy.sparse.csr_matrix, X and/or y may be copied.
If X is a dense array, then the other methods will not support sparse matrices as input.
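A sketch of the sparse path noted above: fitting directly on a scipy.sparse.csr_matrix avoids the copy, and subsequent predict calls then also accept sparse input (whereas a dense fit locks the other methods to dense input):

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.svm import NuSVC

# Same four training points as the example above, stored as CSR.
X = csr_matrix(np.array([[-1.0, -1.0], [-2.0, -1.0], [1.0, 1.0], [2.0, 1.0]]))
y = np.array([1, 1, 2, 2])

clf = NuSVC(gamma='scale').fit(X, y)
# Because fit saw sparse input, predict accepts sparse input too.
pred = clf.predict(csr_matrix([[-0.8, -1.0]]))
print(pred)
```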
get_params(deep=True)
[source]
Get parameters for this estimator.
Parameters: 


Returns: 

predict(X)
[source]
Perform classification on samples in X.
For a one-class model, +1 or -1 is returned.
Parameters: 


Returns: 

predict_log_proba
Compute log probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training time: fit with attribute probability set to True.
Parameters: 


Returns: 

The probability model is created using cross-validation, so the results can be slightly different from those obtained by predict. Also, it will produce meaningless results on very small datasets.
predict_proba
Compute probabilities of possible outcomes for samples in X.
The model needs to have probability information computed at training time: fit with attribute probability set to True.
Parameters: 


Returns: 

The probability model is created using cross-validation, so the results can be slightly different from those obtained by predict. Also, it will produce meaningless results on very small datasets.
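A minimal sketch (synthetic data, not from this page) of the requirement above — the estimator must be fit with probability=True before predict_proba is available, and the result is one row per sample summing to 1 across classes:

```python
import numpy as np
from sklearn.svm import NuSVC

# Synthetic two-class data; probability estimation needs enough samples
# for the internal cross-validation to be meaningful.
rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) + 2, rng.randn(20, 2) - 2]
y = np.r_[np.zeros(20), np.ones(20)]

# probability=True enables predict_proba; random_state fixes the
# internal cross-validation shuffling.
clf = NuSVC(gamma='scale', probability=True, random_state=0).fit(X, y)
proba = clf.predict_proba(X[:3])
print(proba.shape)        # one row per sample, one column per class
print(proba.sum(axis=1))  # each row sums to 1
```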
score(X, y, sample_weight=None)
[source]
Returns the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters: 


Returns: 

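A quick sketch of what score computes — it is simply the mean accuracy of predict(X) against y (shown here on the two-class example from this page):

```python
import numpy as np
from sklearn.svm import NuSVC

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
y = np.array([1, 1, 2, 2])
clf = NuSVC(gamma='scale').fit(X, y)

acc = clf.score(X, y)
# score is equivalent to the mean of elementwise prediction matches.
manual = np.mean(clf.predict(X) == y)
print(acc, manual)
```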
set_params(**params)
[source]
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>
so that it’s possible to update each component of a nested object.
Returns: 

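A sketch of the nested `<component>__<parameter>` syntax described above, using a Pipeline with a StandardScaler step (chosen here purely for illustration; any nested estimator works the same way):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVC

pipe = Pipeline([('scale', StandardScaler()), ('svc', NuSVC(gamma='scale'))])
# '<component>__<parameter>' routes each value to the named step.
pipe.set_params(svc__nu=0.3, svc__kernel='linear')
params = pipe.get_params()
print(params['svc__nu'], params['svc__kernel'])
```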

© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.svm.NuSVC.html