sklearn.metrics.hinge_loss(y_true, pred_decision, labels=None, sample_weight=None)
Average hinge loss (non-regularized).
In the binary case, assuming labels in y_true are encoded with +1 and -1, when a prediction mistake is made, margin = y_true * pred_decision is always negative (since the signs disagree), implying 1 - margin is always greater than 1. The cumulated hinge loss is therefore an upper bound of the number of mistakes made by the classifier.
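The binary computation can be sketched directly from that definition: the per-sample loss is max(0, 1 - y * pred_decision), and hinge_loss returns the average. The decision values below are made up purely for illustration.

```python
import numpy as np
from sklearn.metrics import hinge_loss

# Hypothetical ±1 labels and decision-function outputs, for illustration only.
y_true = np.array([-1, 1, 1, -1])
pred_decision = np.array([-1.5, 2.0, -0.3, 0.4])

# Per-sample hinge loss: max(0, 1 - y * decision).
margins = y_true * pred_decision
per_sample = np.maximum(0.0, 1.0 - margins)

# hinge_loss averages the per-sample losses.
manual = per_sample.mean()
reference = hinge_loss(y_true, pred_decision)
print(np.isclose(manual, reference))  # True
```

Note how the two correctly classified samples (margins 1.5 and 2.0, beyond the margin boundary) contribute zero loss, while the two samples with negative margins each contribute more than 1.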
In the multiclass case, the function expects that either all the labels are included in y_true or an optional labels argument is provided which contains all the labels. The multilabel margin is calculated according to Crammer–Singer's method. As in the binary case, the cumulated hinge loss is an upper bound of the number of mistakes made by the classifier.
Read more in the User Guide.
Parameters:

y_true : array, shape = [n_samples]
True target, consisting of integers of two values. The positive label must be greater than the negative label.

pred_decision : array, shape = [n_samples] or [n_samples, n_classes]
Predicted decisions, as output by decision_function (floats).

labels : array, optional
Contains all the labels for the problem. Used in multiclass hinge loss.

sample_weight : array-like of shape = [n_samples], optional
Sample weights.

Returns:

loss : float
References

[1] Wikipedia entry on the Hinge loss
[2] Koby Crammer, Yoram Singer. On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines. Journal of Machine Learning Research 2, (2001), 265-292
[3] L1 AND L2 Regularization for Multiclass Hinge Loss Models by Robert C. Moore, John DeNero.
Examples

>>> from sklearn import svm
>>> from sklearn.metrics import hinge_loss
>>> X = [[0], [1]]
>>> y = [-1, 1]
>>> est = svm.LinearSVC(random_state=0)
>>> est.fit(X, y)
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l2', random_state=0, tol=0.0001,
     verbose=0)
>>> pred_decision = est.decision_function([[-2], [3], [0.5]])
>>> pred_decision
array([-2.18...,  2.36...,  0.09...])
>>> hinge_loss([-1, 1, 1], pred_decision)
0.30...
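The upper-bound claim above can also be checked numerically: every mistake has a non-positive margin and so contributes at least 1 to the summed loss. A small sketch with invented decision values:

```python
import numpy as np
from sklearn.metrics import hinge_loss

# Made-up ±1 labels and decision scores; two predictions have the
# wrong sign, i.e. two classification mistakes.
y_true = np.array([-1, 1, 1, -1, 1])
pred_decision = np.array([0.8, -1.2, 2.5, -0.4, 0.1])

mistakes = np.sum(np.sign(pred_decision) != y_true)

# hinge_loss returns the average; multiply back to get the cumulated loss.
cumulated = hinge_loss(y_true, pred_decision) * len(y_true)
print(cumulated >= mistakes)  # True
```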
In the multiclass case:
>>> import numpy as np
>>> X = np.array([[0], [1], [2], [3]])
>>> Y = np.array([0, 1, 2, 3])
>>> labels = np.array([0, 1, 2, 3])
>>> est = svm.LinearSVC()
>>> est.fit(X, Y)
LinearSVC(C=1.0, class_weight=None, dual=True, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l2', random_state=None, tol=0.0001,
     verbose=0)
>>> pred_decision = est.decision_function([[-1], [2], [3]])
>>> y_true = [0, 2, 3]
>>> hinge_loss(y_true, pred_decision, labels)
0.56...
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.hinge_loss.html