sklearn.metrics.log_loss(y_true, y_pred, eps=1e-15, normalize=True, sample_weight=None, labels=None)
Log loss, aka logistic loss or cross-entropy loss.
This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of the true labels given a probabilistic classifier's predictions. The log loss is only defined for two or more labels. For a single sample with true label yt in {0,1} and estimated probability yp that yt = 1, the log loss is
-log P(yt | yp) = -(yt log(yp) + (1 - yt) log(1 - yp))

Read more in the User Guide.
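The binary formula above can be checked directly against the library. A minimal sketch with invented toy data, computing the per-sample loss by hand and comparing it with `sklearn.metrics.log_loss`:

```python
import numpy as np
from sklearn.metrics import log_loss

# Hypothetical toy data: true binary labels and predicted P(yt = 1).
y_true = np.array([1, 0, 0, 1])
y_prob = np.array([0.9, 0.1, 0.2, 0.65])

# Per-sample binary log loss: -(yt*log(yp) + (1 - yt)*log(1 - yp)).
per_sample = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
manual = per_sample.mean()  # normalize=True averages over samples

# sklearn expects one probability column per class: [P(0), P(1)].
sk = log_loss(y_true, np.column_stack([1 - y_prob, y_prob]))
```

Both quantities agree; with these numbers each comes out to about 0.21616, matching the example further down this page.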
Parameters:

y_true : array-like or label indicator matrix
    Ground truth (correct) labels for n_samples samples.
y_pred : array-like of float, shape = (n_samples, n_classes) or (n_samples,)
    Predicted probabilities, as returned by a classifier's predict_proba method. If y_pred.shape = (n_samples,), the probabilities provided are assumed to be those of the positive class.
eps : float
    Log loss is undefined for p=0 or p=1, so probabilities are clipped to max(eps, min(1 - eps, p)).
normalize : bool, optional (default=True)
    If true, return the mean loss per sample. Otherwise, return the sum of the per-sample losses.
sample_weight : array-like of shape (n_samples,), optional (default=None)
    Sample weights.
labels : array-like, optional (default=None)
    If not provided, labels will be inferred from y_true.

Returns:

loss : float
Notes

The logarithm used is the natural logarithm (base e).
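The choice of base matters when comparing loss values across libraries. A quick sketch confirming the natural logarithm: with a predicted probability of 0.5 for every sample, the loss equals ln 2 rather than log2(2) = 1.

```python
import math
from sklearn.metrics import log_loss

# Two samples, both predicted at 50/50: each contributes -ln(0.5) = ln 2.
loss = log_loss([0, 1], [[0.5, 0.5], [0.5, 0.5]])
# loss is ln 2 ~= 0.6931, confirming the natural logarithm is used.
```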
References

C.M. Bishop (2006). Pattern Recognition and Machine Learning. Springer, p. 209.
Examples

>>> from sklearn.metrics import log_loss
>>> log_loss(["spam", "ham", "ham", "spam"],
...          [[.1, .9], [.9, .1], [.8, .2], [.35, .65]])
0.21616...
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html