Log loss, aka logistic loss or cross-entropy loss.
This is the loss function used in (multinomial) logistic regression and extensions of it such as neural networks, defined as the negative log-likelihood of a logistic model that returns y_pred probabilities for its training data y_true. The log loss is only defined for two or more labels. For a single sample with true label \(y \in \{0,1\}\) and a probability estimate \(p = \operatorname{Pr}(y = 1)\), the log loss is:

\[L_{\log}(y, p) = -\log \operatorname{Pr}(y|p) = -(y \log(p) + (1 - y) \log(1 - p))\]
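As an illustrative check (not part of the upstream docstring; variable names are ours), the binary formula above can be evaluated directly with NumPy and compared against log_loss:

>>> import numpy as np
>>> from sklearn.metrics import log_loss
>>> y = np.array([1, 0, 0, 1])            # true binary labels
>>> p = np.array([0.9, 0.1, 0.2, 0.65])   # predicted Pr(y = 1)
>>> # Per-sample loss from the formula, averaged over samples.
>>> manual = float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
>>> round(manual, 5)
0.21616
>>> round(log_loss(y, p), 5)
0.21616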
Read more in the User Guide.
Parameters
----------
y_true : array-like or label indicator matrix
    Ground truth (correct) labels for n_samples samples.
y_pred : array-like of float, shape = (n_samples, n_classes) or (n_samples,)
    Predicted probabilities, as returned by a classifier's predict_proba method. If y_pred.shape = (n_samples,) the probabilities provided are assumed to be those of the positive class. The labels in y_pred are assumed to be ordered alphabetically, as done by LabelBinarizer.

    y_pred values are clipped to [eps, 1-eps] where eps is the machine precision for y_pred's dtype.
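For example (a sketch of ours, not from the docstring), the two accepted shapes of y_pred are interchangeable in the binary case; with string labels, "ham" sorts before "spam", so the 1-D form holds the probability of "spam":

>>> import numpy as np
>>> from sklearn.metrics import log_loss
>>> y_true = ["spam", "ham", "ham", "spam"]
>>> full = [[0.1, 0.9], [0.9, 0.1], [0.8, 0.2], [0.35, 0.65]]  # (n_samples, n_classes)
>>> pos = [0.9, 0.1, 0.2, 0.65]                                # (n_samples,), "spam" class
>>> bool(np.isclose(log_loss(y_true, full), log_loss(y_true, pos)))
True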
normalize : bool, default=True
    If true, return the mean loss per sample. Otherwise, return the sum of the per-sample losses.
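A small sketch (ours) of how the two settings relate: the sum equals the mean times the number of samples:

>>> import numpy as np
>>> from sklearn.metrics import log_loss
>>> y_true = [0, 1, 1]
>>> y_pred = [0.2, 0.7, 0.9]
>>> mean_loss = log_loss(y_true, y_pred)                    # normalize=True (default)
>>> total_loss = log_loss(y_true, y_pred, normalize=False)  # sum over samples
>>> bool(np.isclose(total_loss, mean_loss * len(y_true)))
True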
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.
labels : array-like, default=None
    If not provided, labels will be inferred from y_true. If labels is None and y_pred has shape (n_samples,), the labels are assumed to be binary and are inferred from y_true.

    Added in version 0.18.
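One situation where passing labels explicitly matters (an illustrative sketch, not from the docstring) is a batch in which only one class happens to occur, so the full label set cannot be inferred from y_true:

>>> from sklearn.metrics import log_loss
>>> y_true = [1, 1, 1]                                 # only class 1 present
>>> y_pred = [[0.2, 0.8], [0.1, 0.9], [0.3, 0.7]]
>>> round(log_loss(y_true, y_pred, labels=[0, 1]), 5)  # labels supplied explicitly
0.22839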
Returns
-------
loss : float
    Log loss, aka logistic loss or cross-entropy loss.
Notes
-----
The logarithm used is the natural logarithm (base-e).
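As a quick consequence (our own check, not from the docstring), a uniform binary prediction of 0.5 costs exactly ln(2) ≈ 0.6931 per sample:

>>> import numpy as np
>>> from sklearn.metrics import log_loss
>>> bool(np.isclose(log_loss([0, 1], [0.5, 0.5]), np.log(2)))
True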
References
----------
C.M. Bishop (2006). Pattern Recognition and Machine Learning. Springer, p. 209.
Examples
--------
>>> from sklearn.metrics import log_loss
>>> log_loss(["spam", "ham", "ham", "spam"],
...          [[.1, .9], [.9, .1], [.8, .2], [.35, .65]])
0.21616...