Compute error rates for different probability thresholds.
Note
This metric is used to evaluate the ranking and error tradeoffs of a binary classification task.
Read more in the User Guide.
Added in version 0.24.
Parameters
y_true : True binary labels. If the labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given.
y_score : Target scores, which can be probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by decision_function on some classifiers). For decision_function scores, values greater than or equal to zero should indicate the positive class.
pos_label : The label of the positive class. When pos_label=None, pos_label is set to 1 if y_true is in {-1, 1} or {0, 1}; otherwise an error is raised.
sample_weight : Sample weights.
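When y_true uses labels other than {-1, 1} or {0, 1}, pos_label must be passed explicitly. A minimal sketch, assuming made-up "ham"/"spam" labels and reusing the scores from the docstring example:

```python
import numpy as np
from sklearn.metrics import det_curve

# String labels are neither {-1, 1} nor {0, 1}, so omitting pos_label
# would raise an error; name the positive class explicitly.
y_true = ["ham", "ham", "spam", "spam"]
y_scores = [0.1, 0.4, 0.35, 0.8]
fpr, fnr, thresholds = det_curve(y_true, y_scores, pos_label="spam")
```

With "spam" as the positive class this yields the same curve as the {0, 1} example below.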
Returns
fpr : False positive rate (FPR) such that element i is the false positive rate of predictions with score >= thresholds[i]. This is occasionally referred to as false acceptance probability or fall-out.
fnr : False negative rate (FNR) such that element i is the false negative rate of predictions with score >= thresholds[i]. This is occasionally referred to as false rejection or miss rate.
thresholds : Increasing score values used as decision thresholds.
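The score >= thresholds[i] convention above can be checked with a short pure-NumPy sketch; det_points is a hypothetical helper written only to mirror these definitions, not a scikit-learn function:

```python
import numpy as np

def det_points(y_true, y_score, thresholds):
    """Recompute (fpr, fnr) at each threshold, counting a prediction as
    positive when its score is >= the threshold."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    fpr, fnr = [], []
    for t in thresholds:
        pred_pos = y_score >= t
        fp = np.sum(pred_pos & (y_true == 0))   # negatives accepted
        fn = np.sum(~pred_pos & (y_true == 1))  # positives rejected
        fpr.append(fp / np.sum(y_true == 0))
        fnr.append(fn / np.sum(y_true == 1))
    return np.array(fpr), np.array(fnr)

# Toy data from the docstring example; thresholds 0.35, 0.4, 0.8
fpr, fnr = det_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], [0.35, 0.4, 0.8])
```

At threshold 0.35 every positive is accepted (fnr = 0) but one of the two negatives (score 0.4) is also accepted (fpr = 0.5), matching the docstring example's first elements.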
See also
DetCurveDisplay.from_estimator : Plot DET curve given an estimator and some data.
DetCurveDisplay.from_predictions : Plot DET curve given the true and predicted labels.
DetCurveDisplay : DET curve visualization.
roc_curve : Compute Receiver operating characteristic (ROC) curve.
precision_recall_curve : Compute precision-recall curve.
Examples

>>> import numpy as np
>>> from sklearn.metrics import det_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, fnr, thresholds = det_curve(y_true, y_scores)
>>> fpr
array([0.5, 0.5, 0. ])
>>> fnr
array([0. , 0.5, 0.5])
>>> thresholds
array([0.35, 0.4 , 0.8 ])
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.metrics.det_curve.html