sklearn.metrics.precision_recall_curve(y_true, probas_pred, pos_label=None, sample_weight=None)
Compute precision-recall pairs for different probability thresholds.
Note: this implementation is restricted to the binary classification task.
The precision is the ratio tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.
The recall is the ratio tp / (tp + fn), where tp is the number of true positives and fn the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples.
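For concreteness, here is a minimal sketch (an addition, not part of the original reference) that computes precision and recall at a single fixed threshold directly from these counts; the 0.5 cutoff is an arbitrary choice for illustration:

    import numpy as np

    y_true = np.array([0, 0, 1, 1])             # true binary labels
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])  # classifier scores

    threshold = 0.5                     # arbitrary illustrative threshold
    y_pred = y_scores >= threshold      # predicted positives at this threshold

    tp = np.sum(y_pred & (y_true == 1))     # true positives
    fp = np.sum(y_pred & (y_true == 0))     # false positives
    fn = np.sum(~y_pred & (y_true == 1))    # false negatives

    print(tp / (tp + fp))   # precision: 1 / (1 + 0) = 1.0
    print(tp / (tp + fn))   # recall:    1 / (1 + 1) = 0.5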
The last precision and recall values are 1. and 0. respectively and do not have a corresponding threshold. This ensures that the graph starts on the y axis.
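A small sanity-check sketch (an addition, reusing the toy data from the example below) of this shape guarantee:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    y_true = np.array([0, 0, 1, 1])
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])
    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

    # precision and recall have one more element than thresholds:
    # the final (precision=1, recall=0) point carries no threshold.
    assert len(precision) == len(recall) == len(thresholds) + 1
    assert precision[-1] == 1.0 and recall[-1] == 0.0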
Read more in the User Guide.
Parameters:

y_true : array, shape = [n_samples]
    True targets of binary classification in range {-1, 1} or {0, 1}.

probas_pred : array, shape = [n_samples]
    Estimated probabilities or decision function.

pos_label : int or str, default=None
    The label of the positive class.

sample_weight : array-like of shape = [n_samples], optional
    Sample weights.

Returns:

precision : array, shape = [n_thresholds + 1]
    Precision values such that element i is the precision of predictions with score >= thresholds[i] and the last element is 1.

recall : array, shape = [n_thresholds + 1]
    Decreasing recall values such that element i is the recall of predictions with score >= thresholds[i] and the last element is 0.

thresholds : array, shape = [n_thresholds <= len(np.unique(probas_pred))]
    Increasing thresholds on the decision function used to compute precision and recall.
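When the labels are not in {-1, 1} or {0, 1}, pos_label must name the positive class explicitly; a brief sketch (the string labels here are invented for illustration):

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    y_true = np.array(["neg", "neg", "pos", "pos"])   # non-{0, 1} labels
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])

    # pos_label identifies which class counts as positive.
    precision, recall, thresholds = precision_recall_curve(
        y_true, y_scores, pos_label="pos")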
See also
average_precision_score
    Compute average precision from prediction scores.

roc_curve
    Compute Receiver operating characteristic (ROC) curve.
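To make the link to average_precision_score concrete, the following sketch (an addition, not from the original page) recovers the average precision as the step-wise area under this curve:

    import numpy as np
    from sklearn.metrics import average_precision_score, precision_recall_curve

    y_true = np.array([0, 0, 1, 1])
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])
    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

    # AP = sum_n (R_n - R_{n-1}) * P_n; recall is decreasing here,
    # hence the sign flip on np.diff(recall).
    ap = -np.sum(np.diff(recall) * precision[:-1])
    assert np.isclose(ap, average_precision_score(y_true, y_scores))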
Examples

>>> import numpy as np
>>> from sklearn.metrics import precision_recall_curve
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> precision, recall, thresholds = precision_recall_curve(
...     y_true, y_scores)
>>> precision
array([0.66666667, 0.5       , 1.        , 1.        ])
>>> recall
array([1. , 0.5, 0.5, 0. ])
>>> thresholds
array([0.35, 0.4 , 0.8 ])
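A common follow-up is to plot the returned pairs; a minimal matplotlib sketch (matplotlib is not required by the function itself):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics import precision_recall_curve

    y_true = np.array([0, 0, 1, 1])
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])
    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

    # Step plot: precision is piecewise constant between thresholds.
    plt.step(recall, precision, where="post")
    plt.xlabel("Recall")
    plt.ylabel("Precision")
    plt.xlim(0.0, 1.0)
    plt.ylim(0.0, 1.05)
    plt.title("Precision-Recall curve")
    plt.show()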