Compute Cohen’s kappa: a statistic that measures inter-annotator agreement.
This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on a classification problem. It is defined as

\[\kappa = \frac{p_o - p_e}{1 - p_e}\]
where \(p_o\) is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and \(p_e\) is the expected agreement when both annotators assign labels randomly. \(p_e\) is estimated using a per-annotator empirical prior over the class labels [2].
Read more in the User Guide.
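As a concrete sketch of how \(p_o\) and \(p_e\) enter the score, the snippet below recomputes the unweighted statistic from the annotators’ confusion matrix with NumPy and checks it against cohen_kappa_score. The intermediate names (C, p_o, p_e) are illustrative only and not part of the scikit-learn API.

import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y1 = ["negative", "positive", "negative", "neutral", "positive"]
y2 = ["negative", "positive", "negative", "neutral", "negative"]

# Confusion matrix between the two annotators (not against a ground truth).
C = confusion_matrix(y1, y2)
n = C.sum()

# p_o: observed agreement ratio, i.e. the fraction of samples on the diagonal.
p_o = np.trace(C) / n

# p_e: chance agreement from each annotator's empirical label distribution.
p1 = C.sum(axis=1) / n  # label frequencies for the first annotator
p2 = C.sum(axis=0) / n  # label frequencies for the second annotator
p_e = np.sum(p1 * p2)

kappa = (p_o - p_e) / (1 - p_e)
print(kappa)                      # approximately 0.6875
print(cohen_kappa_score(y1, y2))  # same value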
y1 : array-like of shape (n_samples,)
    Labels assigned by the first annotator.

y2 : array-like of shape (n_samples,)
    Labels assigned by the second annotator. The kappa statistic is symmetric, so swapping y1 and y2 doesn’t change the value.

labels : array-like of shape (n_classes,), default=None
    List of labels to index the matrix. This may be used to select a subset of labels. If None, all labels that appear at least once in y1 or y2 are used.

weights : {“linear”, “quadratic”}, default=None
    Weighting type to calculate the score. None means not weighted; “linear” means linear weighting; “quadratic” means quadratic weighting. See the weighted example below.

sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.
kappa : float
    The kappa statistic, which is a number between -1 and 1. The maximum value means complete agreement; zero or lower means chance agreement.
>>> from sklearn.metrics import cohen_kappa_score
>>> y1 = ["negative", "positive", "negative", "neutral", "positive"]
>>> y2 = ["negative", "positive", "negative", "neutral", "negative"]
>>> cohen_kappa_score(y1, y2)
np.float64(0.6875)
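For ordinal labels, the weights parameter makes near-misses count less than distant disagreements: “linear” weights a disagreement by the distance between the label indices, “quadratic” by its square. The sketch below uses invented 1–5 ratings purely to illustrate the call; the exact scores depend on the data.

from sklearn.metrics import cohen_kappa_score

# Two annotators rating the same items on an ordinal 1-5 scale (made-up data).
r1 = [1, 2, 3, 4, 5, 3, 2, 1]
r2 = [1, 2, 3, 4, 4, 2, 2, 1]

print(cohen_kappa_score(r1, r2))                       # unweighted
print(cohen_kappa_score(r1, r2, weights="linear"))     # linear weighting
print(cohen_kappa_score(r1, r2, weights="quadratic"))  # quadratic weighting
print(cohen_kappa_score(r1, r1))                       # identical labels give 1.0

Here both disagreements are between adjacent categories, so the weighted scores come out higher than the unweighted one.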