sklearn.metrics.cohen_kappa_score(y1, y2, labels=None, weights=None, sample_weight=None)
Cohen’s kappa: a statistic that measures inter-annotator agreement.
This function computes Cohen’s kappa [1], a score that expresses the level of agreement between two annotators on a classification problem. It is defined as

\[\kappa = \frac{p_o - p_e}{1 - p_e}\]
where \(p_o\) is the empirical probability of agreement on the label assigned to any sample (the observed agreement ratio), and \(p_e\) is the expected agreement when both annotators assign labels randomly. \(p_e\) is estimated using a per-annotator empirical prior over the class labels [2].
Read more in the User Guide.
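As a sketch of the definition above, the snippet below computes \(p_o\) and \(p_e\) by hand from a confusion matrix and checks the result against cohen_kappa_score. The annotator labels are toy data made up for illustration, not taken from the scikit-learn documentation.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Toy labels assigned by two annotators to the same six samples (illustrative).
y1 = ["cat", "cat", "dog", "cat", "dog", "dog"]
y2 = ["cat", "dog", "dog", "cat", "dog", "cat"]

cm = confusion_matrix(y1, y2)
n = cm.sum()

# Observed agreement p_o: fraction of samples on which the annotators agree.
p_o = np.trace(cm) / n

# Expected chance agreement p_e: per-annotator empirical class priors,
# multiplied class by class and summed.
p_e = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2

kappa_manual = (p_o - p_e) / (1 - p_e)
print(kappa_manual)               # manual computation of the formula above
print(cohen_kappa_score(y1, y2))  # should give the same value
```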
Parameters:

y1 : array, shape = [n_samples]
    Labels assigned by the first annotator.
y2 : array, shape = [n_samples]
    Labels assigned by the second annotator. The kappa statistic is symmetric, so swapping y1 and y2 does not change the value.
labels : array, shape = [n_classes], optional
    List of labels to index the matrix. This may be used to select a subset of labels. If None, all labels that appear at least once in y1 or y2 are used.
weights : str, optional
    Weighting type to calculate the score. None means no weighting; 'linear' means linear weighting; 'quadratic' means quadratic weighting.
sample_weight : array-like of shape = [n_samples], optional
    Sample weights.

Returns:

kappa : float
    The kappa statistic, which is a number between -1 and 1. The maximum value means complete agreement; zero or lower means chance agreement.
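A minimal usage sketch of the weights parameter; the rating arrays below are illustrative only. With ordinal labels, linear or quadratic weighting penalizes larger disagreements more heavily than the unweighted statistic:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal ratings (1-4 scale) from two annotators.
rater_a = [1, 2, 3, 4, 4, 2, 3, 1]
rater_b = [1, 2, 3, 3, 4, 2, 4, 2]

# Unweighted kappa: every disagreement counts the same.
print(cohen_kappa_score(rater_a, rater_b))

# Weighted kappa: disagreements far apart on the scale are penalized more.
print(cohen_kappa_score(rater_a, rater_b, weights="linear"))
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```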
References:

[1] J. Cohen (1960). “A coefficient of agreement for nominal scales”. Educational and Psychological Measurement 20(1):37-46. doi:10.1177/001316446002000104.
[2] R. Artstein and M. Poesio (2008). “Inter-coder agreement for computational linguistics”. Computational Linguistics 34(4):555-596.
[3] Wikipedia entry for Cohen’s kappa.