Compute the balanced accuracy.
The balanced accuracy is used in binary and multiclass classification problems to deal with imbalanced datasets. It is defined as the average of the recall obtained on each class.
The best value is 1 and the worst value is 0 when adjusted=False.
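The definition above can be checked by hand: compute the recall of each class separately and average them. A minimal sketch with made-up labels (the arrays below are illustrative, not from the original page):

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, recall_score

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]

# Per-class recall: class 0 -> 3/4 correct, class 1 -> 1/2 correct
per_class_recall = recall_score(y_true, y_pred, average=None)
manual = per_class_recall.mean()  # (0.75 + 0.5) / 2 = 0.625

# Matches the library's balanced accuracy
print(manual, balanced_accuracy_score(y_true, y_pred))
```

Note that plain accuracy on this sample would be 4/6 ≈ 0.667, higher than the balanced score because the majority class dominates it.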
Read more in the User Guide.
Added in version 0.20.
Parameters
y_true : array-like of shape (n_samples,)
    Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
    Estimated targets as returned by a classifier.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.
adjusted : bool, default=False
    When True, the result is adjusted for chance, so that random performance would score 0, while keeping perfect performance at a score of 1.
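A sketch of the `adjusted=True` rescaling, assuming the chance level is 1/n_classes (so 0.5 for a binary problem) and that the score is linearly rescaled from the [chance, 1] range to [0, 1]; labels are made up for illustration:

```python
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]

raw = balanced_accuracy_score(y_true, y_pred)                 # 0.625
adj = balanced_accuracy_score(y_true, y_pred, adjusted=True)

# Binary problem: chance level 0.5, rescaled as (raw - 0.5) / (1 - 0.5)
print(raw, adj)  # 0.625 0.25
```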
Returns
balanced_accuracy : float
    Balanced accuracy score.
See also
average_precision_score : Compute average precision (AP) from prediction scores.
precision_score : Compute the precision score.
recall_score : Compute the recall score.
roc_auc_score : Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
Notes
Some literature promotes alternative definitions of balanced accuracy. Our definition is equivalent to accuracy_score with class-balanced sample weights, and shares desirable properties with the binary case. See the User Guide.
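The stated equivalence can be verified directly: weight each sample inversely to its class frequency and pass those weights to accuracy_score. A sketch with illustrative labels (the weighting scheme below is one way to build class-balanced weights, assumed here rather than taken from the page):

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 0])

# Give each sample a weight inversely proportional to its class count
classes, counts = np.unique(y_true, return_counts=True)
weight_of = {c: 1.0 / n for c, n in zip(classes, counts)}
sample_weight = np.array([weight_of[c] for c in y_true])

balanced = balanced_accuracy_score(y_true, y_pred)
weighted = accuracy_score(y_true, y_pred, sample_weight=sample_weight)
print(balanced, weighted)  # both 0.625
```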
References
Brodersen, K.H.; Ong, C.S.; Stephan, K.E.; Buhmann, J.M. (2010). The balanced accuracy and its posterior distribution. Proceedings of the 20th International Conference on Pattern Recognition, 3121-24.
John D. Kelleher, Brian Mac Namee, Aoife D'Arcy (2015). Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies.
Examples
>>> from sklearn.metrics import balanced_accuracy_score
>>> y_true = [0, 1, 0, 0, 1, 0]
>>> y_pred = [0, 1, 0, 0, 0, 1]
>>> balanced_accuracy_score(y_true, y_pred)
np.float64(0.625)
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.metrics.balanced_accuracy_score.html