D^2 regression score function, fraction of Tweedie deviance explained.
Best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A model that always uses the empirical mean of y_true as constant prediction, disregarding the input features, gets a D^2 score of 0.0.
Read more in the User Guide.
Added in version 1.0.
Parameters
----------
y_true : array-like of shape (n_samples,)
    Ground truth (correct) target values.
y_pred : array-like of shape (n_samples,)
    Estimated target values.
sample_weight : array-like of shape (n_samples,), default=None
    Sample weights.
power : float, default=0
    Tweedie power parameter. Either power <= 0 or power >= 1.
    The higher the power, the less weight is given to extreme
    deviations between true and predicted targets.

Returns
-------
float
    The D^2 score.
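For reference, the unit Tweedie deviance that the score is built on depends on the power parameter; the cases below follow scikit-learn's regression-metrics user guide (normal, Poisson, and Gamma deviance as special cases):

\[
d(y, \hat{y}) =
\begin{cases}
(y - \hat{y})^2 & \text{for } p = 0 \text{ (normal)}\\
2\left(y \log\frac{y}{\hat{y}} + \hat{y} - y\right) & \text{for } p = 1 \text{ (Poisson)}\\
2\left(\log\frac{\hat{y}}{y} + \frac{y}{\hat{y}} - 1\right) & \text{for } p = 2 \text{ (Gamma)}\\
2\left(\frac{\max(y, 0)^{2-p}}{(1-p)(2-p)} - \frac{y\,\hat{y}^{1-p}}{1-p} + \frac{\hat{y}^{2-p}}{2-p}\right) & \text{otherwise}
\end{cases}
\]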
This is not a symmetric function.
Like R^2, D^2 score may be negative (it need not actually be the square of a quantity D).
This metric is not well-defined for single samples and will return a NaN value if n_samples is less than two.
Eq. (3.11) of Hastie, Trevor J., Robert Tibshirani and Martin J. Wainwright. “Statistical Learning with Sparsity: The Lasso and Generalizations.” (2015). https://hastie.su.domains/StatLearnSparsity/
>>> from sklearn.metrics import d2_tweedie_score
>>> y_true = [0.5, 1, 2.5, 7]
>>> y_pred = [1, 1, 5, 3.5]
>>> d2_tweedie_score(y_true, y_pred)
0.285...
>>> d2_tweedie_score(y_true, y_pred, power=1)
0.487...
>>> d2_tweedie_score(y_true, y_pred, power=2)
0.630...
>>> d2_tweedie_score(y_true, y_true, power=2)
1.0
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.metrics.d2_tweedie_score.html