Classifier that post-tunes the decision threshold using cross-validation.
This estimator post-tunes the decision threshold (cut-off point) used to convert posterior probability estimates (i.e. the output of predict_proba) or decision scores (i.e. the output of decision_function) into a class label. The threshold is tuned by optimizing a binary classification metric, potentially constrained by another metric.
Read more in the User Guide.
Added in version 1.5.
The classifier, fitted or not, for which we want to optimize the decision threshold used during predict.
The objective metric to be optimized. Can be one of:
- a string associated with a scoring function for binary classification;
- a scorer callable object created with make_scorer.
Methods by the classifier estimator corresponding to the decision function for which we want to find a threshold. It can be:
- "auto", to try to invoke, for each classifier, "predict_proba" or "decision_function" in that order;
- "predict_proba" or "decision_function", to use the given method. If the method is not implemented by the classifier, an error is raised.
The number of decision thresholds to use when discretizing the output of the classifier method. Pass an array-like to manually specify the thresholds to use.
Determines the cross-validation splitting strategy used to train the classifier. Possible inputs for cv are:
- None, to use the default 5-fold stratified K-fold cross-validation;
- "prefit", to bypass the cross-validation.
Refer to the User Guide for the various cross-validation strategies that can be used here.
Warning
Using cv="prefit" and passing the same dataset for fitting estimator and tuning the cut-off point is subject to undesired overfitting. You can refer to Considerations regarding model refitting and cross-validation for an example.
This option should only be used when the set used to fit estimator is different from the one used to tune the cut-off point (by calling TunedThresholdClassifierCV.fit).
Whether or not to refit the classifier on the entire training set once the decision threshold has been found. Note that forcing refit=False with a cross-validation strategy having more than a single split will raise an error. Similarly, refit=True in conjunction with cv="prefit" will raise an error.
The number of jobs to run in parallel. When cv represents a cross-validation strategy, the fitting and scoring on each data split is done in parallel. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
Controls the randomness of cross-validation when cv is a float. See Glossary.
Whether to store all scores and thresholds computed during the cross-validation process.
The fitted classifier used when predicting.
The new decision threshold.
The optimal score of the objective metric, evaluated at best_threshold_.
A dictionary containing the scores and thresholds computed during the cross-validation process. Only exists if store_cv_results=True. The keys are "thresholds" and "scores".
classes_ : ndarray of shape (n_classes,)
Class labels.
Number of features seen during fit. Only defined if the underlying estimator exposes such an attribute when fit.
feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Only defined if the underlying estimator exposes such an attribute when fit.
See also
sklearn.model_selection.FixedThresholdClassifier : Classifier that uses a constant threshold.
sklearn.calibration.CalibratedClassifierCV : Estimator that calibrates probabilities.
>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.metrics import classification_report
>>> from sklearn.model_selection import TunedThresholdClassifierCV, train_test_split
>>> X, y = make_classification(
... n_samples=1_000, weights=[0.9, 0.1], class_sep=0.8, random_state=42
... )
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, stratify=y, random_state=42
... )
>>> classifier = RandomForestClassifier(random_state=0).fit(X_train, y_train)
>>> print(classification_report(y_test, classifier.predict(X_test)))
precision recall f1-score support
0 0.94 0.99 0.96 224
1 0.80 0.46 0.59 26
accuracy 0.93 250
macro avg 0.87 0.72 0.77 250
weighted avg 0.93 0.93 0.92 250
>>> classifier_tuned = TunedThresholdClassifierCV(
... classifier, scoring="balanced_accuracy"
... ).fit(X_train, y_train)
>>> print(
... f"Cut-off point found at {classifier_tuned.best_threshold_:.3f}"
... )
Cut-off point found at 0.342
>>> print(classification_report(y_test, classifier_tuned.predict(X_test)))
precision recall f1-score support
0 0.96 0.95 0.96 224
1 0.61 0.65 0.63 26
accuracy 0.92 250
macro avg 0.78 0.80 0.79 250
weighted avg 0.92 0.92 0.92 250
Class labels.
Decision function for samples in X using the fitted estimator.
Training vectors, where n_samples is the number of samples and n_features is the number of features.
The decision function computed with the fitted estimator.
Fit the classifier.
Training data.
Target values.
Parameters to pass to the fit method of the underlying classifier.
Returns an instance of self.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRouter encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Predict the target of new samples.
The samples, as accepted by estimator.predict.
The predicted class.
Predict the logarithm of class probabilities for X using the fitted estimator.
Training vectors, where n_samples is the number of samples and n_features is the number of features.
The logarithm of the class probabilities of the input samples.
Predict class probabilities for X using the fitted estimator.
Training vectors, where n_samples is the number of samples and n_features is the number of features.
The class probabilities of the input samples.
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Test samples.
True labels for X.
Sample weights.
Mean accuracy of self.predict(X) w.r.t. y.
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
Metadata routing for sample_weight parameter in score.
The updated object.
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.model_selection.TunedThresholdClassifierCV.html