Validation Curve visualization.
It is recommended to use from_estimator to create a ValidationCurveDisplay instance. All parameters are stored as attributes.
Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.
Added in version 1.3.
Name of the parameter that has been varied.
The values of the parameter that have been evaluated.
Scores on training sets.
Scores on test set.
The name of the score used in validation_curve. It overrides the name inferred from the scoring parameter. If score_name is None, "Score" is used when negate_score is False and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" when negate_score is False, or simply removed otherwise.
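The inference rules above can be summarized in a short sketch. This is a hypothetical illustration of the documented behavior, not scikit-learn's actual implementation; the helper name infer_score_name is made up for this example.

```python
# Hypothetical sketch of the score-name inference rules described above.
# Not scikit-learn's internal code; infer_score_name is an illustrative name.
def infer_score_name(score_name, scoring, negate_score):
    """Return the label used for the y-axis of a validation curve."""
    if score_name is not None:
        return score_name  # an explicit name always wins
    if scoring is None:
        return "Negative score" if negate_score else "Score"
    # A string scorer such as "neg_mean_squared_error", or a callable:
    name = scoring if isinstance(scoring, str) else scoring.__name__
    if name.startswith("neg_"):
        rest = name[len("neg_"):]
        # negate_score=True flips the sign back, so the prefix is dropped;
        # otherwise it is spelled out as "Negative ...".
        name = rest if negate_score else "negative_" + rest
    return name.replace("_", " ").capitalize()
```

For example, scoring="neg_mean_squared_error" with negate_score=False yields "Negative mean squared error", and with negate_score=True yields "Mean squared error".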
Axes with the validation curve.
Figure containing the validation curve.
When the std_display_style is "errorbar", this is a list of matplotlib.container.ErrorbarContainer objects. If another style is used, errorbar_ is None.
When the std_display_style is "fill_between", this is a list of matplotlib.lines.Line2D objects corresponding to the mean train and test scores. If another style is used, lines_ is None.
When the std_display_style is "fill_between", this is a list of matplotlib.collections.PolyCollection objects. If another style is used, fill_between_ is None.
See also
sklearn.model_selection.validation_curve : Compute the validation curve.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import ValidationCurveDisplay, validation_curve
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=1_000, random_state=0)
>>> logistic_regression = LogisticRegression()
>>> param_name, param_range = "C", np.logspace(-8, 3, 10)
>>> train_scores, test_scores = validation_curve(
...     logistic_regression, X, y, param_name=param_name, param_range=param_range
... )
>>> display = ValidationCurveDisplay(
...     param_name=param_name, param_range=param_range,
...     train_scores=train_scores, test_scores=test_scores, score_name="Score"
... )
>>> display.plot()
<...>
>>> plt.show()
Create a validation curve display from an estimator.
Read more in the User Guide for general information about the visualization API and detailed documentation regarding the validation curve visualization.
An object of that type which is cloned for each validation.
Training data, where n_samples is the number of samples and n_features is the number of features.
Target relative to X for classification or regression; None for unsupervised learning.
Name of the parameter that will be varied.
The values of the parameter that will be evaluated.
Group labels for the samples used while splitting the dataset into train/test set. Only used in conjunction with a “Group” cv instance (e.g., GroupKFold).
Determines the cross-validation splitting strategy. Possible inputs for cv are:
- None, to use the default 5-fold cross-validation,
- int, to specify the number of folds in a (Stratified)KFold,
- a CV splitter,
- an iterable yielding (train, test) splits as arrays of indices.

For int/None inputs, if the estimator is a classifier and y is either binary or multiclass, StratifiedKFold is used. In all other cases, KFold is used. These splitters are instantiated with shuffle=False, so the splits will be the same across calls.
Refer to the User Guide for the various cross-validation strategies that can be used here.
A string (see The scoring parameter: defining model evaluation rules) or a scorer callable object / function with signature scorer(estimator, X, y) (see Callable scorers).
Number of jobs to run in parallel. Training the estimator and computing the score are parallelized over the different training and test sets. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
Number of predispatched jobs for parallel execution (default is all). This option can reduce the allocated memory. The string can be an expression like '2*n_jobs'.
Controls the verbosity: the higher, the more messages.
Value to assign to the score if an error occurs in estimator fitting. If set to ‘raise’, the error is raised. If a numeric value is given, FitFailedWarning is raised.
Parameters to pass to the fit method of the estimator.
Axes object to plot on. If None, a new figure and axes is created.
Whether or not to negate the scores obtained through validation_curve. This is particularly useful for error metrics, whose scorers are denoted by the neg_* prefix in scikit-learn.
The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the scoring parameter. If score_name is None, "Score" is used when negate_score is False and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" when negate_score is False, or simply removed otherwise.
The type of score to plot. Can be one of "test", "train", or "both".
The style used to display the score standard deviation around the mean score. If None, no representation of the standard deviation is displayed.
Additional keyword arguments passed to the plt.plot used to draw the mean score.
Additional keyword arguments passed to the plt.fill_between used to draw the score standard deviation.
Additional keyword arguments passed to the plt.errorbar used to draw the mean score and the standard deviation.
ValidationCurveDisplay
Object that stores computed values.
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import ValidationCurveDisplay
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=1_000, random_state=0)
>>> logistic_regression = LogisticRegression()
>>> param_name, param_range = "C", np.logspace(-8, 3, 10)
>>> ValidationCurveDisplay.from_estimator(
...     logistic_regression, X, y, param_name=param_name,
...     param_range=param_range,
... )
<...>
>>> plt.show()
Plot visualization.
Axes object to plot on. If None, a new figure and axes is created.
Whether or not to negate the scores obtained through validation_curve. This is particularly useful for error metrics, whose scorers are denoted by the neg_* prefix in scikit-learn.
The name of the score used to decorate the y-axis of the plot. It overrides the name inferred from the scoring parameter. If score_name is None, "Score" is used when negate_score is False and "Negative score" otherwise. If scoring is a string or a callable, the name is inferred from it: underscores are replaced by spaces and the first letter is capitalized; a neg_ prefix is replaced by "Negative" when negate_score is False, or simply removed otherwise.
The type of score to plot. Can be one of "test", "train", or "both".
The style used to display the score standard deviation around the mean score. If None, no standard deviation representation is displayed.
Additional keyword arguments passed to the plt.plot used to draw the mean score.
Additional keyword arguments passed to the plt.fill_between used to draw the score standard deviation.
Additional keyword arguments passed to the plt.errorbar used to draw the mean score and the standard deviation.
ValidationCurveDisplay
Object that stores computed values.
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.model_selection.ValidationCurveDisplay.html