Permutation importance for feature evaluation [BRE].
The estimator must already be fitted. X can be the dataset used to train the estimator or a hold-out set. The permutation importance of a feature is calculated as follows. First, a baseline metric, defined by scoring, is evaluated on the (potentially different) dataset defined by X. Next, a feature column of that dataset is permuted and the metric is evaluated again. The permutation importance is defined as the difference between the baseline metric and the metric obtained after permuting the feature column.
Read more in the User Guide.
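As a rough sketch of that procedure (not scikit-learn's internal implementation; the make_classification toy data and the single shuffle below are assumptions for illustration), one repeat for a single feature can be computed by hand:

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(n_samples=200, n_features=5, random_state=0)
>>> clf = LogisticRegression().fit(X, y)
>>> baseline = clf.score(X, y)  # baseline metric (here: accuracy)
>>> rng = np.random.default_rng(0)
>>> X_perm = X.copy()
>>> X_perm[:, 0] = rng.permutation(X_perm[:, 0])  # shuffle feature column 0
>>> importance_0 = baseline - clf.score(X_perm, y)  # drop in the metric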
Parameters
----------
estimator : object
An estimator that has already been fitted and is compatible with scorer.
X : ndarray or DataFrame of shape (n_samples, n_features)
Data on which permutation importance will be computed.
y : array-like of shape (n_samples,) or (n_samples, n_classes), or None
Targets for supervised learning, or None for unsupervised learning.
scoring : str, callable, list, tuple, or dict, default=None
Scorer to use. If scoring represents a single score, one can use:

- a single string (see the User Guide on the scoring parameter);
- a callable that returns a single value.

If scoring represents multiple scores, one can use:

- a list or tuple of unique strings;
- a callable returning a dictionary where the keys are the metric names and the values are the metric scores;
- a dictionary with metric names as keys and callables as values.
Passing multiple scores to scoring is more efficient than calling permutation_importance for each of the scores as it reuses predictions to avoid redundant computation.
If None, the estimator’s default scorer is used.
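For example, here is a minimal sketch of the multi-metric usage described above, assuming a toy classification problem; the result becomes a dict keyed by scorer name:

>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X, y = make_classification(n_samples=100, n_features=4, random_state=0)
>>> clf = LogisticRegression().fit(X, y)
>>> multi = permutation_importance(clf, X, y, n_repeats=5,
...                                scoring=['accuracy', 'neg_log_loss'],
...                                random_state=0)
>>> sorted(multi.keys())
['accuracy', 'neg_log_loss']
>>> multi['accuracy'].importances_mean.shape
(4,)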
n_repeats : int, default=5
Number of times to permute a feature.
n_jobs : int or None, default=None
Number of jobs to run in parallel. The computation is done by computing the permutation score for each column and is parallelized over the columns. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
random_state : int, RandomState instance, default=None
Pseudo-random number generator to control the permutations of each feature. Pass an int to get reproducible results across function calls. See Glossary.
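A small sketch of this reproducibility guarantee, assuming toy data: two calls with the same integer seed yield identical raw scores.

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X, y = make_classification(n_samples=100, n_features=4, random_state=0)
>>> clf = LogisticRegression().fit(X, y)
>>> r1 = permutation_importance(clf, X, y, n_repeats=3, random_state=42)
>>> r2 = permutation_importance(clf, X, y, n_repeats=3, random_state=42)
>>> bool(np.allclose(r1.importances, r2.importances))
True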
sample_weight : array-like of shape (n_samples,), default=None
Sample weights used in scoring.
Added in version 0.24.
max_samples : int or float, default=1.0
The number of samples to draw from X to compute feature importance in each repeat (without replacement).
- If int, then draw max_samples samples.
- If float, then draw max_samples * X.shape[0] samples.
- If max_samples is equal to 1.0 or X.shape[0], all samples will be used.

While using this option may provide less accurate importance estimates, it keeps the method tractable when evaluating feature importance on large datasets. In combination with n_repeats, this allows one to control the trade-off between computational speed and statistical accuracy.
Added in version 1.0.
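A brief sketch of this subsampling, with the dataset and forest sizes below chosen arbitrarily for illustration; max_samples=0.5 draws half of the rows in each repeat:

>>> from sklearn.datasets import make_classification
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.inspection import permutation_importance
>>> X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
>>> clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
>>> result = permutation_importance(clf, X, y, n_repeats=5,
...                                 max_samples=0.5, random_state=0)
>>> result.importances.shape
(10, 5)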
Returns
-------
result : Bunch or dict of such instances
Dictionary-like object, with the following attributes.
importances_mean : ndarray of shape (n_features,)
Mean of feature importance over n_repeats.

importances_std : ndarray of shape (n_features,)
Standard deviation over n_repeats.

importances : ndarray of shape (n_features, n_repeats)
Raw permutation importance scores.
If there are multiple scoring metrics in the scoring parameter, result is a dict with scorer names as keys (e.g. 'roc_auc') and Bunch objects like above as values.
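As a quick sanity check of how these attributes relate (toy data assumed), importances_mean is the per-feature mean of importances over the repeats axis:

>>> import numpy as np
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X, y = make_classification(n_samples=100, n_features=4, random_state=0)
>>> clf = LogisticRegression().fit(X, y)
>>> r = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
>>> r.importances.shape  # (n_features, n_repeats)
(4, 10)
>>> bool(np.allclose(r.importances_mean, r.importances.mean(axis=1)))
True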
Examples
--------
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.inspection import permutation_importance
>>> X = [[1, 9, 9], [1, 9, 9], [1, 9, 9],
...      [0, 9, 9], [0, 9, 9], [0, 9, 9]]
>>> y = [1, 1, 1, 0, 0, 0]
>>> clf = LogisticRegression().fit(X, y)
>>> result = permutation_importance(clf, X, y, n_repeats=10,
...                                 random_state=0)
>>> result.importances_mean
array([0.4666..., 0.       , 0.       ])
>>> result.importances_std
array([0.2211..., 0.       , 0.       ])
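In this example only the first feature is informative (it coincides with y), while the second and third columns are constant, so permuting them cannot change the score and their importance is zero.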