class sklearn.feature_selection.RFE(estimator, n_features_to_select=None, step=1, verbose=0)
Feature ranking with recursive feature elimination.
Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.
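The pruning loop described above can be sketched in a few lines. This is an illustrative re-implementation only, not scikit-learn's internal code; the helper name rfe_sketch is invented for this example.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.svm import SVR

def rfe_sketch(estimator, X, y, n_features_to_select, step=1):
    """Illustrative RFE loop: fit, rank features by |coef_|, drop the weakest."""
    remaining = list(range(X.shape[1]))  # indices of surviving features
    while len(remaining) > n_features_to_select:
        estimator.fit(X[:, remaining], y)
        importances = np.abs(estimator.coef_).ravel()
        # never drop below the requested number of features
        n_drop = min(step, len(remaining) - n_features_to_select)
        worst = set(np.argsort(importances)[:n_drop])
        remaining = [f for i, f in enumerate(remaining) if i not in worst]
    return sorted(remaining)

X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
selected = rfe_sketch(SVR(kernel="linear"), X, y, n_features_to_select=5)
print(selected)
```

With a linear-kernel SVR the importances come from coef_; estimators exposing feature_importances_ instead would be ranked by that attribute, as the paragraph above describes.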
Read more in the User Guide.
Parameters: 


Attributes: 

See also
RFECV
[1]  Guyon, I., Weston, J., Barnhill, S., & Vapnik, V., “Gene selection for cancer classification using support vector machines”, Mach. Learn., 46(1-3), 389–422, 2002.
The following example shows how to retrieve the 5 most informative features in the Friedman #1 dataset.
>>> from sklearn.datasets import make_friedman1
>>> from sklearn.feature_selection import RFE
>>> from sklearn.svm import SVR
>>> X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
>>> estimator = SVR(kernel="linear")
>>> selector = RFE(estimator, 5, step=1)
>>> selector = selector.fit(X, y)
>>> selector.support_
array([ True,  True,  True,  True,  True, False, False, False, False, False])
>>> selector.ranking_
array([1, 1, 1, 1, 1, 6, 4, 3, 2, 5])
decision_function (X)  Compute the decision function of X . 
fit (X, y)  Fit the RFE model and then the underlying estimator on the selected features. 
fit_transform (X[, y])  Fit to data, then transform it. 
get_params ([deep])  Get parameters for this estimator. 
get_support ([indices])  Get a mask, or integer index, of the features selected. 
inverse_transform (X)  Reverse the transformation operation. 
predict (X)  Reduce X to the selected features and then predict using the underlying estimator. 
predict_log_proba (X)  Predict class log-probabilities for X. 
predict_proba (X)  Predict class probabilities for X. 
score (X, y)  Reduce X to the selected features and then return the score of the underlying estimator. 
set_params (**params)  Set the parameters of this estimator. 
transform (X)  Reduce X to the selected features. 
__init__(estimator, n_features_to_select=None, step=1, verbose=0)
decision_function(X)
Compute the decision function of X.
Parameters: 


Returns: 

fit(X, y)
Fit the RFE model and then the underlying estimator on the selected features.
Parameters: 


fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters: 


Returns: 

get_params(deep=True)
Get parameters for this estimator.
Parameters: 


Returns: 

get_support(indices=False)
Get a mask, or integer index, of the features selected.
Parameters: 


Returns: 

inverse_transform(X)
Reverse the transformation operation.
Parameters: 


Returns: 

predict(X)
Reduce X to the selected features and then predict using the underlying estimator.
Parameters: 


Returns: 

predict_log_proba(X)
Predict class log-probabilities for X.
Parameters: 


Returns: 

predict_proba(X)
Predict class probabilities for X.
Parameters: 


Returns: 

score(X, y)
Reduce X to the selected features and then return the score of the underlying estimator.
Parameters: 


set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>
so that it’s possible to update each component of a nested object.
Returns: 
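As an illustration of the <component>__<parameter> convention described above, the wrapped estimator's own parameters can be updated through the RFE object. The particular parameter values here are arbitrary.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

selector = RFE(SVR(kernel="linear"), n_features_to_select=5)

# Update a parameter of RFE itself and a parameter of the nested
# estimator in a single call, using the estimator__<parameter> prefix.
selector.set_params(step=2, estimator__C=10.0)

print(selector.step)         # updated on the RFE wrapper
print(selector.estimator.C)  # forwarded to the inner SVR
```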


transform(X)
Reduce X to the selected features.
Parameters: 


Returns: 
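After fitting, transform keeps only the selected columns of X, so the output has n_features_to_select columns. A minimal sketch using the same Friedman #1 data as the example above:

```python
from sklearn.datasets import make_friedman1
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

X, y = make_friedman1(n_samples=50, n_features=10, random_state=0)
selector = RFE(SVR(kernel="linear"), n_features_to_select=5, step=1).fit(X, y)

# transform drops the eliminated features, keeping the 5 selected columns
X_reduced = selector.transform(X)
print(X.shape, X_reduced.shape)
```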

© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html