class sklearn.ensemble.VotingClassifier(estimators, voting='hard', weights=None, n_jobs=None, flatten_transform=None)
Soft Voting/Majority Rule classifier for unfitted estimators.
New in version 0.17.
Read more in the User Guide.
Parameters:

estimators : list of (string, estimator) tuples
    Invoking the fit method on the VotingClassifier will fit clones of those original estimators that will be stored in the class attribute self.estimators_. An estimator can be set to None using set_params.

voting : str, {'hard', 'soft'} (default='hard')
    If 'hard', uses predicted class labels for majority rule voting. If 'soft', predicts the class label based on the argmax of the sums of the predicted probabilities, which is recommended for an ensemble of well-calibrated classifiers.

weights : array-like, shape (n_classifiers,), optional (default=None)
    Sequence of weights (float or int) to weight the occurrences of predicted class labels (hard voting) or class probabilities before averaging (soft voting). Uses uniform weights if None.

n_jobs : int or None, optional (default=None)
    The number of jobs to run in parallel for fit.

flatten_transform : bool, optional (default=None)
    Affects the shape of the transform output only when voting='soft'. If flatten_transform=True, transform returns a matrix with shape (n_samples, n_classifiers * n_classes). If flatten_transform=False, it returns (n_classifiers, n_samples, n_classes).

Attributes:

estimators_ : list of classifiers
    The collection of fitted sub-estimators defined in estimators that are not None.

named_estimators_ : Bunch object, a dictionary with attribute access
    Attribute to access any fitted sub-estimator by name.

classes_ : array-like, shape (n_predictions,)
    The class labels.
Examples

>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.ensemble import RandomForestClassifier, VotingClassifier
>>> clf1 = LogisticRegression(solver='lbfgs', multi_class='multinomial',
...                           random_state=1)
>>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
>>> clf3 = GaussianNB()
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> eclf1 = VotingClassifier(estimators=[
...         ('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
>>> eclf1 = eclf1.fit(X, y)
>>> print(eclf1.predict(X))
[1 1 1 2 2 2]
>>> np.array_equal(eclf1.named_estimators_.lr.predict(X),
...                eclf1.named_estimators_['lr'].predict(X))
True
>>> eclf2 = VotingClassifier(estimators=[
...         ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...         voting='soft')
>>> eclf2 = eclf2.fit(X, y)
>>> print(eclf2.predict(X))
[1 1 1 2 2 2]
>>> eclf3 = VotingClassifier(estimators=[
...         ('lr', clf1), ('rf', clf2), ('gnb', clf3)],
...         voting='soft', weights=[2, 1, 1],
...         flatten_transform=True)
>>> eclf3 = eclf3.fit(X, y)
>>> print(eclf3.predict(X))
[1 1 1 2 2 2]
>>> print(eclf3.transform(X).shape)
(6, 6)
Methods

Method | Description
---|---
fit(X, y[, sample_weight]) | Fit the estimators.
fit_transform(X[, y]) | Fit to data, then transform it.
get_params([deep]) | Get the parameters of the VotingClassifier.
predict(X) | Predict class labels for X.
score(X, y[, sample_weight]) | Return the mean accuracy on the given test data and labels.
set_params(**params) | Set the parameters for the voting classifier.
transform(X) | Return class labels or probabilities for X for each estimator.
__init__(estimators, voting='hard', weights=None, n_jobs=None, flatten_transform=None)
fit(X, y, sample_weight=None)
Fit the estimators.

Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape (n_samples,)
    Target values.

sample_weight : array-like, shape (n_samples,) or None
    Sample weights. If None, then samples are equally weighted. Note that this is supported only if all underlying estimators support sample weights.

Returns:

self : object
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:

X : numpy array of shape [n_samples, n_features]
    Training set.

y : numpy array of shape [n_samples], optional
    Target values.

Returns:

X_new : numpy array of shape [n_samples, n_features_new]
    Transformed array.
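As a minimal sketch (not from the original page, using toy data that mirrors the examples above), fit_transform(X, y) is a convenience equivalent of calling fit(X, y) followed by transform(X):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
y = np.array([1, 1, 1, 2, 2, 2])

eclf = VotingClassifier(
    estimators=[('lr', LogisticRegression(solver='lbfgs')), ('gnb', GaussianNB())],
    voting='hard')

Xt = eclf.fit_transform(X, y)      # fit, then transform, in one call
Xt2 = eclf.fit(X, y).transform(X)  # the equivalent two-step form
print(np.array_equal(Xt, Xt2))     # True for these deterministic estimators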
get_params(deep=True)
Get the parameters of the VotingClassifier.

Parameters:

deep : bool, optional (default=True)
    Setting it to True gets the various classifiers and the parameters of the classifiers as well.
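As a hedged illustration (reusing the fitted eclf1 ensemble from the examples above), deep=True exposes each sub-estimator's own parameters under keys namespaced by the estimator's name:

params = eclf1.get_params(deep=True)
print('lr__C' in params)  # True: LogisticRegression's C, prefixed with 'lr__'
print(params['voting'])   # 'hard': the VotingClassifier's own parameter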
predict(X)
Predict class labels for X.

Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    The input samples.

Returns:

maj : array-like, shape (n_samples,)
    Predicted class labels.
predict_proba(X)
Compute probabilities of possible outcomes for samples in X. Available only when voting='soft'.

Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    The input samples.

Returns:

avg : array-like, shape (n_samples, n_classes)
    Weighted average probability for each class per sample.
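As a hedged sketch (assuming the soft-voting ensemble eclf3 fitted with weights=[2, 1, 1] in the examples above), the returned probabilities are the weighted average of each fitted sub-estimator's predict_proba:

import numpy as np

avg = eclf3.predict_proba(X)  # shape (n_samples, n_classes)

# Recomputed by hand from the fitted sub-estimators
per_clf = np.asarray([clf.predict_proba(X) for clf in eclf3.estimators_])
manual = np.average(per_clf, axis=0, weights=[2, 1, 1])
print(np.allclose(avg, manual))  # True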
score(X, y, sample_weight=None)
Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.

Parameters:

X : array-like, shape = (n_samples, n_features)
    Test samples.

y : array-like, shape = (n_samples) or (n_samples, n_outputs)
    True labels for X.

sample_weight : array-like, shape = [n_samples], optional
    Sample weights.

Returns:

score : float
    Mean accuracy of self.predict(X) wrt. y.
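As a short illustration (reusing the fitted eclf1 ensemble and data from the examples above), score is the plain mean accuracy of the ensemble's predictions:

from sklearn.metrics import accuracy_score

# score(X, y) equals accuracy_score on self.predict(X)
print(eclf1.score(X, y) == accuracy_score(y, eclf1.predict(X)))  # True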
set_params(**params)
Set the parameters for the voting classifier.

Valid parameter keys can be listed with get_params().

Parameters:

**params : keyword arguments
    Specific parameters using e.g. set_params(parameter_name=new_value). In addition to setting the parameters of the VotingClassifier, the individual classifiers of the VotingClassifier can also be set, or can be removed by setting them to None.
# In this example, the RandomForestClassifier is removed
clf1 = LogisticRegression()
clf2 = RandomForestClassifier()
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2)])
eclf.set_params(rf=None)
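Because the dropped estimator is replaced by None rather than removed from the estimators list, a subsequent call to fit simply skips it and trains only the remaining estimators.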
transform(X)
Return class labels or probabilities for X for each estimator.

Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features)
    Training vectors, where n_samples is the number of samples and n_features is the number of features.

Returns:

If voting='soft' and flatten_transform=True:
    array-like, shape (n_samples, n_classifiers * n_classes). Class probabilities calculated by each classifier.
If voting='soft' and flatten_transform=False:
    array-like, shape (n_classifiers, n_samples, n_classes). Class probabilities calculated by each classifier.
If voting='hard':
    array-like, shape (n_samples, n_classifiers). Class labels predicted by each classifier.
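A minimal sketch of the resulting shapes, reusing the ensembles fitted in the examples above (6 samples, 3 classifiers, 2 classes):

# eclf1 uses voting='hard': one predicted-label column per classifier
print(eclf1.transform(X).shape)  # (6, 3)

# eclf3 uses voting='soft' with flatten_transform=True:
# probabilities flattened to (n_samples, n_classifiers * n_classes)
print(eclf3.transform(X).shape)  # (6, 6)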