class sklearn.model_selection.LeaveOneOut
Leave-One-Out cross-validator
Provides train/test indices to split data into train/test sets. Each sample is used once as a test set (singleton) while the remaining samples form the training set.
Note: LeaveOneOut() is equivalent to KFold(n_splits=n) and LeavePOut(p=1), where n is the number of samples.
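As a quick check of this equivalence (an illustrative snippet, not part of the original reference; the 4x2 array is arbitrary), the three cross-validators yield identical splits:

>>> import numpy as np
>>> from sklearn.model_selection import KFold, LeaveOneOut, LeavePOut
>>> X = np.arange(8).reshape(4, 2)
>>> loo = list(LeaveOneOut().split(X))
>>> kf = list(KFold(n_splits=len(X)).split(X))
>>> lpo = list(LeavePOut(p=1).split(X))
>>> all(np.array_equal(tr1, tr2) and np.array_equal(te1, te2)
...     for (tr1, te1), (tr2, te2) in zip(loo, kf))
True
>>> all(np.array_equal(tr1, tr2) and np.array_equal(te1, te2)
...     for (tr1, te1), (tr2, te2) in zip(loo, lpo))
True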
Due to the high number of test sets (which is the same as the number of samples) this cross-validation method can be very costly. For large datasets one should favor KFold, ShuffleSplit or StratifiedKFold.
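To make the cost difference concrete, an illustrative snippet (the 1000-sample array is an arbitrary stand-in for a larger dataset): LeaveOneOut produces one split per sample, whereas a 5-fold KFold always produces five.

>>> import numpy as np
>>> from sklearn.model_selection import KFold, LeaveOneOut
>>> X = np.zeros((1000, 2))
>>> KFold(n_splits=5).get_n_splits(X)
5
>>> LeaveOneOut().get_n_splits(X)
1000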
Read more in the User Guide.
See also
LeaveOneGroupOut
GroupKFold
Examples

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
Methods

get_n_splits(X[, y, groups]) | Returns the number of splitting iterations in the cross-validator.
split(X[, y, groups]) | Generate indices to split data into training and test set.
__init__()
    Initialize self. See help(type(self)) for accurate signature.
get_n_splits(X, y=None, groups=None)
Returns the number of splitting iterations in the cross-validator.

Parameters:
    X : array-like, shape (n_samples, n_features)
        Training data, where n_samples is the number of samples and n_features is the number of features.
    y : object
        Always ignored, exists for compatibility.
    groups : object
        Always ignored, exists for compatibility.

Returns:
    n_splits : int
        Returns the number of splitting iterations in the cross-validator.
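Because the number of splits equals the number of samples, X itself must be supplied; an illustrative snippet (the 5x2 array of ones is arbitrary):

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> LeaveOneOut().get_n_splits(np.ones((5, 2)))
5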
split(X, y=None, groups=None)
Generate indices to split data into training and test set.

Parameters:
    X : array-like, shape (n_samples, n_features)
        Training data, where n_samples is the number of samples and n_features is the number of features.
    y : array-like, of length n_samples
        The target variable for supervised learning problems.
    groups : array-like, with shape (n_samples,), optional
        Group labels for the samples used while splitting the dataset into train/test set.

Yields:
    train : ndarray
        The training set indices for that split.
    test : ndarray
        The testing set indices for that split.
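In practice, split is often consumed indirectly by passing the cross-validator as cv to helpers such as cross_val_score; an illustrative snippet, assuming the iris dataset and a LogisticRegression estimator (neither appears in this reference):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import LeaveOneOut, cross_val_score
>>> X, y = load_iris(return_X_y=True)
>>> scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
>>> len(scores)  # one accuracy score per left-out sample
150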
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.
http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html