Provides train/test indices to split data into train/test sets. Each sample is used once as a test set (singleton) while the remaining samples form the training set.
LeaveOneOut() is equivalent to KFold(n_splits=n) and LeavePOut(p=1) where n is the number of samples.
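The equivalence with KFold(n_splits=n) can be checked directly: both yield one singleton test set per sample, in the same order. A minimal sketch (the 4-sample array here is illustrative, not from the docstring):

```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut

X = np.arange(8).reshape(4, 2)  # 4 samples, 2 features

# Materialize both sets of splits as plain lists for comparison.
loo_splits = [(list(tr), list(te)) for tr, te in LeaveOneOut().split(X)]
kf_splits = [(list(tr), list(te)) for tr, te in KFold(n_splits=len(X)).split(X)]

print(loo_splits == kf_splits)  # → True: identical train/test indices
```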
Due to the high number of test sets (which is the same as the number of samples) this cross-validation method can be very costly. For large datasets one should favor KFold, ShuffleSplit or StratifiedKFold.
Read more in the User Guide.
>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]
get_n_splits(X, y=None, groups=None)
Returns the number of splitting iterations in the cross-validator.
split(X, y=None, groups=None)
Generate indices to split data into training and test set.
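In practice the splitter is usually passed to an evaluation helper rather than iterated by hand. A hedged sketch using cross_val_score with a LeaveOneOut splitter (the toy regression data and the neg_mean_absolute_error scorer are illustrative choices; the default R² scorer is undefined on single-sample test sets):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Toy perfectly-linear data: y = 2 * x.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])

# One fit/score per sample: 4 samples -> 4 scores.
scores = cross_val_score(
    LinearRegression(), X, y,
    cv=LeaveOneOut(),
    scoring="neg_mean_absolute_error",
)
print(len(scores))  # → 4
```

Each score is the (negated) absolute error on the single held-out sample, which is why LeaveOneOut becomes expensive as the number of samples grows.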
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License.