Isomap Embedding.
Non-linear dimensionality reduction through Isometric Mapping.
Read more in the User Guide.
Number of neighbors to consider for each point. If n_neighbors is an int, then radius must be None.
Limiting distance of neighbors to return. If radius is a float, then n_neighbors must be set to None.
Added in version 1.1.
Number of coordinates for the manifold.
‘auto’ : Attempt to choose the most efficient solver for the given problem.
‘arpack’ : Use Arnoldi decomposition to find the eigenvalues and eigenvectors.
‘dense’ : Use a direct solver (i.e. LAPACK) for the eigenvalue decomposition.
Convergence tolerance passed to arpack or lobpcg. Not used if eigen_solver == ‘dense’.
Maximum number of iterations for the arpack solver. Not used if eigen_solver == ‘dense’.
Method to use in finding shortest path.
‘auto’ : attempt to choose the best algorithm automatically.
‘FW’ : Floyd-Warshall algorithm.
‘D’ : Dijkstra’s algorithm.
Algorithm to use for nearest neighbors search, passed to neighbors.NearestNeighbors instance.
The number of parallel jobs to run. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
The metric to use when calculating distance between instances in a feature array. If metric is a string or callable, it must be one of the options allowed by sklearn.metrics.pairwise_distances for its metric parameter. If metric is “precomputed”, X is assumed to be a distance matrix and must be square. X may be a sparse graph, in which case only “nonzero” elements may be considered neighbors.
Added in version 0.22.
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
Added in version 0.22.
Additional keyword arguments for the metric function.
Added in version 0.22.
Stores the embedding vectors.
KernelPCA object used to implement the embedding.
Stores the nearest neighbors instance, including BallTree or KDTree if applicable.
Stores the geodesic distance matrix of training data.
Number of features seen during fit.
Added in version 0.24.
Names of features seen during fit, as an ndarray of shape (n_features_in_,). Defined only when X has feature names that are all strings.
Added in version 1.0.
See also
sklearn.decomposition.PCA : Principal component analysis, a linear dimensionality reduction method.
sklearn.decomposition.KernelPCA : Non-linear dimensionality reduction using kernels and PCA.
MDS : Manifold learning using multidimensional scaling.
TSNE : T-distributed Stochastic Neighbor Embedding.
LocallyLinearEmbedding : Manifold learning using Locally Linear Embedding.
SpectralEmbedding : Spectral embedding for non-linear dimensionality reduction.
Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 290 (5500): 2319-2323, 2000.
>>> from sklearn.datasets import load_digits
>>> from sklearn.manifold import Isomap
>>> X, _ = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> embedding = Isomap(n_components=2)
>>> X_transformed = embedding.fit_transform(X[:100])
>>> X_transformed.shape
(100, 2)
Compute the embedding vectors for data X.
Sample data, shape = (n_samples, n_features), in the form of a numpy array, sparse matrix, precomputed tree, or NearestNeighbors object.
Not used, present for API consistency by convention.
Returns a fitted instance of self.
Fit the model from data in X and transform X.
Training vector, where n_samples is the number of samples and n_features is the number of features.
Not used, present for API consistency by convention.
X transformed in the new space.
Get output feature names for transformation.
The feature names out will be prefixed by the lowercased class name. For example, if the transformer outputs 3 features, then the feature names out are: ["class_name0", "class_name1", "class_name2"].
Only used to validate feature names with the names seen in fit.
Transformed feature names.
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
A MetadataRequest encapsulating routing information.
Get parameters for this estimator.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Compute the reconstruction error for the embedding.
Reconstruction error.
The cost function of an isomap embedding is
E = frobenius_norm[K(D) - K(D_fit)] / n_samples
Where D is the matrix of distances for the input data X, D_fit is the matrix of distances for the output embedding X_fit, and K is the isomap kernel:
K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples)
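The cost function above can be evaluated by hand from the fitted attributes. The following sketch (reusing the digits data from the example above; variable names are illustrative) recomputes E with an explicit centering matrix and compares it to reconstruction_error():

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.metrics import pairwise_distances

X, _ = load_digits(return_X_y=True)
iso = Isomap(n_components=2).fit(X[:100])

n = iso.dist_matrix_.shape[0]
H = np.eye(n) - np.ones((n, n)) / n        # centering matrix (I - 1/n_samples)
K = lambda D: -0.5 * H @ (D ** 2) @ H      # the isomap kernel K(D)

D = iso.dist_matrix_                       # geodesic distances of the training data
D_fit = pairwise_distances(iso.embedding_) # distances in the embedding space

# E = frobenius_norm[K(D) - K(D_fit)] / n_samples
E = np.linalg.norm(K(D) - K(D_fit)) / n
print(E, iso.reconstruction_error())       # the two values agree
```

The agreement holds because K(D_fit) equals the rank-n_components eigenexpansion of the centered geodesic kernel, which is exactly what the embedding retains.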
Set output container.
See Introducing the set_output API for an example on how to use the API.
Configure output of transform and fit_transform.
"default": Default output format of a transformer"pandas": DataFrame output"polars": Polars outputNone: Transform configuration is unchangedAdded in version 1.4: "polars" option was added.
Estimator instance.
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Estimator parameters.
Estimator instance.
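As an illustrative sketch of the nested &lt;component&gt;__&lt;parameter&gt; syntax (the pipeline and step names here are assumptions for the example):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import Isomap

pipe = Pipeline([("scale", StandardScaler()), ("iso", Isomap())])

# Parameters of nested estimators use the <component>__<parameter> form
pipe.set_params(iso__n_neighbors=10, iso__n_components=3)

print(pipe.get_params()["iso__n_neighbors"])  # 10
```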
Transform X.
This is implemented by linking the points X into the graph of geodesic distances of the training data. First the n_neighbors nearest neighbors of X are found in the training data, and from these the shortest geodesic distances from each point in X to each point in the training data are computed in order to construct the kernel. The embedding of X is the projection of this kernel onto the embedding vectors of the training set.
If neighbors_algorithm=’precomputed’, X is assumed to be a distance matrix or a sparse graph of shape (n_queries, n_samples_fit).
X transformed in the new space.
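A minimal sketch of out-of-sample transformation (reusing the digits data from the example above): the model is fitted on a subset, and held-out points are then linked into the training graph and projected.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)

# Fit on the first 100 samples only
embedding = Isomap(n_components=2).fit(X[:100])

# Held-out points are linked into the geodesic graph of the training
# data and projected onto the training set's embedding vectors
X_new = embedding.transform(X[100:110])
print(X_new.shape)  # (10, 2)
```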
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.manifold.Isomap.html