The Exponentiation kernel takes one base kernel and a scalar parameter \(p\) and combines them via \(k_{exp}(X, Y) = k(X, Y)^p\)
Note that the __pow__ magic method is overridden, so Exponentiation(RBF(), 2) is equivalent to writing RBF() ** 2.
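The equivalence of the two constructions can be checked directly, since scikit-learn kernels compare equal when their type and parameters match (a minimal sketch, assuming scikit-learn is installed):

```python
from sklearn.gaussian_process.kernels import RBF, Exponentiation

# Because Kernel.__pow__ is overridden, these two constructions
# build the same composite kernel.
k1 = Exponentiation(RBF(length_scale=1.0), exponent=2)
k2 = RBF(length_scale=1.0) ** 2

print(k1 == k2)  # kernels with the same type and parameters compare equal
```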
Read more in the User Guide.
Added in version 0.18.
kernel : Kernel
The base kernel
exponent : float
The exponent for the base kernel
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import (RationalQuadratic,
...            Exponentiation)
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = Exponentiation(RationalQuadratic(), exponent=2)
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=5,
...         random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.419...
>>> gpr.predict(X[:1,:], return_std=True)
(array([635.5...]), array([0.559...]))
Return the kernel k(X, Y) and optionally its gradient.
Left argument of the returned kernel k(X, Y)
Right argument of the returned kernel k(X, Y). If None, k(X, X) is evaluated instead.
Determines whether the gradient with respect to the log of the kernel hyperparameter is computed. Only supported when Y is None.
Kernel k(X, Y)
The gradient of the kernel k(X, X) with respect to the log of the hyperparameter of the kernel. Only returned when eval_gradient is True.
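Calling the kernel with eval_gradient=True returns both the Gram matrix and its gradient with respect to the log-hyperparameters, one trailing slice per non-fixed hyperparameter (a sketch; the specific data here is illustrative):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Exponentiation

kernel = Exponentiation(RBF(length_scale=1.0), exponent=2)
X = np.random.RandomState(0).rand(5, 3)

# eval_gradient=True is only supported when Y is None (i.e. k(X, X))
K, K_gradient = kernel(X, eval_gradient=True)

print(K.shape)           # (5, 5): the Gram matrix k(X, X)
print(K_gradient.shape)  # (5, 5, 1): one slice per non-fixed hyperparameter
```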
Returns the log-transformed bounds on theta.
The log-transformed bounds on the kernel’s hyperparameters theta
Returns a clone of self with given hyperparameters theta.
The hyperparameters
Returns the diagonal of the kernel k(X, X).
The result of this method is identical to np.diag(self(X)); however, it can be evaluated more efficiently since only the diagonal is evaluated.
Argument to the kernel.
Diagonal of kernel k(X, X)
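The equivalence with np.diag(self(X)) can be verified directly; diag avoids building the full Gram matrix (a sketch with illustrative data):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Exponentiation

kernel = Exponentiation(RBF(), 2)
X = np.random.RandomState(0).rand(10, 2)

# diag(X) gives the same values as np.diag(kernel(X)) but never
# computes the off-diagonal entries of the 10 x 10 Gram matrix.
d = kernel.diag(X)
print(np.allclose(d, np.diag(kernel(X))))  # True
```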
Get parameters of this kernel.
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Parameter names mapped to their values.
Returns a list of all hyperparameter specifications.
Returns whether the kernel is stationary.
Returns the number of non-fixed hyperparameters of the kernel.
Returns whether the kernel is defined on discrete structures.
Set the parameters of this kernel.
The method works on simple kernels as well as on nested kernels. The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
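For Exponentiation, the base kernel's parameters are reachable through the kernel__ prefix (a sketch, assuming scikit-learn is installed):

```python
from sklearn.gaussian_process.kernels import RBF, Exponentiation

kernel = Exponentiation(RBF(length_scale=1.0), exponent=2)

# Top-level parameters are set directly; nested parameters of the
# base kernel use the <component>__<parameter> syntax.
kernel.set_params(exponent=3, kernel__length_scale=10.0)

print(kernel.exponent)                                # 3
print(kernel.get_params()["kernel__length_scale"])    # 10.0
```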
Returns the (flattened, log-transformed) non-fixed hyperparameters.
Note that theta typically holds the log-transformed values of the kernel's hyperparameters, as this representation of the search space is more amenable to hyperparameter search: hyperparameters such as length-scales naturally live on a log-scale.
The non-fixed, log-transformed hyperparameters of the kernel
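The log-transform can be seen by comparing theta against the raw hyperparameter values (a sketch; Exponentiation exposes the theta of its base kernel):

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, Exponentiation

kernel = Exponentiation(RBF(length_scale=2.0), exponent=2)

# theta holds log(length_scale); the exponent is a plain parameter,
# not a tunable hyperparameter, so it does not appear in theta.
print(kernel.theta)   # equals [log(2.0)]
print(kernel.bounds)  # log-transformed bounds, one row per entry of theta
```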
© 2007–2025 The scikit-learn developers
Licensed under the 3-clause BSD License.
https://scikit-learn.org/1.6/modules/generated/sklearn.gaussian_process.kernels.Exponentiation.html