sklearn.preprocessing.power_transform(X, method='box-cox', standardize=True, copy=True)
Apply a power transform feature-wise to make data more Gaussian-like.
Power transforms are a family of parametric, monotonic transformations that are applied to make data more Gaussian-like. This is useful for modeling issues related to heteroscedasticity (non-constant variance), or other situations where normality is desired.
Currently, power_transform() supports the Box-Cox transform. Box-Cox requires input data to be strictly positive. The optimal parameter for stabilizing variance and minimizing skewness is estimated through maximum likelihood.
By default, zero-mean, unit-variance normalization is applied to the transformed data.
Read more in the User Guide.
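As a rough illustration of the two points above (the strict-positivity requirement and the maximum-likelihood estimate of the Box-Cox parameter), the following sketch compares power_transform with scipy.stats.boxcox, which returns the fitted lambda directly. The array values and the exact error-message wording are assumptions for illustration, not part of this documentation.

import numpy as np
from scipy import stats
from sklearn.preprocessing import power_transform

X = np.array([[1.0], [3.0], [4.0]])  # a single strictly positive feature

# Box-Cox cannot handle non-positive values; this call should raise a ValueError.
try:
    power_transform(np.array([[0.0], [1.0], [2.0]]), method='box-cox')
except ValueError as exc:
    print("non-positive input rejected:", exc)

# scipy.stats.boxcox exposes the lambda fitted by maximum likelihood,
# the same kind of estimate power_transform computes per feature internally.
transformed, fitted_lambda = stats.boxcox(X.ravel())
print("MLE lambda:", fitted_lambda)

# power_transform additionally applies zero-mean, unit-variance scaling by default.
print(power_transform(X, method='box-cox'))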
Parameters:
X : array-like, shape (n_samples, n_features)
    The data to be transformed using a power transformation.
method : str, (default='box-cox')
    The power transform method. Currently, 'box-cox' (Box-Cox transform) is the only option available.
standardize : boolean, default=True
    Set to True to apply zero-mean, unit-variance normalization to the transformed output.
copy : boolean, optional, default=True
    Set to False to perform in-place computation during transformation.

Returns:
X_trans : array-like, shape (n_samples, n_features)
    The transformed data.
See also

PowerTransformer : Equivalent transformation with the Transformer API (e.g. as part of a preprocessing sklearn.pipeline.Pipeline); a usage sketch is given after the Notes below.
quantile_transform : Maps data to a standard normal distribution with the parameter output_distribution='normal'.

Notes

NaNs are treated as missing values: disregarded to compute the statistics, and maintained during the data transformation.
For a comparison of the different scalers, transformers, and normalizers, see examples/preprocessing/plot_all_scaling.py.
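The sketch below, referenced from the See also entries, shows how the PowerTransformer class (the Transformer-API counterpart of this function) can be chained in a sklearn.pipeline.Pipeline, and how quantile_transform offers a non-parametric alternative. The estimator choice (Ridge), the synthetic data, and the n_quantiles setting are illustrative assumptions only.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PowerTransformer, quantile_transform
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.lognormal(size=(100, 3))   # strictly positive, skewed features
y = rng.normal(size=100)

# Transformer-API equivalent of power_transform, used as a preprocessing step.
pipe = Pipeline([
    ('power', PowerTransformer(method='box-cox', standardize=True)),
    ('model', Ridge()),
])
pipe.fit(X, y)
print(pipe.score(X, y))

# Non-parametric alternative mapping to an approximately normal distribution.
Xq = quantile_transform(X, n_quantiles=100, output_distribution='normal', copy=True)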
References

G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal of the Royal Statistical Society B, 26, 211-252 (1964).
Examples

>>> import numpy as np
>>> from sklearn.preprocessing import power_transform
>>> data = [[1, 2], [3, 2], [4, 5]]
>>> print(power_transform(data))
[[-1.332... -0.707...]
 [ 0.256... -0.707...]
 [ 1.076...  1.414...]]
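As a follow-up to the example above, the snippet below contrasts standardize=False with the default: without standardization the raw Box-Cox output is returned (lambda still fitted by maximum likelihood), so the columns are generally not zero-mean, unit-variance. The exact numbers are not reproduced here because they depend on the fitted lambdas.

from sklearn.preprocessing import power_transform

data = [[1, 2], [3, 2], [4, 5]]
raw = power_transform(data, method='box-cox', standardize=False)
scaled = power_transform(data, method='box-cox', standardize=True)
print(raw.mean(axis=0), raw.std(axis=0))        # generally not 0 / 1
print(scaled.mean(axis=0), scaled.std(axis=0))  # approximately 0 / 1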