Module: restoration

Image restoration module.

skimage.restoration.wiener(image, psf, balance) Wiener-Hunt deconvolution
skimage.restoration.unsupervised_wiener(…) Unsupervised Wiener-Hunt deconvolution.
skimage.restoration.richardson_lucy(image, psf) Richardson-Lucy deconvolution.
skimage.restoration.unwrap_phase(image[, …]) Recover the original from a wrapped phase image.
skimage.restoration.denoise_tv_bregman(…) Perform total-variation denoising using split-Bregman optimization.
skimage.restoration.denoise_tv_chambolle(image) Perform total-variation denoising on n-dimensional images.
skimage.restoration.denoise_bilateral(image) Denoise image using bilateral filter.
skimage.restoration.denoise_wavelet(image[, …]) Perform wavelet denoising on an image.
skimage.restoration.denoise_nl_means(image) Perform non-local means denoising on 2-D or 3-D grayscale images, and 2-D RGB images.
skimage.restoration.estimate_sigma(image[, …]) Robust wavelet-based estimator of the (Gaussian) noise standard deviation.
skimage.restoration.inpaint_biharmonic(…) Inpaint masked points in image with biharmonic equations.
skimage.restoration.cycle_spin(x, func, …) Cycle spinning (repeatedly apply func to shifted versions of x).

wiener

skimage.restoration.wiener(image, psf, balance, reg=None, is_real=True, clip=True) [source]

Wiener-Hunt deconvolution

Return the deconvolution with a Wiener-Hunt approach (i.e. with Fourier diagonalisation).

Parameters:
image : (M, N) ndarray

Input degraded image

psf : ndarray

Point Spread Function. This is assumed to be the impulse response (input image space) if the data-type is real, or the transfer function (Fourier space) if the data-type is complex. There are no constraints on the shape of the impulse response. The transfer function must be of shape (M, N) if is_real is True, (M, N // 2 + 1) otherwise (see np.fft.rfftn).

balance : float

The regularisation parameter value that tunes the balance between the data adequacy, which improves frequency restoration, and the prior adequacy, which reduces frequency restoration (to avoid noise artifacts).

reg : ndarray, optional

The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. Shape constraint is the same as for the psf parameter.

is_real : boolean, optional

True by default. Specify whether psf and reg are provided under the Hermitian hypothesis, that is, only half of the frequency plane is provided (due to the redundancy of the Fourier transform of a real signal). It applies only if psf and/or reg are provided as a transfer function. For the Hermitian property see the uft module or np.fft.rfftn.

clip : boolean, optional

True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

Returns:
im_deconv : (M, N) ndarray

The deconvolved image.

Notes

This function applies the Wiener filter to a noisy image degraded by an impulse response (or PSF). If the data model is

\[y = Hx + n\]

where \(n\) is noise, \(H\) the PSF and \(x\) the unknown original image, the Wiener filter is

\[\hat x = F^\dagger (|\Lambda_H|^2 + \lambda |\Lambda_D|^2)^{-1} \Lambda_H^\dagger F y\]

where \(F\) and \(F^\dagger\) are the Fourier and inverse Fourier transforms respectively, \(\Lambda_H\) the transfer function (the Fourier transform of the PSF, see [2] below) and \(\Lambda_D\) the filter that penalizes the restored image frequencies (the Laplacian by default, that is, a penalization of high frequencies). The parameter \(\lambda\) tunes the balance between the data (which tends to increase high frequencies, even those coming from noise) and the regularization.

This method is therefore specific to a prior model, so the application or the nature of the true image must correspond to that prior model. By default, the prior model (Laplacian) introduces image smoothness or pixel correlation. It can also be interpreted as a high-frequency penalization to compensate for the instability of the solution with respect to the data (sometimes called noise amplification or an “explosive” solution).

Finally, the use of Fourier space implies a circulant property of \(H\); see [2].
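
As a purely illustrative sketch of this formula (not the skimage implementation; PSF centering and boundary handling are ignored, and the Laplacian stencil and balance value below are assumptions), the filter can be applied with NumPy FFTs:

>>> import numpy as np
>>> img = np.random.rand(64, 64)                        # placeholder degraded image
>>> psf = np.ones((5, 5)) / 25                          # placeholder impulse response
>>> H = np.fft.fft2(psf, s=img.shape)                   # transfer function Lambda_H
>>> lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])  # assumed Laplacian regulariser
>>> D = np.fft.fft2(lap, s=img.shape)                   # Lambda_D
>>> balance = 0.1                                       # the lambda parameter (illustrative value)
>>> W = np.conj(H) / (np.abs(H) ** 2 + balance * np.abs(D) ** 2)
>>> x_hat = np.real(np.fft.ifft2(W * np.fft.fft2(img)))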

References

[1] François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010).
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-7-1593
http://research.orieux.fr/files/papers/OGR-JOSA10.pdf

[2] B. R. Hunt, “A matrix theory proof of the discrete convolution theorem”, IEEE Trans. on Audio and Electroacoustics, vol. AU-19, no. 4, pp. 285-288, Dec. 1971.

Examples

>>> import numpy as np
>>> from scipy.signal import convolve2d
>>> from skimage import color, data, restoration
>>> img = color.rgb2gray(data.astronaut())
>>> psf = np.ones((5, 5)) / 25
>>> img = convolve2d(img, psf, 'same')
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> deconvolved_img = restoration.wiener(img, psf, 1100)

unsupervised_wiener

skimage.restoration.unsupervised_wiener(image, psf, reg=None, user_params=None, is_real=True, clip=True) [source]

Unsupervised Wiener-Hunt deconvolution.

Return the deconvolution with a Wiener-Hunt approach, where the hyperparameters are automatically estimated. The algorithm is a stochastic iterative process (Gibbs sampler) described in the reference below. See also wiener function.

Parameters:
image : (M, N) ndarray

The input degraded image.

psf : ndarray

The impulse response (input image space) or the transfer function (Fourier space). Both are accepted. A transfer function is automatically recognized by its complex data type (np.iscomplexobj(psf)).

reg : ndarray, optional

The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf.

user_params : dict

Dictionary of parameters for the Gibbs sampler. See below.

clip : boolean, optional

True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

Returns:
x_postmean : (M, N) ndarray

The deconvolved image (the posterior mean).

chains : dict

The keys noise and prior contain the chains of the noise and prior precisions, respectively.

Other Parameters:
The keys of user_params are:
threshold : float

The stopping criterion: the norm of the difference between two successive approximated solutions (empirical mean of object samples, see the Notes section). 1e-4 by default.

burnin : int

The number of samples to ignore before starting to compute the mean. 15 by default.

min_iter : int

The minimum number of iterations. 30 by default.

max_iter : int

The maximum number of iterations if threshold is not satisfied. 200 by default.

callback : callable (None by default)

A user-provided callable that, if given, is passed the current image sample at each iteration, for whatever purpose. The user can store the samples, or compute moments other than the mean. It has no influence on the algorithm execution and is only for inspection.
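
As a hedged illustration of these keys (not a prescribed usage; the parameter values below are arbitrary), a callback can simply record the samples passed to it:

>>> import numpy as np
>>> from scipy.signal import convolve2d
>>> from skimage import color, data, restoration
>>> img = color.rgb2gray(data.astronaut())
>>> psf = np.ones((5, 5)) / 25
>>> img = convolve2d(img, psf, 'same')
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> samples = []
>>> params = {'callback': samples.append, 'burnin': 20, 'max_iter': 150, 'threshold': 1e-4}
>>> x_post, chains = restoration.unsupervised_wiener(img, psf, user_params=params)
>>> n_samples = len(samples)   # one entry per Gibbs iteration, kept only for inspection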

Notes

The estimated image is designed as the posterior mean of a probability law (from a Bayesian analysis). The mean is defined as a sum over all the possible images weighted by their respective probability. Given the size of the problem, the exact sum is not tractable. This algorithm uses MCMC to draw images under the posterior law. The practical idea is to only draw highly probable images, since they have the biggest contribution to the mean. Conversely, less probable images are drawn less often, since their contribution is low. Finally, the empirical mean of these samples gives an estimate of the mean, which would become exact with an infinite number of samples.

References

[1] François Orieux, Jean-François Giovannelli, and Thomas Rodet, “Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution”, J. Opt. Soc. Am. A 27, 1593-1607 (2010).
http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-7-1593
http://research.orieux.fr/files/papers/OGR-JOSA10.pdf

Examples

>>> import numpy as np
>>> from scipy.signal import convolve2d
>>> from skimage import color, data, restoration
>>> img = color.rgb2gray(data.astronaut())
>>> psf = np.ones((5, 5)) / 25
>>> img = convolve2d(img, psf, 'same')
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> deconvolved_img, chains = restoration.unsupervised_wiener(img, psf)

richardson_lucy

skimage.restoration.richardson_lucy(image, psf, iterations=50, clip=True) [source]

Richardson-Lucy deconvolution.

Parameters:
image : ndarray

Input degraded image (can be N dimensional).

psf : ndarray

The point spread function.

iterations : int

Number of iterations. This parameter plays the role of regularisation.

clip : boolean, optional

True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.

Returns:
im_deconv : ndarray

The deconvolved image.

References

[1] http://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconvolution

Examples

>>> import numpy as np
>>> from scipy.signal import convolve2d
>>> from skimage import color, data, restoration
>>> camera = color.rgb2gray(data.camera())
>>> psf = np.ones((5, 5)) / 25
>>> camera = convolve2d(camera, psf, 'same')
>>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape)
>>> deconvolved = restoration.richardson_lucy(camera, psf, 5)

unwrap_phase

skimage.restoration.unwrap_phase(image, wrap_around=False, seed=None) [source]

Recover the original from a wrapped phase image.

From an image wrapped to lie in the interval [-pi, pi), recover the original, unwrapped image.

Parameters:
image : 1D, 2D or 3D ndarray of floats, optionally a masked array

The values should be in the range [-pi, pi). If a masked array is provided, the masked entries will not be changed, and their values will not be used to guide the unwrapping of neighboring, unmasked values. Masked 1D arrays are not allowed, and will raise a ValueError.

wrap_around : bool or sequence of bool, optional

When an element of the sequence is True, the unwrapping process will regard the edges along the corresponding axis of the image to be connected and use this connectivity to guide the phase unwrapping process. If only a single boolean is given, it will apply to all axes. Wrap around is not supported for 1D arrays.

seed : int, optional

Unwrapping 2D or 3D images uses random initialization. This sets the seed of the PRNG to achieve deterministic behavior.

Returns:
image_unwrapped : array_like, double

Unwrapped image of the same shape as the input. If the input image was a masked array, the mask will be preserved.

Raises:
ValueError

If called with a masked 1D array or called with a 1D array and wrap_around=True.

References

[1] Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor, and Munther A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path”, Applied Optics, Vol. 41, No. 35 (2002) 7437.
[2] Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., “Fast three-dimensional phase-unwrapping algorithm based on sorting by reliability following a non-continuous path”, in W. Osten, C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32-40, International Society for Optics and Photonics.

Examples

>>> import numpy as np
>>> from skimage.restoration import unwrap_phase
>>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j]
>>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2))
>>> image_wrapped = np.angle(np.exp(1j * image))
>>> image_unwrapped = unwrap_phase(image_wrapped)
>>> np.std(image_unwrapped - image) < 1e-6   # A constant offset is normal
True
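
A further hedged sketch, continuing from the example above, shows how a masked array keeps unreliable pixels from guiding the unwrapping (the masked region chosen here is arbitrary):

>>> image_masked = np.ma.array(image_wrapped, mask=np.zeros_like(image_wrapped, dtype=bool))
>>> image_masked.mask[60:70, 60:70] = True   # hypothetical block of unreliable phase values
>>> unwrapped_masked = unwrap_phase(image_masked, wrap_around=(False, False))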

denoise_tv_bregman

skimage.restoration.denoise_tv_bregman(image, weight, max_iter=100, eps=0.001, isotropic=True) [source]

Perform total-variation denoising using split-Bregman optimization.

Total-variation denoising (also known as total-variation regularization) tries to find an image with less total-variation under the constraint of being similar to the input image, which is controlled by the regularization parameter ([1], [2], [3], [4]).

Parameters:
image : ndarray

Input data to be denoised (converted using img_as_float).

weight : float

Denoising weight. The smaller the weight, the more denoising (at the expense of less similarity to the input). The regularization parameter lambda is chosen as 2 * weight.

eps : float, optional

Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when:

SUM((u(n) - u(n-1))**2) < eps

max_iter : int, optional

Maximal number of iterations used for the optimization.

isotropic : boolean, optional

Switch between isotropic and anisotropic TV denoising.

Returns:
u : ndarray

Denoised image.

References

[1] http://en.wikipedia.org/wiki/Total_variation_denoising
[2] Tom Goldstein and Stanley Osher, “The Split Bregman Method For L1 Regularized Problems”, ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf
[3] Pascal Getreuer, “Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman”, Image Processing On Line, 2012-05-19, http://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf
[4] http://www.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf
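
Examples

A minimal usage sketch (the weight value below is illustrative, not a tuned recommendation):

>>> import numpy as np
>>> from skimage import color, data
>>> from skimage.restoration import denoise_tv_bregman
>>> img = color.rgb2gray(data.astronaut())[:64, :64]
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> denoised = denoise_tv_bregman(img, weight=10)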

denoise_tv_chambolle

skimage.restoration.denoise_tv_chambolle(image, weight=0.1, eps=0.0002, n_iter_max=200, multichannel=False) [source]

Perform total-variation denoising on n-dimensional images.

Parameters:
image : ndarray of ints, uints or floats

Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image.

weight : float, optional

Denoising weight. The greater the weight, the more denoising (at the expense of fidelity to the input).

eps : float, optional

Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when:

(E_(n-1) - E_n) < eps * E_0

n_iter_max : int, optional

Maximal number of iterations used for the optimization.

multichannel : bool, optional

Apply total-variation denoising separately for each channel. This option should be true for color images, otherwise the denoising is also applied in the channels dimension.

Returns:
out : ndarray

Denoised image.

Notes

Make sure to set the multichannel parameter appropriately for color images.

The principle of total variation denoising is explained in http://en.wikipedia.org/wiki/Total_variation_denoising. It consists of minimizing the total variation of the image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce “cartoon-like” images, that is, piecewise-constant images.
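
As a rough numerical illustration of this quantity (not the solver itself), the total variation of a 2-D image can be approximated from its gradient:

>>> import numpy as np
>>> img = np.random.rand(64, 64)                 # placeholder image
>>> gy, gx = np.gradient(img)                    # finite-difference gradient
>>> tv = np.sum(np.sqrt(gx ** 2 + gy ** 2))      # discrete total variation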

This code is an implementation of the algorithm of Rudin, Fatemi and Osher that was proposed by Chambolle in [1].

References

[1] A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97.

Examples

2D example on astronaut image:

>>> import numpy as np
>>> from skimage import color, data
>>> from skimage.restoration import denoise_tv_chambolle
>>> img = color.rgb2gray(data.astronaut())[:50, :50]
>>> img += 0.5 * img.std() * np.random.randn(*img.shape)
>>> denoised_img = denoise_tv_chambolle(img, weight=60)

3D example on synthetic data:

>>> x, y, z = np.ogrid[0:20, 0:20, 0:20]
>>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2
>>> mask = mask.astype(float)
>>> mask += 0.2*np.random.randn(*mask.shape)
>>> res = denoise_tv_chambolle(mask, weight=100)

denoise_bilateral

skimage.restoration.denoise_bilateral(image, win_size=None, sigma_color=None, sigma_spatial=1, bins=10000, mode='constant', cval=0, multichannel=None) [source]

Denoise image using bilateral filter.

This is an edge-preserving, denoising filter. It averages pixels based on their spatial closeness and radiometric similarity [1].

Spatial closeness is measured by the Gaussian function of the Euclidean distance between two pixels and a certain standard deviation (sigma_spatial).

Radiometric similarity is measured by the Gaussian function of the Euclidean distance between two color values and a certain standard deviation (sigma_color).
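
As a hedged sketch of how these two factors combine (illustrative only, not the library implementation), the weight given to a neighboring pixel is the product of the two Gaussians:

>>> import numpy as np
>>> def bilateral_weight(d_spatial, d_color, sigma_spatial, sigma_color):
...     # spatial closeness term times radiometric similarity term
...     return (np.exp(-d_spatial ** 2 / (2 * sigma_spatial ** 2)) *
...             np.exp(-d_color ** 2 / (2 * sigma_color ** 2)))
>>> w = bilateral_weight(1.0, 0.05, sigma_spatial=1.0, sigma_color=0.1)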

Parameters:
image : ndarray, shape (M, N[, 3])

Input image, 2D grayscale or RGB.

win_size : int

Window size for filtering. If win_size is not specified, it is calculated as max(5, 2 * ceil(3 * sigma_spatial) + 1).

sigma_color : float

Standard deviation for grayvalue/color distance (radiometric similarity). A larger value results in averaging of pixels with larger radiometric differences. Note that the image will be converted using the img_as_float function, so the standard deviation is with respect to the range [0, 1]. If the value is None, the standard deviation of the image will be used.

sigma_spatial : float

Standard deviation for spatial distance. A larger value results in averaging of pixels with larger spatial differences.

bins : int

Number of discrete values for Gaussian weights of color filtering. A larger value results in improved accuracy.

mode : {‘constant’, ‘edge’, ‘symmetric’, ‘reflect’, ‘wrap’}

How to handle values outside the image borders. See numpy.pad for detail.

cval : float, optional

Used in conjunction with mode ‘constant’, the value outside the image boundaries.

multichannel : bool

Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension.

Returns:
denoised : ndarray

Denoised image.

References

[1] http://users.soe.ucsc.edu/~manduchi/Papers/ICCV98.pdf

Examples

>>> import numpy as np
>>> from skimage import data, img_as_float
>>> from skimage.restoration import denoise_bilateral
>>> astro = img_as_float(data.astronaut())
>>> astro = astro[220:300, 220:320]
>>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape)
>>> noisy = np.clip(noisy, 0, 1)
>>> denoised = denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=15, multichannel=True)

denoise_wavelet

skimage.restoration.denoise_wavelet(image, sigma=None, wavelet='db1', mode='soft', wavelet_levels=None, multichannel=False, convert2ycbcr=False, method='BayesShrink') [source]

Perform wavelet denoising on an image.

Parameters:
image : ndarray ([M[, N[, …P]][, C]) of ints, uints or floats

Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image.

sigma : float or list, optional

The noise standard deviation used when computing the wavelet detail coefficient threshold(s). When None (default), the noise standard deviation is estimated via the method in [2].

wavelet : string, optional

The type of wavelet to perform the denoising with; it can be any of the options that pywt.wavelist outputs. The default is ‘db1’. For example, wavelet can be any of {'db2', 'haar', 'sym9'} and many more.

mode : {‘soft’, ‘hard’}, optional

An optional argument to choose the type of denoising performed. Note that choosing soft thresholding given additive noise finds the best approximation of the original image.

wavelet_levels : int or None, optional

The number of wavelet decomposition levels to use. The default is three less than the maximum number of possible decomposition levels.

multichannel : bool, optional

Apply wavelet denoising separately for each channel (where channels correspond to the final axis of the array).

convert2ycbcr : bool, optional

If True and multichannel True, do the wavelet denoising in the YCbCr colorspace instead of the RGB color space. This typically results in better performance for RGB images.

method : {‘BayesShrink’, ‘VisuShrink’}, optional

Thresholding method to be used. The currently supported methods are “BayesShrink” [1] and “VisuShrink” [2]. Defaults to “BayesShrink”.

Returns:
out : ndarray

Denoised image.

Notes

The wavelet domain is a sparse representation of the image, and can be thought of similarly to the frequency domain of the Fourier transform. Sparse representations have most values zero or near-zero and truly random noise is (usually) represented by many small values in the wavelet domain. Setting all values below some threshold to 0 reduces the noise in the image, but larger thresholds also decrease the detail present in the image.

If the input is 3D, this function performs wavelet denoising on each color plane separately. The output image is clipped to either [-1, 1] or [0, 1], depending on the input image range.

When YCbCr conversion is done, every color channel is scaled between 0 and 1, and sigma values are applied to these scaled color channels.

Many wavelet coefficient thresholding approaches have been proposed. By default, denoise_wavelet applies BayesShrink, which is an adaptive thresholding method that computes separate thresholds for each wavelet sub-band as described in [1].

If method == "VisuShrink", a single “universal threshold” is applied to all wavelet detail coefficients as described in [2]. This threshold is designed to remove all Gaussian noise at a given sigma with high probability, but tends to produce images that appear overly smooth.

References

[1] Chang, S. Grace, Bin Yu, and Martin Vetterli. “Adaptive wavelet thresholding for image denoising and compression.” Image Processing, IEEE Transactions on 9.9 (2000): 1532-1546. DOI: 10.1109/83.862633
[2] D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. DOI: 10.1093/biomet/81.3.425

Examples

>>> import numpy as np
>>> from skimage import color, data, img_as_float
>>> from skimage.restoration import denoise_wavelet
>>> img = img_as_float(data.astronaut())
>>> img = color.rgb2gray(img)
>>> img += 0.1 * np.random.randn(*img.shape)
>>> img = np.clip(img, 0, 1)
>>> denoised_img = denoise_wavelet(img, sigma=0.1)

denoise_nl_means

skimage.restoration.denoise_nl_means(image, patch_size=7, patch_distance=11, h=0.1, multichannel=None, fast_mode=True, sigma=0.0) [source]

Perform non-local means denoising on 2-D or 3-D grayscale images, and 2-D RGB images.

Parameters:
image : 2D or 3D ndarray

Input image to be denoised, which can be 2D or 3D, and grayscale or RGB (for 2D images only, see multichannel parameter).

patch_size : int, optional

Size of patches used for denoising.

patch_distance : int, optional

Maximal distance in pixels to search for patches used for denoising.

h : float, optional

Cut-off distance (in gray levels). The higher h, the more permissive one is in accepting patches. A higher h results in a smoother image, at the expense of blurring features. For Gaussian noise of standard deviation sigma, a rule of thumb is to choose the value of h to be sigma or slightly less.

multichannel : bool, optional

Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. Set to False for 3-D images.

fast_mode : bool, optional

If True (default value), a fast version of the non-local means algorithm is used. If False, the original version of non-local means is used. See the Notes section for more details about the algorithms.

sigma : float, optional

The standard deviation of the (Gaussian) noise. If provided, a more robust computation of patch weights is used, taking the expected noise variance into account (see Notes below).

Returns:
result : ndarray

Denoised image, of same shape as image.

Notes

The non-local means algorithm is well suited for denoising images with specific textures. The principle of the algorithm is to average the value of a given pixel with values of other pixels in a limited neighbourhood, provided that the patches centered on the other pixels are similar enough to the patch centered on the pixel of interest.

In the original version of the algorithm [1], corresponding to fast_mode=False, the computational complexity is:

image.size * patch_size ** image.ndim * patch_distance ** image.ndim

Hence, changing the size of patches or their maximal distance has a strong effect on computing times, especially for 3-D images.

However, the default behavior corresponds to fast_mode=True, for which another version of non-local means [2] is used, corresponding to a complexity of:

image.size * patch_distance ** image.ndim

The computing time depends only weakly on the patch size, thanks to the computation of the integral of patch distances for a given shift, which reduces the number of operations [1]. Therefore, this algorithm executes faster than the classic algorithm (fast_mode=False), at the expense of using twice as much memory. This implementation has been proven to be more efficient compared to other alternatives, see e.g. [3].

Compared to the classic algorithm, all pixels of a patch contribute to the distance to another patch with the same weight, no matter their distance to the center of the patch. This coarser computation of the distance can result in a slightly poorer denoising performance. Moreover, for small images (images with a linear size that is only a few times the patch size), the classic algorithm can be faster due to boundary effects.

The image is padded using the reflect mode of skimage.util.pad before denoising.

If the noise standard deviation, sigma, is provided a more robust computation of patch weights is used. Subtracting the known noise variance from the computed patch distances improves the estimates of patch similarity, giving a moderate improvement to denoising performance [4]. It was also mentioned as an option for the fast variant of the algorithm in [3].

When sigma is provided, a smaller h should typically be used to avoid oversmoothing. The optimal value for h depends on the image content and noise level, but a reasonable starting point is h = 0.8 * sigma when fast_mode is True, or h = 0.6 * sigma when fast_mode is False.
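
As a hedged sketch of the sigma-aware call described above (the noise level is arbitrary; h = 0.8 * sigma follows the rule of thumb for fast_mode=True):

>>> import numpy as np
>>> from skimage.restoration import denoise_nl_means, estimate_sigma
>>> noisy = np.zeros((40, 40))
>>> noisy[10:-10, 10:-10] = 1.
>>> noisy += 0.3 * np.random.randn(*noisy.shape)
>>> sigma_est = estimate_sigma(noisy, multichannel=False)
>>> denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
...                             h=0.8 * sigma_est, sigma=sigma_est, fast_mode=True)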

References

[1] A. Buades, B. Coll, & J-M. Morel. A non-local algorithm for image denoising. In CVPR 2005, Vol. 2, pp. 60-65, IEEE. DOI: 10.1109/CVPR.2005.38
[2] J. Darbon, A. Cunha, T. F. Chan, S. Osher, and G. J. Jensen, Fast nonlocal filtering applied to electron cryomicroscopy, in 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008, pp. 1331-1334. DOI: 10.1109/ISBI.2008.4541250
[3] Jacques Froment. Parameter-Free Fast Pixelwise Non-Local Means Denoising. Image Processing On Line, 2014, vol. 4, pp. 300-326. DOI: 10.5201/ipol.2014.120
[4] A. Buades, B. Coll, & J-M. Morel. Non-Local Means Denoising. Image Processing On Line, 2011, vol. 1, pp. 208-212. DOI: 10.5201/ipol.2011.bcm_nlm

Examples

>>> import numpy as np
>>> from skimage.restoration import denoise_nl_means
>>> a = np.zeros((40, 40))
>>> a[10:-10, 10:-10] = 1.
>>> a += 0.3 * np.random.randn(*a.shape)
>>> denoised_a = denoise_nl_means(a, 7, 5, 0.1)

estimate_sigma

skimage.restoration.estimate_sigma(image, average_sigmas=False, multichannel=False) [source]

Robust wavelet-based estimator of the (Gaussian) noise standard deviation.

Parameters:
image : ndarray

Image for which to estimate the noise standard deviation.

average_sigmas : bool, optional

If True, average the channel estimates of sigma. Otherwise return a list of sigmas corresponding to each channel.

multichannel : bool

Estimate sigma separately for each channel.

Returns:
sigma : float or list

Estimated noise standard deviation(s). If multichannel is True and average_sigmas is False, a separate noise estimate for each channel is returned. Otherwise, the average of the individual channel estimates is returned.

Notes

This function assumes the noise follows a Gaussian distribution. The estimation algorithm is based on the median absolute deviation of the wavelet detail coefficients as described in section 4.2 of [1].
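
As a hedged sketch of that idea (not the library implementation; the wavelet choice and the use of a single decomposition level are assumptions), the estimator can be written with PyWavelets:

>>> import numpy as np
>>> import pywt
>>> img = 0.1 * np.random.randn(64, 64)               # placeholder pure-noise image, sigma = 0.1
>>> detail = pywt.dwtn(img, wavelet='db2')['dd']      # finest-scale diagonal detail coefficients
>>> sigma_mad = np.median(np.abs(detail)) / 0.6745    # MAD-based estimate of the noise sigma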

References

[1] D. L. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage.” Biometrika 81.3 (1994): 425-455. DOI: 10.1093/biomet/81.3.425

Examples

>>> import numpy as np
>>> import skimage.data
>>> from skimage import img_as_float
>>> from skimage.restoration import estimate_sigma
>>> img = img_as_float(skimage.data.camera())
>>> sigma = 0.1
>>> img = img + sigma * np.random.standard_normal(img.shape)
>>> sigma_hat = estimate_sigma(img, multichannel=False)

inpaint_biharmonic

skimage.restoration.inpaint_biharmonic(image, mask, multichannel=False) [source]

Inpaint masked points in image with biharmonic equations.

Parameters:
image : (M[, N[, …, P]][, C]) ndarray

Input image.

mask : (M[, N[, …, P]]) ndarray

Array of pixels to be inpainted. It must have the same shape as one of the image channels. Unknown pixels are represented with 1, known pixels with 0.

multichannel : boolean, optional

If True, the last image dimension is considered as a color channel, otherwise as spatial.

Returns:
out : (M[, N[, …, P]][, C]) ndarray

Input image with masked pixels inpainted.

References

[1] N. S. Hoang, S. B. Damelin, “On surface completion and image inpainting by biharmonic functions: numerical aspects”, https://arxiv.org/abs/1707.06567
[2] C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of Smooth Functions on Manifolds, Appl. and Comp. Harmonic Anal., 28 (2010), 104-113. DOI: 10.1016/j.acha.2009.04.004

Examples

>>> import numpy as np
>>> from skimage.restoration import inpaint_biharmonic
>>> img = np.tile(np.square(np.linspace(0, 1, 5)), (5, 1))
>>> mask = np.zeros_like(img)
>>> mask[2, 2:] = 1
>>> mask[1, 3:] = 1
>>> mask[0, 4:] = 1
>>> out = inpaint_biharmonic(img, mask)

cycle_spin

skimage.restoration.cycle_spin(x, func, max_shifts, shift_steps=1, num_workers=None, multichannel=False, func_kw={}) [source]

Cycle spinning (repeatedly apply func to shifted versions of x).

Parameters:
x : array-like

Data for input to func.

func : function

A function to apply to circularly shifted versions of x. Should take x as its first argument. Any additional arguments can be supplied via func_kw.

max_shifts : int or tuple

If an integer, shifts in range(0, max_shifts+1) will be used along each axis of x. If a tuple, range(0, max_shifts[i]+1) will be used along axis i.

shift_steps : int or tuple, optional

The step size for the shifts applied along axis i is given by range(0, max_shifts[i]+1, shift_steps[i]). If an integer is provided, the same step size is used for all axes.

num_workers : int or None, optional

The number of parallel threads to use during cycle spinning. If set to None, the full set of available cores is used.

multichannel : bool, optional

Whether to treat the final axis as channels (no cycle shifts are performed over the channels axis).

func_kw : dict, optional

Additional keyword arguments to supply to func.

Returns:
avg_y : np.ndarray

The output of func(x, **func_kw) averaged over all combinations of the specified axis shifts.

Notes

Cycle spinning was proposed as a way to approach shift-invariance via performing several circular shifts of a shift-variant transform [1].

For an n-level discrete wavelet transform, one may wish to perform all shifts up to max_shifts = 2**n - 1. In practice, much of the benefit can often be realized with only a small number of shifts per axis.

For transforms such as the blockwise discrete cosine transform, one may wish to evaluate shifts up to the block size used by the transform.
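
As a hedged sketch of per-axis control (the shift values and the sigma passed through func_kw are illustrative):

>>> import numpy as np
>>> from skimage.restoration import cycle_spin, denoise_wavelet
>>> img = np.clip(np.random.rand(64, 64), 0, 1)          # placeholder image
>>> denoised = cycle_spin(img, func=denoise_wavelet, max_shifts=(3, 3),
...                       shift_steps=(1, 1), func_kw={'sigma': 0.1})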

References

[1] R. R. Coifman and D. L. Donoho. “Translation-Invariant De-Noising”. Wavelets and Statistics, Lecture Notes in Statistics, vol. 103. Springer, New York, 1995, pp. 125-150. DOI: 10.1007/978-1-4612-2544-7_9

Examples

>>> import numpy as np
>>> import skimage.data
>>> from skimage import img_as_float
>>> from skimage.restoration import denoise_wavelet, cycle_spin
>>> img = img_as_float(skimage.data.camera())
>>> sigma = 0.1
>>> img = img + sigma * np.random.standard_normal(img.shape)
>>> denoised = cycle_spin(img, func=denoise_wavelet, max_shifts=3)

© 2011 the scikit-image team
Licensed under the BSD 3-clause License.
http://scikit-image.org/docs/0.14.x/api/skimage.restoration.html