
How to Remove a Repetitive Pattern from an Image Using the FFT

Updated: 2022-10-19 13:13:03


    • I have an image of skin colour with a repetitive pattern (horizontal white lines), generated by a scanner that uses a line of sensors to capture the photo.

    • My question is how to denoise the image effectively using the FFT without affecting the image quality much. Somebody told me that I have to manually suppress the lines that appear in the magnitude spectrum, but I don't know how to do that. Can you please tell me how? (A rough sketch of this idea appears just after my code below.)

    • My approach is to use the Fast Fourier Transform (FFT) to denoise the image channel by channel.

    • I have tried an HPF and an LPF in the Fourier domain, but the results were not good, as you can see:

    My Code:

    from skimage.io import imread, imsave
    from matplotlib import pyplot as plt
    import numpy as np
    
    img = imread('skin.jpg')
    
    # skimage.io.imread returns channels in RGB order.
    R = img[..., 0]
    G = img[..., 1]
    B = img[..., 2]
    
    f1 = np.fft.fft2(R)
    fshift1 = np.fft.fftshift(f1)
    phase_spectrumR = np.angle(fshift1)
    magnitude_spectrumR = 20*np.log(np.abs(fshift1))
    
    f2 = np.fft.fft2(G)
    fshift2 = np.fft.fftshift(f2)
    phase_spectrumG = np.angle(fshift2)
    magnitude_spectrumG = 20*np.log(np.abs(fshift2))
    
    f3 = np.fft.fft2(B)
    fshift3 = np.fft.fftshift(f3)
    phase_spectrumB = np.angle(fshift3)
    magnitude_spectrumB = 20*np.log(np.abs(fshift3))
    
    #===============================
    # LPF: keep only the central (low-frequency) block of the shifted spectrum.
    # (For an HPF, start from fshift1 and zero out this central block instead.)
    magR = np.zeros_like(R, dtype=float)
    magR[magR.shape[0]//4:3*magR.shape[0]//4,
         magR.shape[1]//4:3*magR.shape[1]//4] = np.abs(
             fshift1[magR.shape[0]//4:3*magR.shape[0]//4,
                     magR.shape[1]//4:3*magR.shape[1]//4])
    resR = np.abs(np.fft.ifft2(np.fft.ifftshift(magR)))
    resR = R - resR
    #===============================
    plt.subplot(221)
    plt.imshow(R, cmap='gray')
    plt.title('Original')
    
    plt.subplot(222)
    plt.imshow(magnitude_spectrumR, cmap='gray')
    plt.title('Magnitude Spectrum')
    
    plt.subplot(223)
    plt.imshow(phase_spectrumR, cmap='gray')
    plt.title('Phase Spectrum')
    
    plt.subplot(224)
    plt.imshow(resR, cmap='gray')
    plt.title('Processed')
    
    plt.show()
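
    For reference, the "manual suppression" mentioned in the question amounts to zeroing small notches in the shifted spectrum at the vertical frequency of the line pattern. A minimal sketch of that idea follows; the file names, line period, and notch sizes are illustrative assumptions, not values taken from the post:

    import numpy as np
    from skimage.io import imread, imsave

    img = imread('skin.jpg').astype(float)   # assumed RGB input
    rows, cols = img.shape[:2]
    line_freq = 1/12     # assumed line period of ~12 pixels (cycles/pixel)
    notch_h = 2          # notch half-height in frequency bins (rows)
    notch_w = 10         # notch half-width in frequency bins (columns)

    out = np.empty_like(img)
    for ch in range(img.shape[2]):
        F = np.fft.fftshift(np.fft.fft2(img[..., ch]))
        # Horizontal lines concentrate energy near the vertical-frequency axis,
        # at row offsets of +/- line_freq * rows from the spectrum centre.
        for sign in (+1, -1):
            r0 = int(round(rows/2 + sign * line_freq * rows))
            c0 = cols // 2
            F[r0 - notch_h:r0 + notch_h + 1, c0 - notch_w:c0 + notch_w + 1] = 0
        out[..., ch] = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    imsave('skin_notch.jpg', np.clip(out, 0, 255).astype(np.uint8))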
    

    Here is a simple and effective linear filtering strategy to remove the horizontal line artifact:

    Outline:

    1. Estimate the frequency of the distortion by looking for a peak in the image's power spectrum in the vertical dimension. The function scipy.signal.welch is useful for this.

    2. Design two filters: a highpass filter with cutoff just below the distortion frequency and a lowpass filter with cutoff near DC. We'll apply the highpass filter vertically and the lowpass filter horizontally to try to isolate the distortion. We'll use scipy.signal.firwin to design these filters, though there are many ways this could be done.

    3. Compute the restored image as "image − (hpf ⊗ lpf) ∗ image".

    Code:

    # Copyright 2021 Google LLC.
    # SPDX-License-Identifier: Apache-2.0
    
    import numpy as np
    from scipy.ndimage import convolve1d
    from scipy.signal import firwin, welch
    
    def remove_lines(image, distortion_freq=None, num_taps=65, eps=0.025):
      """Removes horizontal line artifacts from scanned image.
      Args:
        image: 2D or 3D array.
        distortion_freq: Float, distortion frequency in cycles/pixel, or
          `None` to estimate from spectrum.
        num_taps: Integer, number of filter taps to use in each dimension.
        eps: Small positive param to adjust filters cutoffs (cycles/pixel).
      Returns:
        Denoised image.
      """
      image = np.asarray(image, float)
      if distortion_freq is None:
        distortion_freq = estimate_distortion_freq(image)
    
      hpf = firwin(num_taps, distortion_freq - eps,
                   pass_zero='highpass', fs=1)
      lpf = firwin(num_taps, eps, pass_zero='lowpass', fs=1)
      return image - convolve1d(convolve1d(image, hpf, axis=0), lpf, axis=1)
    
    def estimate_distortion_freq(image, min_frequency=1/25):
      """Estimates distortion frequency as spectral peak in vertical dim."""
      f, pxx = welch(np.reshape(image, (len(image), -1), 'C').sum(axis=1))
      pxx[f < min_frequency] = 0.0
      return f[pxx.argmax()]
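
    A minimal usage sketch; the input and output file names are assumptions, and remove_lines accepts 2D grayscale or 3D colour arrays:

    import numpy as np
    from skimage.io import imread, imsave

    img = imread('skin.jpg')            # assumed input file
    denoised = remove_lines(img)        # distortion frequency is estimated automatically
    imsave('skin_denoised.jpg', np.clip(denoised, 0, 255).astype(np.uint8))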
    

    Examples:

    On the portrait image, estimate_distortion_freq estimates that the frequency of the distortion is 0.1094 cycles/pixel (period of 9.14 pixels). The transfer function of the filtering "image − (hpf ⊗ lpf) ∗ image" looks like this:
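
    A plot of that transfer function can be reproduced with a sketch like the one below, using the frequency quoted above and the default num_taps and eps from remove_lines (the FFT size used for plotting is arbitrary):

    import numpy as np
    from scipy.signal import firwin
    from matplotlib import pyplot as plt

    distortion_freq = 0.1094                   # value reported for the portrait image
    num_taps, eps = 65, 0.025                  # defaults from remove_lines
    hpf = firwin(num_taps, distortion_freq - eps, pass_zero='highpass', fs=1)
    lpf = firwin(num_taps, eps, pass_zero='lowpass', fs=1)

    kernel = np.outer(hpf, lpf)                # separable 2D kernel (hpf ⊗ lpf)
    H = 1.0 - np.fft.fft2(kernel, (256, 256))  # transfer function of image − (hpf ⊗ lpf) ∗ image
    plt.imshow(np.fft.fftshift(np.abs(H)), cmap='gray')
    plt.title('Magnitude response of image − (hpf ⊗ lpf) ∗ image')
    plt.show()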

    Here is the filtered output from remove_lines:

    On the skin image, estimate_distortion_freq estimates that the frequency of the distortion is 0.08333 cycles/pixel (period of 12.0 pixels). Filtered output from remove_lines:

    The distortion is mostly removed in both examples. It isn't perfect: on the portrait image, a couple of ripples are still visible near the top and bottom borders, a typical artifact of using large filters or Fourier methods. Still, it's a good improvement over the original images.
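
    As a quick sanity check, estimate_distortion_freq can be tested on a synthetic image with a known line period; all the values below are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    clean = rng.uniform(100, 150, size=(512, 512))         # flat background with mild random variation
    lines = 20 * np.cos(2 * np.pi * np.arange(512) / 12)   # horizontal line pattern, period 12 px
    noisy = clean + lines[:, None]
    print(estimate_distortion_freq(noisy))                  # expect roughly 1/12 ≈ 0.083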