Solving the Poisson equation: FFT domain vs. finite differences

Updated: 2023-02-27 09:24:13


I have two solutions of the Poisson equation:

The first one uses a finite-difference scheme (code below):

import numpy as np

# Some variable declarations
nx = 300
ny = 300
nt = 100   # number of Jacobi iterations
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.

dx = (xmax - xmin) / (nx - 1)
dy = (ymax - ymin) / (ny - 1)

# Initialization
p  = np.zeros((nx, ny))
pd = np.zeros((nx, ny))
b  = np.zeros((nx, ny))

# Source: one positive and one negative point charge
b[int(nx / 4), int(ny / 4)] = 100
b[int(3 * nx / 4), int(3 * ny / 4)] = -100

for it in range(nt):
    pd = p.copy()
    # Jacobi update of the 5-point discrete Poisson equation
    p[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                      (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                      b[1:-1, 1:-1] * dx**2 * dy**2) /
                     (2 * (dx**2 + dy**2)))

    # Dirichlet boundary conditions: zero potential on the walls
    p[0, :]      = 0
    p[nx - 1, :] = 0
    p[:, 0]      = 0
    p[:, ny - 1] = 0

Using FFT I have the following code:

def poisson(b, nptx, npty, dx, dy, nboundaryx, nboundaryy):
    # Zero-padded copy of the source (the padding, if any, acts as extra
    # boundary space)
    bpad = np.zeros((nptx + nboundaryx, npty + nboundaryy))
    bpad[:nptx, :npty] = b

    # Angular frequencies; axis 0 corresponds to x, axis 1 to y
    kxpad = 2 * np.pi * np.fft.fftfreq(nptx + nboundaryx, d=dx)
    kypad = 2 * np.pi * np.fft.fftfreq(npty + nboundaryy, d=dy)

    # Continuous-space symbol kx**2 + ky**2; epsilon guards the k = 0 mode.
    # Note kxpad must vary along axis 0 and kypad along axis 1.
    epsilon = 1.e-9
    ksq = np.maximum(kxpad[:, None]**2 + kypad[None, :]**2, epsilon)
    ppad = np.real(np.fft.ifft2(-np.fft.fft2(bpad) / ksq))

    p = ppad[:nptx, :npty]

    # Enforce zero potential on the walls
    p[0, :]        = 0
    p[nptx - 1, :] = 0
    p[:, 0]        = 0
    p[:, npty - 1] = 0

    return p

nptx = 300
npty = 300
b  = np.zeros((nptx, npty))
b[int(nptx / 4), int(npty / 4)]  = 100
b[int(3 * nptx / 4), int(3 * npty / 4)] = -100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
nboundaryx = 0
nboundaryy = 0
dx = (xmax - xmin) / (nptx+nboundaryx - 1)
dy = (ymax - ymin) / (npty+nboundaryy - 1)

print(dx)
p = poisson(b,nptx,npty,dx,dy,nboundaryx,nboundaryy)

The results are:

  1. First image using Finite Difference

  2. Second image using FFT

I know the FD scheme is correct, but I am not sure whether I did the FFT version correctly. I see a round shape in the FFT result; is this correct?

There are two main differences:

For the finite differences you are calculating discrete differences; for the FFT solution you are computing the Poisson operator on continuous space and applying that to your equation. To compute the finite differences in exactly the same way, you would need to use the discrete difference operator in the frequency domain. Instead of deriving it from scratch, you can remember that fft(roll(x, 1)) = exp(-2j * np.pi * np.fft.fftfreq(N)) * fft(x), where roll denotes a circular shift by one sample.
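The shift identity, and the discrete symbol of the second difference that follows from it, can be checked numerically with a small sketch (the array and grid size here are mine, not from the post):

```python
import numpy as np

# Verify the shift identity fft(roll(x, 1)) == exp(-2j*pi*fftfreq(N)) * fft(x)
N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)

lhs = np.fft.fft(np.roll(x, 1))
rhs = np.exp(-2j * np.pi * np.fft.fftfreq(N)) * np.fft.fft(x)

# From the identity, the centred second difference x[i+1] - 2*x[i] + x[i-1]
# has the exact discrete symbol 2*cos(2*pi*k/N) - 2, which the continuous
# symbol only approximates for small k.
second_diff = np.roll(x, -1) - 2 * x + np.roll(x, 1)
discrete_symbol = 2 * np.cos(2 * np.pi * np.fft.fftfreq(N)) - 2
```

Dividing by this discrete symbol instead of by the continuous -(kx**2 + ky**2) makes the FFT solve agree with the finite-difference operator exactly.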

The other point is that you are using boundary conditions (zero potential on the walls). The quick and dirty solution is to use the method of image charges to ensure the potential vanishes on the walls and to compute the Poisson equation on the augmented space. If you care about memory usage or solution purity, you could use the sine transform instead, which has slightly more complicated translation formulas but can be computed without augmenting the space, since the potential is forced to be zero on the boundaries by its very definition (because sin(pi * n) = 0 for any integer n).
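The image-charge construction can be sketched with plain NumPy on the question's grid (the variable names are mine): the interior source is oddly extended in both directions, solved periodically with the discrete symbol, and cropped back, which forces the potential to vanish on the walls.

```python
import numpy as np

# Image-charge sketch: oddly extend the source, solve periodically with
# the discrete Laplacian symbol, crop back to the original grid.
nx, ny = 300, 300
dx, dy = 2.0 / (nx - 1), 1.0 / (ny - 1)

b = np.zeros((nx, ny))
b[nx // 4, ny // 4] = 100
b[3 * nx // 4, 3 * ny // 4] = -100

bi = b[1:-1, 1:-1]                      # interior source (walls are zero)
mx, my = bi.shape
px, py = 2 * (mx + 1), 2 * (my + 1)     # period of the odd extension

ext = np.zeros((px, py))
ext[1:mx + 1, 1:my + 1] = bi            # original charges
ext[mx + 2:, 1:my + 1] = -bi[::-1, :]   # images mirrored in x
ext[1:mx + 1, my + 2:] = -bi[:, ::-1]   # images mirrored in y
ext[mx + 2:, my + 2:] = bi[::-1, ::-1]  # doubly mirrored images

# Discrete symbol of the 5-point Laplacian; axis 0 is x, axis 1 is y
lx = (2 * np.cos(2 * np.pi * np.fft.fftfreq(px)) - 2) / dx**2
ly = (2 * np.cos(2 * np.pi * np.fft.fftfreq(py)) - 2) / dy**2
denom = lx[:, None] + ly[None, :]
denom[0, 0] = 1.0                       # the mean mode of an odd extension is zero anyway

phat = np.fft.fft2(ext) / denom
phat[0, 0] = 0.0
pext = np.real(np.fft.ifft2(phat))

p = np.zeros((nx, ny))                  # walls stay exactly zero
p[1:-1, 1:-1] = pext[1:mx + 1, 1:my + 1]
```

Because the discrete symbol is used, the cropped solution satisfies the same 5-point finite-difference equation as the iterative scheme, but in a single direct solve.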

The solution in the frequency domain is a direct solution: you calculate each coefficient with a closed formula and then perform the inverse Fourier transform; no iteration is required. The accuracy tends to be good as well, as long as you compute the differences with enough accuracy.
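As a minimal illustration of such a direct solve (a 1D periodic example of my own, not the poster's grid): every Fourier coefficient comes from one closed formula, and the finite-difference equation is then satisfied to machine precision without any iteration.

```python
import numpy as np

# Direct 1D periodic Poisson solve using the discrete symbol of the
# second difference.
N, dx = 128, 0.1
rng = np.random.default_rng(1)
b = rng.standard_normal(N)
b -= b.mean()                       # periodic solvability: zero-mean source

sym = (2 * np.cos(2 * np.pi * np.fft.fftfreq(N)) - 2) / dx**2
sym[0] = 1.0                        # avoid 0/0; the k = 0 coefficient is zeroed below
phat = np.fft.fft(b) / sym
phat[0] = 0.0
p = np.real(np.fft.ifft(phat))

# The discrete equation (p[i+1] - 2*p[i] + p[i-1]) / dx**2 == b[i]
# now holds up to floating-point rounding.
lap = (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx**2
```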

If you are really worried about this, you should focus on differences like (1 - exp(2j*pi/N)): because the second term is close to 1, the number of significant bits will be reduced. But you can improve the accuracy of such expressions by factoring them as exp(1j*pi/N) * (exp(-1j*pi/N) - exp(1j*pi/N)) = exp(1j*pi/N) * (-2j * sin(pi/N)), where you have a product and you don't lose any significant bits. All of this is more important if you are computing in single or half precision (you will probably not notice any rounding error using numpy.float64 or numpy.complex128).
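The cancellation can be demonstrated in single precision (N here is an arbitrary large value chosen by me): the naive form loses the real part of the difference entirely, while the factored form keeps close to full relative accuracy.

```python
import numpy as np

# Cancellation demo: compare 1 - exp(2j*pi/N) computed naively in
# complex64 against the factored form exp(1j*pi/N) * (-2j*sin(pi/N)).
N = 10**6
ref = 1 - np.exp(2j * np.pi / N)    # double-precision reference

theta = np.float32(2 * np.pi / N)
naive = np.complex64(1) - np.exp(np.complex64(1j) * theta)

half = np.float32(np.pi / N)
factored = np.exp(np.complex64(1j) * half) * (np.complex64(-2j) * np.sin(half))
```

The true real part is 2*sin(pi/N)**2, about 2e-11 for this N; in float32 cos(2*pi/N) rounds to exactly 1, so the naive subtraction returns a real part of zero, while the factored product preserves it.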

If you calculate in the frequency domain and you are not happy with the accuracy, you can always "refine" it with some iterations of your finite-difference equation.
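Such a refinement step can be sketched as follows (the function names are mine): take any approximate solution and polish it with a few Jacobi sweeps of the same finite-difference update used in the question.

```python
import numpy as np

def jacobi_refine(p0, b, dx, dy, iters=50):
    # Polish an approximate solution with Jacobi sweeps of the 5-point scheme
    p = p0.copy()
    for _ in range(iters):
        p[1:-1, 1:-1] = (((p[1:-1, 2:] + p[1:-1, :-2]) * dy**2 +
                          (p[2:, 1:-1] + p[:-2, 1:-1]) * dx**2 -
                          b[1:-1, 1:-1] * dx**2 * dy**2) /
                         (2 * (dx**2 + dy**2)))
    return p

def residual(p, b, dx, dy):
    # Max norm of lap(p) - b on the interior points
    lap = ((p[1:-1, 2:] - 2 * p[1:-1, 1:-1] + p[1:-1, :-2]) / dy**2 +
           (p[2:, 1:-1] - 2 * p[1:-1, 1:-1] + p[:-2, 1:-1]) / dx**2)
    return np.abs(lap - b[1:-1, 1:-1]).max()

# Toy usage on a small grid: even starting from all zeros, the sweeps
# shrink the residual of the discrete equation.
nx, ny = 64, 64
dx = dy = 1.0 / (nx - 1)
b = np.zeros((nx, ny))
b[nx // 4, ny // 4], b[3 * nx // 4, 3 * ny // 4] = 100, -100

p0 = np.zeros((nx, ny))
p1 = jacobi_refine(p0, b, dx, dy)
```

Starting from a frequency-domain solution instead of zeros simply means the sweeps only need to remove the (already small) discrepancy between the two operators.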