How to compute a second-order Jacobian in PyTorch?

Updated: 2021-07-25 16:12:18

So, as @jodag mentioned in his comment, ReLU is either zero or linear, so its gradient is piecewise constant (except at 0, which is a rare event) and its second-order derivative is zero. I changed the activation function to Tanh, which finally let me compute the Jacobian twice.
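
To illustrate the point, here is a minimal sketch (not part of the original answer) that differentiates a scalar twice with torch.autograd.grad: through Tanh the second derivative is non-zero, while through ReLU the same double differentiation only ever yields zeros, because the ReLU gradient is piecewise constant.

import torch

x = torch.tensor(0.5, requires_grad=True)

# Tanh: the second derivative is non-zero.
y = torch.tanh(x)
(dy,) = torch.autograd.grad(y, x, create_graph=True)
(d2y,) = torch.autograd.grad(dy, x)
print(d2y)  # -2*tanh(0.5)*(1 - tanh(0.5)**2) ≈ -0.727

# ReLU: the gradient is piecewise constant, so differentiating it again
# never recovers any curvature.
y = torch.relu(x)
(dy,) = torch.autograd.grad(y, x, create_graph=True)
(d2y,) = torch.autograd.grad(dy, x, allow_unused=True)
print(d2y)  # zero (or None, if x drops out of the backward graph entirely)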

The final code is:

import torch
import torch.nn as nn

class PINN(torch.nn.Module):

    def __init__(self, layers: list):
        super(PINN, self).__init__()
        # Fully connected network: Tanh after every hidden layer,
        # linear output layer at the end.
        self.linears = nn.ModuleList([])
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.Tanh())
        self.linears.append(nn.Linear(layers[-2], layers[-1]))

    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x

    def compute_u_x(self, x):
        # First-order Jacobian; create_graph=True keeps the graph so the
        # result can be differentiated again.
        self.u_x = torch.autograd.functional.jacobian(self, x, create_graph=True)
        self.u_x = torch.squeeze(self.u_x)
        return self.u_x

    def compute_u_xx(self, x):
        # Second-order Jacobian: differentiate the first-order Jacobian.
        self.u_xx = torch.autograd.functional.jacobian(self.compute_u_x, x)
        self.u_xx = torch.squeeze(self.u_xx)
        return self.u_xx

Then calling compute_u_xx(x) on an instance of PINN, with x.requires_grad set to True, gets me there. How to get rid of the useless dimensions introduced by torch.autograd.functional.jacobian remains to be understood, though...
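
For reference, a hypothetical usage sketch (the layer sizes and the five sample points below are made up, not from the original answer). The extra dimensions appear because jacobian differentiates every output in the batch against every input in the batch; since sample i only depends on x_i, everything off the "diagonal" is zero and can be discarded.

import torch

model = PINN(layers=[1, 20, 20, 1])  # hypothetical layer sizes

x = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)
x.requires_grad_(True)

u_x = model.compute_u_x(x)    # shape (5, 5): d u_i / d x_j, zero for i != j
u_xx = model.compute_u_xx(x)  # shape (5, 5, 5): d^2 u_i / (d x_j d x_k)

# Keep only the per-sample derivatives sitting on the diagonal.
idx = torch.arange(x.shape[0])
du = torch.diagonal(u_x)     # d u_i / d x_i, shape (5,)
d2u = u_xx[idx, idx, idx]    # d^2 u_i / d x_i^2, shape (5,)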