So as @jodag mentioned in his comment, ReLU is piecewise linear (either zero or the identity), so its gradient is piecewise constant (except at 0, which is a rare event) and its second-order derivative is zero. I changed the activation function to Tanh, which finally allows me to compute the jacobian twice.
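A minimal sketch (my own illustration, not from the original answer) of this difference, composing torch.autograd.functional.jacobian with itself directly on the raw activations:

import torch

def second_jacobian(f, x):
    # jacobian of the jacobian; create_graph=True keeps the first
    # jacobian differentiable so it can be differentiated again
    first = lambda y: torch.autograd.functional.jacobian(f, y, create_graph=True)
    return torch.autograd.functional.jacobian(first, x)

x = torch.tensor([0.5, 1.5])
print(second_jacobian(torch.relu, x))  # all zeros: ReLU is piecewise linear
print(second_jacobian(torch.tanh, x))  # non-zero entries on the diagonal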
The final code is
import torch
import torch.nn as nn
class PINN(torch.nn.Module):
    def __init__(self, layers: list):
        super(PINN, self).__init__()
        self.linears = nn.ModuleList([])
        # hidden layers: Linear followed by Tanh (instead of ReLU, see above)
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.Tanh())
        # output layer, no activation
        self.linears.append(nn.Linear(layers[-2], layers[-1]))

    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x

    def compute_u_x(self, x):
        # first derivative; create_graph=True so it can be differentiated again
        self.u_x = torch.autograd.functional.jacobian(self, x, create_graph=True)
        self.u_x = torch.squeeze(self.u_x)
        return self.u_x

    def compute_u_xx(self, x):
        # second derivative: jacobian of compute_u_x
        self.u_xx = torch.autograd.functional.jacobian(self.compute_u_x, x)
        self.u_xx = torch.squeeze(self.u_xx)
        return self.u_xx
Then calling compute_u_xx(x) on an instance of PINN, with x.requires_grad set to True, gets me there. How to get rid of the useless dimensions introduced by torch.autograd.functional.jacobian remains to be understood, though...
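A hypothetical usage sketch (the layer sizes and the input grid are my own choices, not from the original post):

layers = [1, 20, 20, 1]            # 1D input, two hidden layers, 1D output
model = PINN(layers)

x = torch.linspace(0.0, 1.0, 5).reshape(-1, 1)
x.requires_grad_(True)             # as described above

u_xx = model.compute_u_xx(x)
print(u_xx.shape)                  # extra cross-sample dimensions from jacobian remain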