Visualizing the output of a convolutional layer in TensorFlow



I'm trying to visualize the output of a convolutional layer in TensorFlow using the function tf.image_summary. I'm already using it successfully in other instances (e.g. visualizing the input image), but I'm having some difficulty reshaping this output correctly. I have the following conv layer:

img_size = 256
x_image = tf.reshape(x, [-1, img_size, img_size, 1], "sketch_image")

W_conv1 = weight_variable([5, 5, 1, 32])  # 5x5 kernels, 1 input channel, 32 feature maps
b_conv1 = bias_variable([32])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
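
(weight_variable, bias_variable and conv2d aren't defined in the snippet; they're the standard helpers from the TensorFlow MNIST tutorial, along the lines of:)

def weight_variable(shape):
    # small random init for conv weights
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    # stride 1 with SAME padding keeps the spatial size at img_size
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')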

So the output of h_conv1 would have the shape [-1, img_size, img_size, 32]. Just using tf.image_summary("first_conv", tf.reshape(h_conv1, [-1, img_size, img_size, 1])) doesn't account for the 32 different kernels, so I'm basically slicing through different feature maps here.

How can I reshape them correctly? Or is there another helper function I could use for including this output in the summary?

I don't know of a helper function, but if you want to see all the filters you can pack them into one image with some fancy use of tf.transpose.

So if you have a tensor that's images x ix x iy x channels

>>> V = tf.Variable(...)  # e.g. the output of your conv layer
>>> print V.get_shape()

TensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)])

So in this example ix = 256, iy = 256, channels = 32.
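
If you'd rather not hard-code those, you can read them off the tensor's static shape (a small sketch; with an unknown batch size the first entry comes back as None):

shape = V.get_shape().as_list()  # e.g. [None, 256, 256, 32]
ix, iy, channels = shape[1], shape[2], shape[3]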

First slice off one image, and remove the image dimension

V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...]
V = tf.reshape(V,(iy,ix,channels))

Next add a couple of pixels of zero padding around the image

ix += 4
iy += 4
V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)

Then reshape so that instead of 32 channels you have 4x8 channels; let's call them cy=4 and cx=8.

V = tf.reshape(V,(iy,ix,cy,cx)) 

Now the tricky part. tf seems to return results in C-order, numpy's default.

The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy) before listing the channels of the second pixel (incrementing ix), going across a row of pixels (ix) before incrementing to the next row (iy).

We want the order that would lay out the images in a grid. So you go across a row of the image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So:

V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix

Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.

newtensor = np.einsum('yxYX->YyXx',oldtensor)
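
To check that the einsum spec matches the transpose above, here's a quick numpy sanity check (the small dimensions are arbitrary, assuming the axes are ordered (iy, ix, cy, cx)):

import numpy as np

old = np.arange(3 * 4 * 2 * 5).reshape(3, 4, 2, 5)  # (iy, ix, cy, cx)
a = old.transpose(2, 0, 3, 1)                       # same permutation as tf.transpose(V, (2,0,3,1))
b = np.einsum('yxYX->YyXx', old)                    # y=iy, x=ix, Y=cy, X=cx
assert a.shape == b.shape == (2, 3, 5, 4)           # (cy, iy, cx, ix)
assert np.array_equal(a, b)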

Anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor:

# image_summary needs 4d input
V = tf.reshape(V,(1,cy*iy,cx*ix,1))

Try tf.image_summary on that; you should get a grid of little images.
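
Putting the whole thing together, here's a minimal sketch of the above as one function (the name feature_map_grid and the hard-coded grid shape are mine, and this assumes the same old-style API used throughout this answer):

def feature_map_grid(V, iy, ix, cy, cx, pad=2):
    # V: [batch, iy, ix, cy*cx] activations -> [1, cy*(iy+2*pad), cx*(ix+2*pad), 1] image
    V = tf.slice(V, (0, 0, 0, 0), (1, -1, -1, -1))          # keep only the first image in the batch
    V = tf.reshape(V, (iy, ix, cy * cx))                    # drop the batch dimension
    iy += 2 * pad
    ix += 2 * pad
    V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)   # zero padding gives the tiles visible borders
    V = tf.reshape(V, (iy, ix, cy, cx))                     # split the channels into a cy x cx grid
    V = tf.transpose(V, (2, 0, 3, 1))                       # (cy, iy, cx, ix)
    return tf.reshape(V, (1, cy * iy, cx * ix, 1))          # image_summary needs 4d input

# e.g. for the h_conv1 from the question:
tf.image_summary("first_conv_grid", feature_map_grid(h_conv1, 256, 256, 4, 8))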

[Image: the grid of little feature-map images produced by following all the steps here.]