Keras: a clean implementation of multiple outputs and a custom loss function?

Updated: 2023-12-02 15:18:28

In TensorFlow I would simply define two tensors, logits = x and output = sigmoid(x), so that the logits can be used in any custom loss function and the output can be used for plotting or other applications.
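For reference, here is a minimal sketch of that low-level TensorFlow pattern; x and y_true are placeholder names for the network's final pre-activation tensor and the targets, and are not part of the original answer:

import tensorflow as tf

logits = x                # pre-activation tensor, fed to the loss
output = tf.sigmoid(x)    # activated tensor, used for plotting / prediction

# defining the loss on the logits keeps the computation numerically stable
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits))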

In Keras you do exactly the same:

from keras.layers import Conv2D, Activation
from keras.models import Model

# dec_in is the decoder's input tensor defined earlier in the model;
# resize_images is a helper (e.g. a Lambda-wrapped upsampling) that brings the map to 28x28
x = Conv2D(1, (3, 3), activation="linear")(x)
dec_out = resize_images(x, (28, 28))  # output tensor used in training, for the loss function

training_model = Model(dec_in, dec_out, name="decoder")

...

# for inference, stack a sigmoid on top of the same graph and wrap it in a second Model;
# both models share the underlying layers and weights
sigmoid = Activation('sigmoid')(dec_out)
inference_model = Model(dec_in, sigmoid)

# train on the raw output, predict with the sigmoid output
training_model.fit(x=X_train, y=X_train, batch_size=256, epochs=10)

prediction = inference_model.predict(some_input)

In the Keras world your life becomes much easier if you have a single output tensor; then the standard Keras features just work. For two outputs/losses, one possible workaround is to concatenate them before the output and then split them again inside the loss function; a minimal sketch of this pattern follows below. A good example is this SSD implementation, which has classification and localization losses: https://github.com/pierluigiferrari/ssd_keras/blob/master/keras_loss_function/keras_ssd_loss.py#L133
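A minimal sketch of that concatenate-then-split idea, assuming two hypothetical head tensors cls_out and loc_out and a classification width n_cls (none of these names come from the SSD code):

from keras import backend as K
from keras.layers import concatenate

# inside the model: pack both heads into a single output tensor
combined_out = concatenate([cls_out, loc_out], axis=-1)

def combined_loss(y_true, y_pred):
    # the targets must be packed in the same layout as the prediction;
    # slice the packed tensors back into their two parts
    cls_true, loc_true = y_true[:, :n_cls], y_true[:, n_cls:]
    cls_pred, loc_pred = y_pred[:, :n_cls], y_pred[:, n_cls:]
    cls_loss = K.mean(K.binary_crossentropy(cls_true, cls_pred))
    loc_loss = K.mean(K.abs(loc_true - loc_pred))  # e.g. an L1 localization term
    return cls_loss + loc_loss

# compile the (hypothetical) model whose single output is combined_out
model.compile(optimizer="adam", loss=combined_loss)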

In general, I do not understand those complaints. Understandably, a new framework causes frustration at first, but Keras is great because it is simple when you need standard things and flexible when you need to go beyond them. The number of complex model implementations in the Keras model zoo is good evidence of that, and by reading that code you can learn various patterns for constructing models in Keras.