
Training a multi-output Keras model with a joint loss function

Updated: 2023-12-02 14:30:28

You had two issues in your code:

The first is that the K.dot operation inside the Lambda needed to be K.batch_dot.

I used:

from keras import backend as K
from keras.layers import Lambda

def output_mult(x):
    # Rearrange axes so the two operands line up in their last two dimensions.
    a = K.permute_dimensions(x, (0, 2, 1, 3))
    b = K.permute_dimensions(x, (0, 2, 3, 1))
    # batch_dot multiplies per sample; K.dot would not respect the batch axis.
    return K.batch_dot(a, b)


out2 = Lambda(output_mult)(out2)
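As a shape check (a minimal numpy sketch with hypothetical dimensions, not the Keras code itself), np.matmul mirrors what the permute-then-batch_dot sequence computes per sample:

```python
import numpy as np

# Hypothetical input: batch of 2 samples, x has shape (B, d1, d2, d3).
x = np.random.rand(2, 3, 4, 5)

a = np.transpose(x, (0, 2, 1, 3))  # (B, d2, d1, d3) -> (2, 4, 3, 5)
b = np.transpose(x, (0, 2, 3, 1))  # (B, d2, d3, d1) -> (2, 4, 5, 3)

# matmul contracts a's last axis with b's second-to-last axis while
# batching over the leading axes -- the per-sample product batch_dot computes.
out = np.matmul(a, b)
print(out.shape)  # (2, 4, 3, 3)
```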

It helps to actually let Keras compute the output dimensions; it is an easy way to check the code. To debug it, I first replaced the custom loss with an existing loss (mse), which made the problem easy to detect.

The second issue is that a custom loss function takes a single target/output pair rather than a list. The arguments to a loss function are not a list of tensors, as you assumed both initially and in your edit. So I defined your loss function as

import tensorflow as tf

def custom_loss(model, output_1):
    """ This loss function is called for output_2.
        It needs to fetch model.targets[0] and the output_1 predictions in
        order to calculate fcn_loss_1.
        Note: model.targets and tf.matrix_band_part are TF 1.x / standalone
        Keras APIs (tf.linalg.band_part in TF 2.x).
    """
    def my_loss(y_true, y_pred):
        fcn_loss_1 = tf.nn.softmax_cross_entropy_with_logits(labels=model.targets[0], logits=output_1)
        fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
        # Keep only the strictly upper-triangular part:
        # upper triangle (incl. diagonal) minus the diagonal.
        fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)
        return tf.reduce_mean(fcn_loss_2)

    return my_loss

and used it as

output_layer_1 = [layer for layer in model.layers if layer.name == 'output_1'][0]
losses = {'output_1': 'categorical_crossentropy', 'output_2': custom_loss(model, output_layer_1.output)}
model.compile(loss=losses, optimizer='adam', loss_weights=[1.0, 2.0])


I initially misread the custom loss for output_2 as requiring the value of fcn_loss_1. This doesn't seem to be the case, and you can just write it as:

import tensorflow as tf

def custom_loss():
    def my_loss(y_true, y_pred):
        fcn_loss_2 = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
        # Keep only the strictly upper-triangular part of the loss matrix.
        fcn_loss_2 = tf.matrix_band_part(fcn_loss_2, 0, -1) - tf.matrix_band_part(fcn_loss_2, 0, 0)
        return tf.reduce_mean(fcn_loss_2)

    return my_loss
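As an aside, here is a numpy sketch (assuming the loss tensor is a square matrix) of what the band_part subtraction does: it keeps only the strictly upper-triangular entries, which np.triu with k=1 reproduces:

```python
import numpy as np

x = np.arange(1.0, 10.0).reshape(3, 3)

# tf.matrix_band_part(x, 0, -1) keeps the upper triangle (incl. diagonal);
# subtracting tf.matrix_band_part(x, 0, 0) then removes the diagonal.
strictly_upper = np.triu(x) - np.diag(np.diag(x))

# Same result as asking numpy for the strictly upper triangle directly.
assert np.array_equal(strictly_upper, np.triu(x, k=1))
```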

and use it as:

losses = {'output_1': 'categorical_crossentropy', 'output_2': custom_loss()}
model.compile(loss=losses, optimizer='adam', loss_weights=[1.0, 2.0])

I'm assuming that the loss for output_1 is categorical_crossentropy. But even if you need to change it, the simplest way is to have two independent loss functions. Of course, you could also define one loss function that returns 0 and one that returns the full cost... but it is cleaner to split 'loss(output1) + 2 * loss(output2)' into two losses plus the weights, imho.
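To make the weighting explicit (a plain-Python sketch; the l1 and l2 values are made up for illustration), loss_weights=[1.0, 2.0] tells Keras to minimize the weighted sum of the per-output losses:

```python
# Hypothetical per-output loss values, for illustration only.
l1 = 0.7   # categorical_crossentropy on output_1
l2 = 0.25  # custom loss on output_2

loss_weights = [1.0, 2.0]
# Keras optimizes: loss(output1) + 2 * loss(output2)
total = loss_weights[0] * l1 + loss_weights[1] * l2
print(total)  # 1.2
```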

Full notebook: https://colab.research.google.com/drive/1NG3uIiesg-VIt-W9254Sea2XXUYPoVH5