
TensorFlow LSTM regularisation

Updated: 2023-12-02 19:03:10

TL;DR: Save all the parameters in a list, then add their L^n norm to the objective function before computing the gradients for optimisation.

1) In the function where you define the inference:

net = tf.trainable_variables()  # collect every trainable parameter in a list
return output, net              # 'output' stands in for whatever your inference function originally returns

2) Add the L^n norm to the cost and compute the gradients of that cost:

weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net])  # L2 penalty
cost = original_cost + weight_reg  # original_cost: your objective without the regulariser
param_gradients = tf.gradients(cost, net)
optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(zip(param_gradients, net))
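To see what the penalty actually does to the update: `tf.nn.l2_loss(w)` computes `sum(w**2) / 2`, so its gradient with respect to `w` is just `w`, and with the 0.001 coefficient each parameter's gradient gets `0.001 * w` added, which is weight decay. A quick NumPy check of both facts (the values here are illustrative, not from the snippet above):

```python
import numpy as np

w = np.array([3.0, -4.0])

# tf.nn.l2_loss(w) is sum(w**2) / 2
l2 = np.sum(w ** 2) / 2.0      # (9 + 16) / 2 = 12.5

# gradient of 0.001 * l2_loss(w) w.r.t. w is 0.001 * w
reg_grad = 0.001 * w

print(l2)        # 12.5
print(reg_grad)  # [ 0.003 -0.004]
```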

3) Run the optimisation op through the session whenever needed:

_ = sess.run(optimiser, feed_dict={input_var: data})
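The three steps above amount to: collect the parameters, add the scaled L2 norm to the cost, and descend the gradient of the regularised cost. A minimal framework-free NumPy sketch of the same recipe (plain gradient descent in place of Adam; the data and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=32)

w = np.zeros(3)          # step 1: the "trainable variables"
lam, lr = 0.001, 0.1     # regularisation coefficient, learning rate

for _ in range(200):
    pred = X @ w
    # step 2: cost = MSE + lam * l2_loss(w), where l2_loss(w) = sum(w**2)/2,
    # so the penalty contributes lam * w to the gradient
    grad = 2.0 * X.T @ (pred - y) / len(y) + lam * w
    w -= lr * grad       # step 3: apply the gradient

print(w)  # close to [1.0, -2.0, 0.5], slightly shrunk by the penalty
```

The penalty pulls every weight toward zero a little on each step, which is the regularising effect the snippets above add to the LSTM's parameters.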