
How to determine the maximum batch size for training a seq2seq TensorFlow RNN model

Updated: 2023-12-02 20:08:04

By default, Tensorflow occupies all GPU memory available. However, there is a way to change this. In my model, I do this:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once

Then you can use this config when you start your session:

with tf.Session(config=config) as sess:
    # ... run training as usual ...

Now the model will only use as much memory as it needs, so you can try different batch sizes and see at which point it runs out of memory.
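The trial-and-error search above can be automated. The sketch below finds the largest passing batch size by doubling until a failure, then binary-searching the gap. The `fits` predicate is a hypothetical stand-in for your own check: in practice it would run one training step at the given batch size and return False when TensorFlow raises `tf.errors.ResourceExhaustedError` (OOM); the pure-Python version here is just an assumption-laden illustration of the search logic.

```python
def find_max_batch_size(fits, start=1, limit=1 << 20):
    """Return the largest batch size bs (start <= bs <= limit) with fits(bs) True.

    Assumes fits is monotonic: True up to some threshold, False beyond it,
    and that fits(start) is True. `fits` is a user-supplied predicate; in a
    TensorFlow setting it might run one sess.run(train_op) at that batch size
    and return False on tf.errors.ResourceExhaustedError.
    """
    # Phase 1: double the batch size until a failure (or the limit).
    bs = start
    while bs < limit and fits(bs * 2):
        bs *= 2
    # Phase 2: binary-search between the last success and the first failure.
    lo, hi = bs, min(bs * 2, limit)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For example, if training fits in memory up to a batch size of 100, `find_max_batch_size(lambda b: b <= 100)` returns 100 after only a handful of trial steps, rather than testing every size in turn.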