且构网


Estimating the resources needed to serve a Keras model

Updated: 2023-12-02 19:37:52

I think the most realistic estimate comes from running the model and seeing how many resources it actually takes. top or htop will show you the CPU and RAM load, but GPU memory is a bit more complicated, since TensorFlow (the most popular option for the Keras backend) reserves all available GPU memory by default for performance reasons.
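As a rough sketch of the "run it and measure" approach for RAM, the standard library's `resource` module can report the peak resident set size of the current process after the model has run (this measures the whole process, not the model alone, so it is an upper bound):

```python
import resource

# Peak resident set size (peak RAM) of the current process so far.
# ru_maxrss is reported in kilobytes on Linux and in bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak}")
```

Calling this once after loading the model and once after a few inference batches gives a crude estimate of the model's memory footprint.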

You have to tell TensorFlow not to take all available memory, but to allocate it on demand instead. Here is how to do this in Keras:

import tensorflow as tf
import keras.backend as K

# Note: ConfigProto/Session are TensorFlow 1.x APIs; under TensorFlow 2.x
# they are available as tf.compat.v1.ConfigProto / tf.compat.v1.Session.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2  # initially allocate only 20% of GPU memory
config.gpu_options.allow_growth = True  # dynamically grow the memory used on the GPU
config.log_device_placement = True  # log device placement (which device each op runs on);
                                    # nothing gets printed in Jupyter, only when run standalone
sess = tf.Session(config=config)
K.set_session(sess)  # set this TensorFlow session as the default session for Keras
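If you are on TensorFlow 2.x (where sessions are gone and Keras is bundled as `tf.keras`), the equivalent on-demand allocation is configured per GPU before any tensors are created; a minimal sketch:

```python
import tensorflow as tf

# TensorFlow 2.x: enable on-demand GPU memory allocation.
# Must be called before any GPU memory is allocated.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```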

https://github.com/keras-team/keras/issues/4161#issuecomment-366031228

Then, run watch nvidia-smi and see how much GPU memory is actually taken.
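For reference, the monitoring commands might look like the following (the `--query-gpu` variant logs only the used-memory figure, which is handy for recording a run; requires an NVIDIA GPU and driver):

```shell
# Refresh the full nvidia-smi view every second;
# per-process GPU memory appears in the "Processes" table.
watch -n 1 nvidia-smi

# Or log just the used GPU memory, once per second.
nvidia-smi --query-gpu=memory.used --format=csv -l 1
```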