How to release all the memory on the GPU in XGBoost?

Updated: 2023-11-18 12:39:10

Well, I don't think there is a way for you to access the loaded DMatrix, because the fit function doesn't return it. You can check the source code here on this GitHub link:

So I think the best way is to wrap it in a Process and run it that way, like this:

import xgboost as xgb
from multiprocessing import Process

def fitting(args):
    # X_train, y_train, params and fit_params are assumed to exist in the enclosing scope
    clf = xgb.XGBClassifier(tree_method='gpu_hist', gpu_id=0, n_gpus=4, random_state=55, n_jobs=-1)
    clf.set_params(**params)
    clf.fit(X_train, y_train, **fit_params)

    # save the model to disk here

fitting_process = Process(target=fitting, args=(args,))
fitting_process.start()
fitting_process.join()

# load the model from the disk here
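For completeness, here is a minimal, self-contained sketch of what that save/load round trip could look like. The file name model.json, the toy dataset from make_classification, and the use of save_model/load_model on the sklearn wrapper are my assumptions, not part of the original answer; tree_method='gpu_hist' and gpu_id follow the snippet above, though newer XGBoost releases prefer device='cuda'.

import xgboost as xgb
from multiprocessing import Process
from sklearn.datasets import make_classification

MODEL_PATH = 'model.json'   # hypothetical path for the serialized model

def fitting():
    # Toy data only to keep the sketch runnable; replace with your real training set.
    X_train, y_train = make_classification(n_samples=1000, n_features=20, random_state=55)
    clf = xgb.XGBClassifier(tree_method='gpu_hist', gpu_id=0, random_state=55, n_jobs=-1)
    clf.fit(X_train, y_train)
    clf.save_model(MODEL_PATH)          # persist the trained booster for the parent process

if __name__ == '__main__':
    fitting_process = Process(target=fitting)
    fitting_process.start()
    fitting_process.join()              # GPU memory is freed when the child process exits

    # Back in the parent process: reload the trained model from disk.
    clf = xgb.XGBClassifier()
    clf.load_model(MODEL_PATH)

Because the CUDA context and the DMatrix only ever exist inside the child process, everything they allocated on the GPU is released when that process terminates; the parent keeps nothing but the serialized model file.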