You can use a callback for this.

Using the Keras MNIST CNN example (not copying the whole code here), with the following changes/additions:
from keras.callbacks import Callback

class TestCallback(Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_batch_end(self, batch, logs={}):
        # Evaluate on the held-out test data after every single batch
        x, y = self.test_data
        loss, acc = self.model.evaluate(x, y, verbose=0)
        print('\nTesting loss: {}, acc: {}\n'.format(loss, acc))
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=1,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[TestCallback((x_test, y_test))])
Evaluating the test/validation set at the end of each batch, we get this:
Train on 60000 samples, validate on 10000 samples
Epoch 1/1
Testing loss: 0.0672039743446745, acc: 0.9781
128/60000 [..............................] - ETA: 7484s - loss: 0.1450 - acc: 0.9531
/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.416976). Check your callbacks.
% delta_t_median)
Testing loss: 0.06644540682602673, acc: 0.9781
256/60000 [..............................] - ETA: 7476s - loss: 0.1187 - acc: 0.9570
/var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:120: UserWarning: Method on_batch_end() is slow compared to the batch update (15.450395). Check your callbacks.
% delta_t_median)
Testing loss: 0.06575664376271889, acc: 0.9782
However, as you will probably see for yourself, this has the severe drawback of slowing down the code significantly (and duly producing some relevant warnings, as shown above). As a compromise, if you are OK with getting only the training performance at the end of each batch, you could use a slightly different callback:
class TestCallback2(Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_batch_end(self, batch, logs={}):
        # Just a dummy print; it pushes the progress bar onto a new line,
        # so the running training metrics are visible after every batch.
        print()
The results now (replacing the callback with callbacks=[TestCallback2((x_test, y_test))] in model.fit()) come much faster, but give only the training metrics at the end of each batch:
Train on 60000 samples, validate on 10000 samples
Epoch 1/1
128/60000 [..............................] - ETA: 346s - loss: 0.8503 - acc: 0.7188
256/60000 [..............................] - ETA: 355s - loss: 0.8496 - acc: 0.7109
384/60000 [..............................] - ETA: 339s - loss: 0.7718 - acc: 0.7396
[...]
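If you still want test-set metrics but cannot afford the per-batch cost, a middle ground is to evaluate only every N batches. Here is a minimal sketch of that idea (the class name PeriodicTestCallback and the every_n_batches argument are my own, for illustration; they are not part of the original example):

from keras.callbacks import Callback

class PeriodicTestCallback(Callback):  # hypothetical helper, for illustration
    def __init__(self, test_data, every_n_batches=100):
        self.test_data = test_data
        self.every_n_batches = every_n_batches

    def on_batch_end(self, batch, logs={}):
        # Evaluate only on every N-th batch, keeping the overhead bounded
        if (batch + 1) % self.every_n_batches == 0:
            x, y = self.test_data
            loss, acc = self.model.evaluate(x, y, verbose=0)
            print('\nTesting loss: {}, acc: {}\n'.format(loss, acc))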
UPDATE
All the above may be fine, but the resulting losses & accuracies are not stored anywhere, and hence they cannot be plotted; so, here is another callback solution that actually stores the metrics on the training set:
from keras.callbacks import Callback

class Histories(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.accuracies = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        self.accuracies.append(logs.get('acc'))
histories = Histories()

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=1,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[histories])
which results in the metrics at the end of each batch during training being stored in histories.losses and histories.accuracies, respectively - here are the first 5 entries of each:
histories.losses[:5]
# [2.3115866, 2.3008101, 2.2479887, 2.1895032, 2.1491694]
histories.accuracies[:5]
# [0.0703125, 0.1484375, 0.1875, 0.296875, 0.359375]
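Since the whole point of storing the metrics is to plot them, here is a minimal plotting sketch (assuming matplotlib is installed; this part is not from the original example):

import matplotlib.pyplot as plt

# Plot the per-batch training loss and accuracy collected by Histories
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(histories.losses)
ax1.set_xlabel('Batch')
ax1.set_ylabel('Training loss')
ax2.plot(histories.accuracies)
ax2.set_xlabel('Batch')
ax2.set_ylabel('Training accuracy')
fig.tight_layout()
plt.show()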