Keras LSTM input dimension setup

Updated: 2023-12-02 08:45:10

For the sake of completeness, here's what's happened.

First up, LSTM, like all layers in Keras, accepts two arguments: input_shape and batch_input_shape. The difference is that, by convention, input_shape does not contain the batch size, while batch_input_shape is the full input shape, including the batch size.
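
For instance, here is a minimal sketch of the two equivalent declarations; the 32 units are an assumption for illustration, not something taken from the question:

from keras.models import Sequential
from keras.layers import LSTM

# Both layers describe per-sample inputs of shape (20, 1);
# the second additionally pins the batch size to 10000.
m1 = Sequential()
m1.add(LSTM(32, input_shape=(20, 1)))
m2 = Sequential()
m2.add(LSTM(32, batch_input_shape=(10000, 20, 1)))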

Hence, the specification input_shape=(None, 20, 64) tells Keras to expect a 4-dimensional input, which is not what you want. The correct value would have been just (20,).
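
A quick way to see this, assuming a hypothetical layer with 32 units: Keras prepends the batch dimension to whatever per-sample shape you pass in input_shape, so the sketch below fails with an "expected ndim=3" error.

from keras.models import Sequential
from keras.layers import LSTM

model = Sequential()
# Per-sample shape (None, 20, 64) plus the implicit batch dimension is 4-D,
# so Keras raises a ValueError: expected ndim=3, found ndim=4.
model.add(LSTM(32, input_shape=(None, 20, 64)))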

But that's not all. The LSTM layer is a recurrent layer, hence it expects a 3-dimensional input (batch_size, timesteps, input_dim). That's why the correct specification is input_shape=(20, 1) or batch_input_shape=(10000, 20, 1). Plus, your training array should also be reshaped to denote that it has 20 time steps and 1 input feature per step.

Hence, the solution:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

X_train = np.expand_dims(X_train, 2)  # reshape (10000, 20) -> (10000, 20, 1)
...
model = Sequential()
model.add(LSTM(..., input_shape=(20, 1)))  # 20 timesteps, 1 feature per step
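
For completeness, here is a self-contained sketch of the whole fix on random stand-in data; the 32 units, the Dense head, and the optimizer are illustrative assumptions, not part of the original question:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X_train = np.random.rand(10000, 20)   # stand-in for the real 2-D training data
y_train = np.random.rand(10000, 1)    # stand-in targets
X_train = np.expand_dims(X_train, 2)  # (10000, 20) -> (10000, 20, 1)

model = Sequential()
model.add(LSTM(32, input_shape=(20, 1)))  # 32 units is an arbitrary choice
model.add(Dense(1))                       # illustrative regression head
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train, epochs=1, batch_size=32)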