What is the architecture behind the Keras LSTM layer implementation?


I was correct! The architecture is 10 neurons, each representing a time-step. Each neuron is fed a 64-length vector representing the 64 features (the input_dim).
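
As a minimal sketch of that input (the batch size of 4 and the random data are just assumptions for illustration), a batch of such sequences looks like this:

```python
import numpy as np

# 4 sequences, each with 10 time-steps, each time-step a 64-feature vector (input_dim)
x = np.random.rand(4, 10, 64).astype("float32")
print(x.shape)  # (4, 10, 64) -> (batch, time-steps, features)
```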

The 32 is the number of hidden units, i.e. the length of the hidden state vector. It determines how large the hidden state is, and it is also the output dimension, since the LSTM outputs its hidden state at the last time-step.
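
A quick way to see this (assuming the tf.keras API and the same 10×64 input shape as above) is to check the layer's output shape:

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(4, 10, 64).astype("float32")  # (batch, time-steps, input_dim)
out = keras.layers.LSTM(32)(x)                   # 32 hidden units, last hidden state only (default)
print(out.shape)                                 # (4, 32): one 32-dim hidden state per sequence
```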

Lastly, the 32-dimensional output vector from the last time-step is fed to a Dense layer of 2 neurons, which basically means the full 32-length vector is plugged into each of the two neurons.
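
Putting the pieces together, here is a hedged sketch of the full architecture described above (assuming the tf.keras API; activations and other details of the original model are not shown):

```python
from tensorflow import keras

# 10 time-steps of 64 features -> LSTM with 32 hidden units -> Dense layer with 2 neurons
model = keras.Sequential([
    keras.Input(shape=(10, 64)),   # (time-steps, input_dim)
    keras.layers.LSTM(32),         # outputs the last hidden state: shape (None, 32)
    keras.layers.Dense(2),         # each of the 2 neurons sees the full 32-length vector
])
model.summary()
```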

More reading, with somewhat helpful answers: