Understanding the word embedding layer in Keras


1 - Yes, word unicity is not guaranteed, see the docs:

  • From one_hot: This is a wrapper to the hashing_trick function...
  • From hashing_trick: "Two or more words may be assigned to the same index, due to possible collisions by the hashing function. The probability of a collision is in relation to the dimension of the hashing space and the number of distinct objects."
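
As a quick illustration (a sketch, not from the original answer; the sentence and the exact indices are arbitrary), one_hot hashes each word into a fixed-size space, so different words can land on the same index, especially when that space is small:

from keras.preprocessing.text import one_hot

#an arbitrary sentence hashed into a very small space (5 buckets);
#collisions are likely, and which words collide depends on the hash
print(one_hot('well done good work nice effort', 5))
#possible output: [3, 1, 3, 4, 2, 1]  <- different words sharing an index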

It would be better to use a Tokenizer for this. (See question 4)

It's very important to remember that you should involve all words at once when creating indices. You cannot use a function to create a dictionary with 2 words, then again with 2 more words, and so on: this will create very wrong dictionaries, as the sketch below shows.
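
A tiny sketch of what goes wrong (hypothetical; build_index is just an illustrative helper, not part of the original answer):

def build_index(words, dictionary=None):
    #a helper that only sees the words it is given
    dictionary = {} if dictionary is None else dictionary
    for word in words:
        if word not in dictionary:
            dictionary[word] = len(dictionary) + 1
    return dictionary

print(build_index(['good', 'work']))                   # {'good': 1, 'work': 2}
print(build_index(['well', 'done']))                   # {'well': 1, 'done': 2} - clashes with the first
print(build_index(['good', 'work', 'well', 'done']))   # all words at once: consistent indices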

2 - Embeddings have the size 50 x 8, because that was defined in the embedding layer:

Embedding(vocab_size, 8, input_length=max_length)

  • vocab_size = 50 - this means there are 50 words in the dictionary
  • embedding_size = 8 - this is the true size of the embedding: each word is represented by a vector of 8 numbers.
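
As a quick check (a sketch, assuming the Embedding is the first layer of the model), the weight matrix of that layer has exactly this shape:

#one row per dictionary entry, one column per embedding dimension
embedding_weights = model.layers[0].get_weights()[0]
print(embedding_weights.shape)   # (50, 8) -> (vocab_size, embedding_size)
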
3 - You don't know. They use the same embedding.

The system will use the same embedding (the one for index = 2). This is not healthy for your model at all. You should use another method for creating indices in question 1.

4 - You can create a word dictionary manually, or use the Tokenizer class.

Manually:

Make sure you remove punctuation and make all words lower case.

Just create a dictionary entry for each word you have:

import string

dictionary = dict()
current_key = 1

for doc in docs:
    for word in doc.split(' '):
        #remove surrounding punctuation and lower-case the word
        word = word.lower().strip(string.punctuation)

        if word and word not in dictionary:
            dictionary[word] = current_key
            current_key += 1
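
With that dictionary, the documents can then be encoded as lists of indices, for example (a small sketch, not part of the original answer):

encoded_docs = [[dictionary[word.lower().strip(string.punctuation)]
                 for word in doc.split(' ')
                 if word.strip(string.punctuation)]
                for doc in docs]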

Tokenizer:

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()

#this creates the dictionary
#IMPORTANT: MUST HAVE ALL DATA - including Test data
#IMPORTANT2: This method should be called only once!!!
tokenizer.fit_on_texts(docs)

#this transforms the texts into sequences of indices
encoded_docs2 = tokenizer.texts_to_sequences(docs)

See the output of encoded_docs2:

[[6, 2], [3, 1], [7, 4], [8, 1], [9], [10], [5, 4], [11, 3], [5, 1], [12, 13, 2, 14]]

Check the maximum index:

from numpy import array
from keras.preprocessing.sequence import pad_sequences
padded_docs2 = pad_sequences(encoded_docs2, maxlen=max_length, padding='post')
max_index = array(padded_docs2).reshape((-1,)).max()

So, your vocab_size should be 15 (otherwise you'd have lots of useless - and harmless - embedding rows). Notice that 0 was not used as an index. It will appear in padding!!!
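
In code, that choice would be something like this (a sketch; it assumes Embedding has been imported from keras.layers):

vocab_size = int(max_index) + 1   #14 + 1 = 15, because index 0 is reserved for padding
embedding_layer = Embedding(vocab_size, 8, input_length=max_length)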

Do not "fit" the tokenizer again! Only use texts_to_sequences() or other methods here that are not related to "fitting".

Hint: it might be useful to include end_of_sentence words in your text sometimes.
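
For example (hypothetical; the marker word 'endofsentence' is just an illustration, and the tokenizer is still fit only once, on the marked texts):

#append a marker token to every document before the single fit_on_texts call
docs_with_eos = [doc + ' endofsentence' for doc in docs]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs_with_eos)
encoded_with_eos = tokenizer.texts_to_sequences(docs_with_eos)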

Hint 2: it is a good idea to save your Tokenizer to be used later (since it has a specific dictionary for your data, created with fit_on_texts).

#save the tokenizer's configuration as a JSON string
text_to_save = tokenizer.to_json()
with open('tokenizer.json', 'w') as f:    #any file path works, this one is just an example
    f.write(text_to_save)

#load it back later
from keras.preprocessing.text import tokenizer_from_json
with open('tokenizer.json') as f:
    loaded_text = f.read()
tokenizer = tokenizer_from_json(loaded_text)


5 - Params for embedding are correct.

Dense:

Params for Dense are always based on the preceding layer (the Flatten in this case).

The formula is: previous_output * units + units

This results in: 32 (from the Flatten) * 1 (Dense units) + 1 (Dense bias = units) = 33

Flatten:

It gets all the previous dimensions multiplied: 8 * 4 = 32.
The Embedding outputs length = 4 and embedding_size = 8.
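
As a sanity check, here is a minimal sketch of such a model (assuming vocab_size = 50 and max_length = 4, as in the numbers above); model.summary() should report 400 params for the Embedding (50 * 8), an output of shape (None, 32) after the Flatten, and 33 params for the Dense layer:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

vocab_size = 50   #assumed, as in question 2
max_length = 4    #assumed, from the Embedding output length above

model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))   #50 * 8 = 400 params
model.add(Flatten())                                           #8 * 4 = 32 outputs, no params
model.add(Dense(1))                                            #32 * 1 + 1 = 33 params
model.summary()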

6 - The Embedding layer does not depend on your data or on how you preprocess it.

The Embedding layer simply has the size 50 x 8 because you told it so. (See question 2)

There are, of course, better ways of preprocessing the data - See question 4.

This will lead you to select a better vocab_size (which is the dictionary size).

Get the embedding matrix:

embeddings = model.layers[0].get_weights()[0]

Pick any word index:

embedding_for_word_7 = embeddings[7]

And that's it.

If you're using a tokenizer, get the word index with:

index = tokenizer.texts_to_sequences([['word']])[0][0]
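
Alternatively (not in the original answer), the fitted Tokenizer also keeps a word_index dictionary mapping words to indices, so the same lookup can be done directly:

index = tokenizer.word_index['word']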