Keras custom data generator giving dimension errors with multiple inputs and multiple outputs (functional API model)



I have written a generator function with Keras. Before returning X, y from __getitem__ I double-checked the shapes of the X's and y's and they are fine, but the generator is giving dimension-mismatch errors and warnings.

(Colab Code to reproduce: https://colab.research.google.com/drive/1bSJm44MMDCWDU8IrG2GXKBvXNHCuY70G?usp=sharing)

My training and validation generators are pretty much the same as:

import numpy as np
from tensorflow.keras.utils import Sequence

class ValidGenerator(Sequence):
    def __init__(self, df, batch_size=64, num_classes=None, shuffle=True):
        self.batch_size = batch_size
        self.df = df
        self.indices = self.df.index.tolist()
        self.num_classes = num_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        return int(len(self.indices) // self.batch_size)

    def __getitem__(self, index):
        # slice the (possibly shuffled) positions for this batch, then map back to df indices
        index = self.index[index * self.batch_size:(index + 1) * self.batch_size]
        batch = [self.indices[k] for k in index]

        X, y = self.__get_data(batch)
        return X, y

    def on_epoch_end(self):
        self.index = np.arange(len(self.indices))
        if self.shuffle == True:
            np.random.shuffle(self.index)

    def __get_data(self, batch):
        # some logic is written here
        # that prepares 3 X features and 3 Y outputs
        X = [input_array_1, input_array_2, input_array_3]
        y = [out_1, out_2, out_3]
        # print(len(X))

        return X, y

I am returning a tuple of X, y, where X has 3 input features and y has 3 outputs, so the shape of X is (3, 32, 10, 1).

I am using the functional API to build the model (I have things like concatenation and multiple inputs/outputs, which isn't possible with the Sequential API), with the following structure.
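Purely as a rough illustration and not the actual architecture (the layer types, sizes, and input shapes below are assumptions), a comparable multi-input/multi-output functional-API model could look like this:

import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sketch: three (timesteps=10, features=1) inputs, an LSTM encoder per input,
# a concatenation, and three separate dense output heads.
inputs = [layers.Input(shape=(10, 1), name=f"input_{i+1}") for i in range(3)]
encoded = [layers.LSTM(32)(inp) for inp in inputs]
merged = layers.Concatenate()(encoded)
outputs = [layers.Dense(1, name=f"output_{i+1}")(merged) for i in range(3)]

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="mse")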

When I try to fit the model with the generator using the following code:

train_datagen = TrainGenerator(df=train_df,  batch_size=32, num_classes=None, shuffle=True)
valid_datagen = ValidGenerator(df=train_df,  batch_size=32, num_classes=None, shuffle=True)
model.fit(train_datagen, epochs=2,verbose=1,callbacks=[checkpoint,es])
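The checkpoint and es callbacks are not defined in the snippet above; a minimal sketch of typical definitions (the file name, monitored metric, and patience here are assumptions):

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Hypothetical callback definitions; filepath, monitor and patience are assumptions
checkpoint = ModelCheckpoint("model.h5", monitor="loss", save_best_only=True)
es = EarlyStopping(monitor="loss", patience=5, restore_best_weights=True)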

I get these warnings and errors, which don't go away:

Epoch 1/2
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_1:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_2:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
WARNING:tensorflow:Model was constructed with shape (None, 10) for input Tensor("input_3:0", shape=(None, 10), dtype=float32), but it was called on an input with incompatible shape (None, None, None).
...
... call
    return super(RNN, self).call(inputs, **kwargs)
/home/eduardo/.virtualenvs/kgpu3/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:975 call
    input_spec.assert_input_compatibility(self.input_spec, inputs,
/home/eduardo/.virtualenvs/kgpu3/lib/python3.8/site-packages/tensorflow/python/keras/engine/input_spec.py:176 assert_input_compatibility
    raise ValueError('Input ' + str(input_index) + ' of layer ' +

ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, None, None, 88]

I have rechecked the whole code and it isn't possible to have an input of (None, None, None) like in the warning or the error; my input dimension is (3, 32, 10, 1).
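A quick way to confirm what the Sequence actually yields is to pull one batch by index and print the shapes; a small sketch, assuming the train_datagen defined above:

import numpy as np

# Hypothetical sanity check on a single batch from the Sequence
X, y = train_datagen[0]
print(len(X), [np.asarray(a).shape for a in X])  # expecting 3 arrays of shape (32, 10, 1)
print(len(y), [np.asarray(a).shape for a in y])  # expecting 3 output arrays with batch size 32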

Update

I have also tried writing a plain Python generator function and got exactly the same error.

My generator function:

def generate_arrays_from_file(batchsize, df):
    #print(bat)
    inputs = []
    targets = []
    batchcount = 0
    while True:
        df3 = df.loc[np.arange(batchcount*batchsize, (batchcount*batchsize)+batchsize)]
        # Some pre processing
        X = [input_array_1, input_array_2, input_array_3]
        y = [out_1, out_2, out_3]
        yield X, y
        batchcount = batchcount + 1
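As a hedged usage sketch for this plain generator (the steps_per_epoch value is an assumption derived from the batch size of 32 used above; a plain generator has no __len__, so the step count must be given explicitly):

# Hypothetical usage of the plain Python generator above
train_gen = generate_arrays_from_file(batchsize=32, df=train_df)
model.fit(train_gen,
          steps_per_epoch=len(train_df) // 32,
          epochs=2,
          verbose=1)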

It seems like something is wrong internally with Keras (maybe due to the fact that I am using the functional API).

Update 2

I also tried outputting tuples:

X = (input1_X, input2_X, input3_X)
y = (output1_y, output2_y, output3_y)

and also named inputs/outputs, but it doesn't work:

        X =  {"input_1": input1_X, "input_2": input2_X,"input_3": input3_X}
        y = {"output_1": output1_y, "output_2": output2_y,"output_3": output3_y}

Note about problem formulation:

Changing the individual X features to shape (32, 10) instead of (32, 10, 1) might help to get rid of this error, but that is not what I want; it changes my problem (I would no longer have 10 time steps with one feature each).
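For context on why the trailing 1 matters: a recurrent layer such as LSTM expects input of shape (batch, timesteps, features), so (32, 10, 1) is exactly 32 samples of 10 time steps with one feature each. A minimal sketch with random data:

import numpy as np
import tensorflow as tf

batch = np.random.rand(32, 10, 1).astype("float32")  # 32 samples, 10 time steps, 1 feature
lstm = tf.keras.layers.LSTM(32)
print(lstm(batch).shape)  # (32, 32): batch dimension preserved, 32 LSTM units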

I had a similar issue with a custom generator that just had to pass a numpy array of size 10 as input and one single output.

To solve this problem I had to transform the shape of the 2 vectors passed to the neural network like this:

import numpy as np
import tensorflow as tf

def slides_generator(integer_list):

    # stuff happens

    x = np_ts[np_index:np_index+10]  # numpy array
    y = np_ts[np_index+10]           # numpy array

    yield tf.convert_to_tensor(x)[np.newaxis, ...], tf.convert_to_tensor(y)[np.newaxis, ...]

doge_gen = slides_generator(integer_list)  # next(doge_gen)

Basically you need to pass the 2 arrays with shape (None, size), so in my case they were (None, 10) and (None, 1), and to achieve this I just passed 2 reshaped tensors.

You need the None dimension as the batch size.
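As a small illustration of what the [np.newaxis, ...] indexing does (a sketch with made-up data, assuming a window of 10 values):

import numpy as np
import tensorflow as tf

x = np.arange(10, dtype="float32")              # shape (10,)
x_b = tf.convert_to_tensor(x)[np.newaxis, ...]  # shape (1, 10): a leading batch axis is added
print(x.shape, x_b.shape)                       # (10,) (1, 10)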