I keep getting ValueError: Input 0 of layer sequential is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (None, None).
train_set is a PrefetchDataset with shapes ((None, None), (None,)) and types (tf.float64, tf.float64).
@balaji.ambresh I am getting the same ValueError. The loop you suggested returns the shapes (256, 64, 1) and (256, 1) in my case.
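For reference, this is the kind of shape-inspection loop I mean (a sketch; it assumes train_set is the batched tf.data.Dataset from the notebook):

```python
# Sketch: print the shapes of one batch from the dataset.
# Assumes train_set is the batched tf.data.Dataset from the notebook.
for x_batch, y_batch in train_set.take(1):
    print(x_batch.shape, y_batch.shape)  # prints (256, 64, 1) (256, 1) in my case
```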
What is the relationship between the window size and the input of the model? Why is input_shape sometimes [G.WINDOW_SIZE, 1] and sometimes [None, 1]? (Sorry, I don’t get it from the videos.)
And what do I have to adjust in the model to make it work?
input_shape specifies the shape of a single data point within a batch, i.e. a single row (a batch is made up of multiple rows of data).
The first layer of the model should carry this shape information.
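For example, here is a minimal sketch (the Conv1D/LSTM layers are just illustrative choices, and WINDOW_SIZE = 64 is assumed so that it matches the (256, 64, 1) batches above):

```python
import tensorflow as tf

WINDOW_SIZE = 64  # assumed; matches the (256, 64, 1) batches above

model = tf.keras.models.Sequential([
    # The first layer declares the shape of one example: 64 time steps, 1 feature.
    # "expected min_ndim=3" means the layer wants (batch, time, features),
    # so each example must be 2-D: (time, features).
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, padding="causal",
                           activation="relu", input_shape=[WINDOW_SIZE, 1]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
```

Writing input_shape=[None, 1] instead just tells Keras that the number of time steps may vary from batch to batch; the trailing 1 (the feature axis) still has to be there. Your ((None, None), (None,)) dataset is missing that feature axis, which is why the layer sees ndim=2; you can add it in the pipeline with something like train_set.map(lambda x, y: (tf.expand_dims(x, -1), y)).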
@Marco_Stallmann, here are my thoughts based on my own understanding. I hope they do not contain any misconceptions and do justice to your question.
In a normal neural network architecture, you train with mini-batches. For time-series data, each mini-batch is a set of windows sliced from the series, and the window size is usually chosen to reflect the periodic structure of the data. For instance, if window_size = 1, TensorFlow will use one value to predict the next one. Remember that a time series doesn’t predict a different y variable; it predicts its own future values.
What your loop is returning means: each batch holds 256 windows; each window contains 64 consecutive values (the window size) with 1 feature per time step; and each window is paired with 1 label, the value that immediately follows it (hence the (256, 1) label shape). So each window of 64 values is used to predict the next value, and the weights are updated batch by batch until you end up with a properly trained model.
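As a sketch of where those shapes come from (this assumes the usual tf.data windowing pattern from the course; the function name and defaults are illustrative):

```python
import tensorflow as tf

def windowed_dataset(series, window_size=64, batch_size=256, shuffle_buffer=1000):
    # Add the feature axis up front so each window ends up as (window_size, 1).
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    # Each window holds window_size inputs plus the single value to predict.
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    # Split every window into (inputs, label): 64 values -> the next value.
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    # Batching 256 windows gives the (256, 64, 1) / (256, 1) shapes you saw.
    return ds.batch(batch_size).prefetch(1)
```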
The input to the model is the tf.data.Dataset itself, which acts like a generator, pumping the windowed batches into training.
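In practice that just means passing the dataset straight to fit (a sketch, assuming the windowed_dataset and model above; series_train is a hypothetical 1-D float array of training values):

```python
train_set = windowed_dataset(series_train)  # series_train: hypothetical 1-D series
model.compile(loss=tf.keras.losses.Huber(), optimizer="adam")
model.fit(train_set, epochs=100)
```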