If you use, as the first layer:
tf.keras.layers.SimpleRNN(40, return_sequences=True, input_shape=[window_size, 1])
Then you can avoid that initial Lambda that does tf.expand_dims, right? (See the sketch below for the two variants I mean.) The method I suggest above is used in week 4 (where a Conv1D is the initial layer). I don't understand why the dimension-expanding Lambda is used in week 3. I think I am missing something?
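For concreteness, here is a rough sketch of the two model openings I'm comparing. This is not copied from the notebooks; `window_size` and the layer sizes are just placeholders, and variant B assumes the windows already carry the trailing feature axis (e.g., added in the dataset pipeline, as in week 4):

```python
import tensorflow as tf

window_size = 20  # placeholder; use whatever the notebook defines

# Variant A (week 3 style): expand the feature axis inside the model
model_a = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[window_size]),
    tf.keras.layers.SimpleRNN(40, return_sequences=True),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
])

# Variant B (what I'm suggesting): declare the feature axis up front,
# assuming the input windows already have shape (window_size, 1)
model_b = tf.keras.models.Sequential([
    tf.keras.layers.SimpleRNN(40, return_sequences=True,
                              input_shape=[window_size, 1]),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
])
```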
Input to recurrent layers has the form (batch size, num timesteps, num features per timestep). It's good to follow this convention even though TensorFlow might automatically add a 1 to the last dimension.
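As a quick illustration of that convention (the numbers here are arbitrary), expanding the last axis of a batch of univariate windows produces exactly the 3-D layout the recurrent layer expects:

```python
import tensorflow as tf

# A batch of 32 univariate windows, 20 timesteps each: shape (32, 20)
windows = tf.random.normal([32, 20])

# Adding a trailing feature axis gives (batch, timesteps, features) = (32, 20, 1)
expanded = tf.expand_dims(windows, axis=-1)
print(expanded.shape)  # (32, 20, 1)
```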
The value of keeping the shape information explicit becomes apparent when there is more than one feature per timestep. So having that Lambda layer is fine (although it might be unnecessary for this problem).
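To illustrate the multi-feature case (a hypothetical multivariate series, not from the course notebooks): the windows are already 3-D, so there is nothing to expand, and you simply declare the feature count in the input shape.

```python
import tensorflow as tf

window_size = 20   # placeholder
num_features = 3   # hypothetical multivariate series

# With more than one feature per timestep, each window already has shape
# (window_size, num_features), so no expand_dims Lambda is needed.
model = tf.keras.models.Sequential([
    tf.keras.layers.SimpleRNN(40, return_sequences=True,
                              input_shape=[window_size, num_features]),
    tf.keras.layers.SimpleRNN(40),
    tf.keras.layers.Dense(1),
])
model.summary()
```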