When passing data to a Sequential model whose input_shape has 1 as its last dimension, you can provide the input in squeezed form and the model will internally expand that dimension.
For instance, if input_shape=[window_size, 1] and the model receives data of shape [batch_size, window_size], then because the first layer of the model is an LSTM, TensorFlow will automatically expand the input to [batch_size, window_size, 1]. The same logic applies when Conv1D is the first layer of the model.
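As a rough sketch of the shapes involved (window_size, batch_size, and the layer sizes below are placeholders, not the assignment's values), the LSTM ultimately consumes a 3D tensor [batch_size, timesteps, features]; if you prefer not to rely on any implicit reshaping, you can also make the expansion explicit:

```python
import numpy as np
import tensorflow as tf

window_size, batch_size = 20, 32

# "Squeezed" 2D input: [batch_size, window_size]
x = np.random.rand(batch_size, window_size).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=[window_size, 1]),
    tf.keras.layers.Dense(1),
])

# The LSTM consumes a 3D tensor [batch_size, window_size, 1];
# expanding the last dimension explicitly avoids depending on
# automatic expansion behaviour.
y = model(x[..., np.newaxis])
print(y.shape)  # (32, 1)
```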
[window_size, 1] is a list and (window_size, 1) is a tuple. As far as specifying the input shape is concerned, it makes no difference.
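For example (the layer size and window_size are arbitrary), these two lines define the same layer:

```python
import tensorflow as tf

window_size = 20  # placeholder

# Both forms specify the same input shape:
lstm_from_list = tf.keras.layers.LSTM(32, input_shape=[window_size, 1])
lstm_from_tuple = tf.keras.layers.LSTM(32, input_shape=(window_size, 1))
```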
No, these two layers are different. A Lambda layer is added to a model to perform arbitrary operations, effectively extending the functionality of TensorFlow's Keras. In Week 3, the first Lambda layer helps with dimensionality: the window dataset helper function returns two-dimensional batches of windows over the data (the first dimension being the batch size and the second the number of timesteps), while the recurrent layers expect three dimensions: batch size, number of timesteps, and the series dimensionality. The Lambda layer fixes the dimensions without rewriting the window dataset helper function: it simply expands the array by one dimension, and by setting input_shape to None we are saying that the model can take sequences of any length.
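A sketch along the lines of the Week 3 model (the unit counts and scaling factor are illustrative, not the exact assignment values): the first Lambda adds the feature dimension, and input_shape=[None] lets the model accept sequences of any length.

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    # Expand [batch_size, timesteps] -> [batch_size, timesteps, 1]
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
    # Scale outputs back toward the range of the series (illustrative factor)
    tf.keras.layers.Lambda(lambda x: x * 100.0),
])
```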
Whereas in Week 4, the first layer is a one-dimensional convolution. As you can see in the attached image, the dimensionality is already addressed in the window dataset helper function, so we did not need to change the dimensionality of the window in the Week 4 assignment, and that is why no Lambda layer is used in Week 4.
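For comparison, here is a sketch of the Week 4 arrangement (the helper's windowing details and the layer sizes are illustrative): the feature dimension is added inside the dataset helper with tf.expand_dims, so the windows already arrive as three-dimensional and the model can start directly with Conv1D.

```python
import tensorflow as tf

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Add the feature dimension here: [timesteps] -> [timesteps, 1]
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))   # window features, next-step target
    return ds.batch(batch_size).prefetch(1)

window_size = 30  # placeholder

model = tf.keras.models.Sequential([
    # Batches already arrive as [batch_size, window_size, 1], so Conv1D
    # can be the first layer with no Lambda in front of it.
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1,
                           padding="causal", activation="relu",
                           input_shape=[window_size, 1]),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
```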
No, the code and the window size are different in the two assignments. The reason is the same as above: in Week 3 the dimensionality differs, so bidirectional LSTMs are used as the intermediate layers, whereas in Week 4 the first layer is a one-dimensional convolution. See the image.