Help in Understanding Input Layers

Hello Akshay,

No, these two layers are different. A Lambda layer lets you apply arbitrary operations inside a model, effectively extending the functionality of TensorFlow's Keras. In Week 3, the first Lambda layer helps us with dimensionality: when the window dataset helper function was written, it returned two-dimensional batches of windows on the data, with the first dimension being the batch size and the second the number of timesteps. The Lambda layer converts this to the three dimensions the recurrent layers expect: batch size, number of timesteps, and series dimensionality. So the Lambda layer fixes the dimensions without our having to rewrite the window dataset helper function. Using the Lambda, we just expand the array by one dimension, and by setting input_shape to [None], we're saying the model can take sequences of any length.
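A minimal sketch of that Week 3 pattern (the layer sizes here are illustrative, not the exact assignment values):

```python
import tensorflow as tf

# The window dataset helper yields 2-D batches: (batch_size, window_size).
# The Lambda layer adds a series dimension so the RNN receives
# (batch_size, window_size, 1), and input_shape=[None] lets the model
# accept sequences of any length.
model = tf.keras.models.Sequential([
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
])
```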

Whereas in Week 4, the first layer is a one-dimensional convolution. As you can see in the attached image, the dimensionality is already handled inside the window dataset helper function, so we didn't need to change the dimensionality of the window in the Week 4 assignment, and that is why we didn't use a Lambda layer for Week 4.
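For contrast, here is a sketch of the Week 4 pattern, assuming the helper looks roughly like the course's windowed_dataset (names and sizes are illustrative): the helper expands the dimension itself, so the model can start directly with a Conv1D layer.

```python
import tensorflow as tf

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Expanding here gives each window shape (window_size, 1), so no
    # Lambda layer is needed in the model itself.
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))  # features, label
    return ds.batch(batch_size).prefetch(1)

model = tf.keras.models.Sequential([
    # First layer is a 1-D convolution; the input is already 3-D.
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, strides=1,
                           padding="causal", activation="relu",
                           input_shape=[None, 1]),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
```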

No, both the code and the window size differ between the two assignments. The reason is the same: in Week 3 the dimensionality is handled in the model, so we use a Bidirectional LSTM as an intermediate layer, whereas in Week 4 the first layer is a one-dimensional convolution. See the image.

Hope that clears your doubts!!

Keep Learning!!!

Regards
DP