Help in Understanding Input Layers

In C4W3 we defined the model with a Lambda layer as the first layer, where we add a new dimension.

tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                       input_shape=[window_size])
tf.keras.layers.LSTM(32)  # units value is illustrative
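(For concreteness, here is a quick shape check of what that expand_dims does; the batch and window sizes are illustrative values I picked:)

import tensorflow as tf

x = tf.zeros([32, 20])          # [batch_size, window_size]
y = tf.expand_dims(x, axis=-1)  # [batch_size, window_size, 1]
print(y.shape)                  # (32, 20, 1)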

In C4W4 we defined a model where the first layer is Conv1D, but we didn’t add any new dimension.

tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                       input_shape=[window_size, 1])
tf.keras.layers.LSTM(32)  # units value is illustrative

As per my understanding, the input data will have shape (batch_size, window_size). I have a couple of doubts:

  1. In the case of Conv1D, why have we not added the Lambda layer?

  2. Are the two snippets below the same? ([window_size, 1] vs. (window_size, 1))

tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                       input_shape=[window_size, 1])

and

tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                       input_shape=(window_size, 1))

Thanks in advance.

Are these questions from the assignment section or an ungraded lab?

  1. When passing data to a sequential model, if the last dimension of input_shape is 1, you can provide the input in squeezed form and the model will internally expand the dimension.
    For instance, if input_shape=[window_size, 1] and the sequential model receives data of shape [batch_size, window_size], then, since the first layer of the model is an LSTM, TensorFlow will automatically expand the input to [batch_size, window_size, 1]. The same logic applies when Conv1D is the first layer of the model.
  2. [window_size, 1] is a list and (window_size, 1) is a tuple. As far as specifying the input shape is concerned, it makes no difference (see the sketch after this list).
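To see that the list and tuple forms build the same model, here is a minimal sketch. The window_size value is illustrative, and this assumes a tf.keras version that still accepts the input_shape keyword on a layer:

import tensorflow as tf

window_size = 20  # illustrative value

# input_shape given as a list
m1 = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                           input_shape=[window_size, 1])
])

# input_shape given as a tuple
m2 = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                           input_shape=(window_size, 1))
])

print(m1.input_shape)  # (None, 20, 1)
print(m2.input_shape)  # (None, 20, 1) -- identical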

Hello Akshay,

No, these two layers are different. A Lambda layer is added to a model to perform arbitrary operations, effectively extending the functionality of TensorFlow's Keras. Here, the first Lambda layer helps us with dimensionality. In Week 3, the window dataset helper function returned two-dimensional batches of windows on the data, with the first dimension being the batch size and the second the number of timestamps. The Lambda layer converts this to three dimensions: batch size, number of timestamps, and series dimensionality. So the Lambda layer fixes the dimensions without our having to rewrite the window dataset helper function: it simply expands the array by one dimension, and by setting the input shape to None we are saying that the model can take sequences of any length. A sketch of what this looks like is below.
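This is a minimal sketch of the Week 3 setup, reconstructed from memory rather than copied from the assignment; the Bidirectional LSTM and Dense sizes are illustrative:

import tensorflow as tf

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Week 3 style helper (sketch): yields 2-D feature batches
    # of shape [batch_size, window_size].
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    return ds.batch(batch_size).prefetch(1)

model = tf.keras.Sequential([
    # Lambda lifts [batch, window] to [batch, window, 1];
    # input_shape=[None] accepts sequences of any length.
    tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1),
                           input_shape=[None]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(1),
])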

Whereas in Week 4, the first layer is a one-dimensional convolution. As you can see in the attached image, the dimensionality is already addressed inside the window dataset helper function, so we did not need to change the dimensionality of the windows in the Week 4 assignment, and that is why we did not use a Lambda layer in Week 4. Roughly, the helper looks like the sketch below.
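Again a sketch from memory, not the exact assignment code; the extra dimension is added up front inside the helper, and the LSTM/Dense sizes are illustrative:

import tensorflow as tf

window_size = 20  # illustrative value

def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Week 4 style helper (sketch): the channel dimension is added
    # here, so feature batches already have shape
    # [batch_size, window_size, 1].
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    ds = ds.map(lambda w: (w[:-1], w[-1]))
    return ds.batch(batch_size).prefetch(1)

model = tf.keras.Sequential([
    # No Lambda needed: the data is already three-dimensional.
    tf.keras.layers.Conv1D(filters=5, kernel_size=2, activation="relu",
                           input_shape=[window_size, 1]),
    tf.keras.layers.LSTM(32),  # units value illustrative
    tf.keras.layers.Dense(1),
])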

No, the codes and window handling are different for the two assignments. The reason is again the same: in Week 3 the dimensionality differs, so we use a Bidirectional LSTM as the intermediate layer, whereas in Week 4 the first layer is a one-dimensional convolution. See the image.

Hope that clears your doubts!!

Keep Learning!!!

Regards
DP