Confusion with input_shape of Lambda layer and Conv1D layer for time series dataset

In week 3, Laurence used a Lambda layer to expand the dimensions of the input (to make it explicitly univariate), and the input_shape for the Lambda layer was [window_size]. But in week 4, he used a Conv1D layer directly, with input_shape = [window_size, 1] and no Lambda layer to expand the dimensions, on the same dataset. Where did that last dimension come from? Kindly clarify.



For both LSTM and Conv1D layers, each input sample should be of the form (num timesteps, num features per timestep). So, when the model inputs don't satisfy this shape, the Lambda layer helps by adding the missing dimension.
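As a minimal sketch of that idea (the window size and layer sizes below are made up for illustration, not the course's exact model), the Lambda layer turns a (batch_size, window_size) input into (batch_size, window_size, 1):

```python
import tensorflow as tf

window_size = 20  # assumed window size, for illustration only

model = tf.keras.Sequential([
    # expand (batch, window_size) -> (batch, window_size, 1)
    tf.keras.layers.Lambda(lambda t: tf.expand_dims(t, axis=-1),
                           input_shape=[window_size]),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])

out = model(tf.random.normal([16, window_size]))
print(out.shape)  # (16, 1)
```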

If both LSTM and Conv1D expect inputs of the same form, shouldn't there also be a Lambda layer to expand the dimensions of the input before the Conv1D layer?

Did you see the shapes before feeding to the model?

Yes, the input shapes were 2D (batch dim, time steps) as shown in the screenshot below.

If you take the batch dimension into consideration, the input should be of the form (batch_size, num timesteps, num features per timestep) for both LSTM and Conv1D layers.
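For what it's worth, one common way that third dimension appears without a Lambda layer is to add it in the data pipeline before windowing; the sketch below is only illustrative (the series, window size, and batch size are made up), not necessarily the notebook's exact code:

```python
import tensorflow as tf

window_size = 20
batch_size = 16
series = tf.random.normal([1000])  # made-up univariate series

# Add the features-per-timestep dimension up front: (1000,) -> (1000, 1)
series = tf.expand_dims(series, axis=-1)

ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.map(lambda w: (w[:-1], w[-1]))  # split window into inputs and label
ds = ds.batch(batch_size)

x, y = next(iter(ds))
print(x.shape)  # (16, 20, 1): already 3D before it reaches the model
```

Because each batch already carries the trailing features dimension, a Conv1D first layer with input_shape=[window_size, 1] accepts it directly.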

Hope this clears things up on why the 3rd dimension was added.

I’m still confused as to why the dimension of the input is not increased with expand_dims before sending the input to the Conv1D layer. Can you kindly clarify?

Is the input to the model of shape (batch_size, num timesteps, num features per timestep) ?

If you pass a 2D vector to keras layer Conv1D, keras implicitly adds the batch size dimension as the first dimension, making it a 3D tensor.

No, the input to the model is of shape (batch_size, num timesteps) as I had specified in the previous image where the batch_size=16, num timesteps=20.

I am sending an input of shape (batch_dim, num_time_steps). If Keras adds another batch dimension, the resulting shape (new_batch_dim, batch_dim, num_time_steps) will not satisfy the input shape (num_time_steps, 1) that was specified for the first layer when the model was created.

I suspect that Keras automatically expands the dimensions of the input when its shape doesn't match. If I send an input of shape (batch_size, num_timesteps) to the model below with the input_shape of the first layer set to [window_size, 2] (num_timesteps = window_size, num features per timestep = 2), TensorFlow throws an error. But if the input_shape of the first layer is changed to [window_size, 1], the model works fine. Can anyone confirm whether my suspicion is true?


You are sending a 2D shape. To you it is clear that this is [batch_size, num_timesteps], but to Keras it is just a 2D shape, and Keras will expand it to 3D.

This is exactly what I shared before: If you pass a 2D vector to keras layer Conv1D, keras implicitly adds the batch size dimension as the first dimension, making it a 3D tensor.

So kindly correct me if my understanding of your explanation is wrong: when I pass any 2D tensor, in this case of shape [batch_size, num_timesteps], to the Conv1D layer, it expands the input to shape [None, batch_size, num_timesteps], where None refers to a batch size of any value.

If you pass a 2D tensor to Conv1D, Keras will take those two dimensions to be the timesteps and the features, and it will automatically convert it to (batch_size, time_steps, features) = (1, x, y).

So even if you are passing (batch_size, time_steps), as in your example above, for Keras this is not (batch_size, time_steps) but (time_steps, features), and it will convert it to (1, x, y), where x is your batch_size and y is your time_steps.

For example, let's say you pass (1000, 10) to your model, where 1000 is the batch size and 10 is the time steps. When you pass this to a Conv1D, Keras will turn it into (1, 1000, 10).

But the first layer of the model expects the expanded dimension to be at the end, [time_step, 1], whereas Keras adds the extra dimension at the beginning of the input, [1, batch_size, time_step]. Why is Keras not throwing an error?
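To make the two placements being discussed concrete (using the shapes from this thread):

```python
import tensorflow as tf

x = tf.zeros([16, 20])  # (batch_size, time_steps)

at_end = tf.expand_dims(x, axis=-1)   # features dimension at the end
at_front = tf.expand_dims(x, axis=0)  # extra dimension at the beginning

print(at_end.shape)    # (16, 20, 1): matches input_shape=[window_size, 1]
print(at_front.shape)  # (1, 16, 20): the placement described above
```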

@SanthoshSankar , my apologies - I am reviewing my code on a model I have that uses Conv1D - I was sure that it was receiving the 2D tensor, and after looking at the preprocessing function I built, I do have a condition where I reshape the 2D to 3D. My line of code reads like so:

```python
if len(x.shape) == 2:
    x = x[np.newaxis, :, :]
```

For some reason I was sure Conv1D was taking my 2D and had forgotten about it.

However, when using Conv1D as the first layer in the CNN model, it may accept the 2D vector, but you need to declare the shape. Please see this link on the keras documentation:

Keras Conv1D

It mentions:

"When using this layer as the first layer in a model, provide an input_shape argument (tuple of integers or None, e.g. (10, 128) for sequences of 10 vectors of 128-dimensional vectors, or (None, 128) for variable-length sequences of 128-dimensional vectors."

So, putting it all together:

```python
import numpy as np
import tensorflow as tf

input_shape = (10, 128)
x = tf.random.normal(input_shape)

# Add the batch dimension if the tensor is only 2D
if len(x.shape) == 2:
    x = x[np.newaxis, :, :]

# input_shape here is the per-sample shape (timesteps, features)
y = tf.keras.layers.Conv1D(32, 3, activation='relu', input_shape=input_shape)(x)
print(y.shape)  # (1, 8, 32)
```

Note: I originally did the reshaping in a separate preprocessing method; I have included it inline with the call to Keras for illustration.

Again, my apologies for the confusion.

Juan
