Hello,

In the video, the inputs X0, X1, X2… are shown as batches mapped to each timestep. However, in the lab exercise, for window size = 20 and batch size = 32, it is mentioned that: ‘This means the 20 datapoints in the window will be mapped to 20 timesteps of the RNN’.

I am a bit confused about the timesteps. My question is:

- Is each of the 20 values mapped to a separate timestep, OR
- Is the same batched window of 20 values mapped to every timestep?

I think the 2nd one is correct. Could you please clarify whether this understanding is right?

Regards

Aroonima

I do see the need to revisit the fundamentals in the DLS course.

However, I am trying to understand it in the context of the lab exercise C4_W3_Lab_1_RNN.

Does it mean that each value in the batched window of 20 values is implicitly mapped to one of the 20 timesteps? Does the `input_shape` argument actually take care of this?

So, let's say in the 1st timestep we pass the first value. In the example there are 32 such values, since the windows have been batched, and because there are 40 cells the output of that single 1st timestep has shape (32, 40). This output then passes to the second timestep, which receives the 2nd input from each of the 32 batched windows, i.e. 32 values, and so on…
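In NumPy terms, a single timestep for the whole batch would look something like this (the weight names `Wx`, `Wh`, `b` are illustrative, not the lab's actual variables):

```python
import numpy as np

batch_size, num_units = 32, 40
x_t = np.random.randn(batch_size, 1)        # 1st value from each of the 32 windows
h_prev = np.zeros((batch_size, num_units))  # hidden state carried from the previous timestep

# Illustrative weights for a simple RNN cell with 40 units
Wx = np.random.randn(1, num_units)
Wh = np.random.randn(num_units, num_units)
b = np.zeros(num_units)

# One timestep: combine the new input with the previous hidden state
h_t = np.tanh(x_t @ Wx + h_prev @ Wh + b)
print(h_t.shape)  # (32, 40)
```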

I hope this is how it is.

`input_shape` describes the actual input to the model. It should match the shape of a single sample, excluding the batch dimension.

We have 1 feature in the dataset. This is why we use the lambda layer to expand the dimension to `(window_size, 1)`.
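The shape effect of that expansion can be checked in NumPy (the lab does this with a Keras `Lambda` layer; `np.expand_dims` mirrors the same operation):

```python
import numpy as np

window_size, batch_size = 20, 32
batch = np.random.randn(batch_size, window_size)  # 32 windows of 20 values each

# Add a trailing feature axis, as the lab's lambda layer does
expanded = np.expand_dims(batch, axis=-1)
print(expanded.shape)  # (32, 20, 1): each sample is (window_size, 1)
```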

It’s sufficient to analyze the steps for 1 row since the same weights are going to be applied to rest of the rows in the batch via vectorization.
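A quick check that the batched matrix product gives the same result as applying the shared weights row by row (shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 5))  # 32 rows in the batch
W = rng.standard_normal((5, 3))       # the same weights for every row

vectorized = batch @ W                               # one batched product
row_by_row = np.stack([row @ W for row in batch])    # explicit per-row loop
print(np.allclose(vectorized, row_by_row))  # True
```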

Here’s pseudocode for an RNN forward pass (ignoring the details of backward propagation):

```
class RNN:
    def __call__(self, row):
        """
        row has shape (window_size, num_features_per_timestep)
        """
        window_size = row.shape[0]
        if self.return_sequences:
            outputs = []
        else:
            outputs = None
        hidden_state = np.zeros(...)
        for timestep in range(window_size):
            x_next = row[timestep]
            output, hidden_state = self.rnn_cell(prev_hidden_state=hidden_state, xt=x_next)
            if outputs is not None:
                outputs.append(output)
        if outputs:
            return outputs
        return [output]
```
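For a concrete version, here is a runnable NumPy sketch of that loop with a minimal tanh cell filled in (the cell and its weights are illustrative, not the Keras internals):

```python
import numpy as np

class SimpleRNN:
    def __init__(self, num_features, num_units, return_sequences=False, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.standard_normal((num_features, num_units)) * 0.1
        self.Wh = rng.standard_normal((num_units, num_units)) * 0.1
        self.b = np.zeros(num_units)
        self.return_sequences = return_sequences

    def rnn_cell(self, prev_hidden_state, xt):
        # For a simple RNN cell, the output equals the new hidden state
        h = np.tanh(xt @ self.Wx + prev_hidden_state @ self.Wh + self.b)
        return h, h

    def __call__(self, row):
        """row has shape (window_size, num_features)"""
        window_size = row.shape[0]
        outputs = [] if self.return_sequences else None
        hidden_state = np.zeros(self.b.shape[0])
        for timestep in range(window_size):
            output, hidden_state = self.rnn_cell(hidden_state, row[timestep])
            if outputs is not None:
                outputs.append(output)
        return outputs if outputs else [output]

# One row from the lab's setup: 20 timesteps, 1 feature, 40 units
rnn = SimpleRNN(num_features=1, num_units=40, return_sequences=True)
outs = rnn(np.ones((20, 1)))
print(len(outs), outs[0].shape)  # 20 (40,)
```

Across a batch of 32 such rows, stacking the per-timestep outputs gives exactly the (32, 40) shape discussed above.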

Does this help?

It’s going over my head. Sorry, but it is a bit confusing.

Please brush up on sequence models (deep learning specialization) and look at this again.