Timesteps and window size mapping in C4_W3_Lab_1_RNN

input_shape describes the shape of a single input sample, excluding the batch dimension. The dataset has 1 feature per timestep, which is why the Lambda layer is used to expand the dimensions to (window_size, 1).
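As a sketch of what that dimension expansion does (the window size of 20 and batch size of 32 here are illustrative, not taken from the lab), expanding the last axis turns a batch of windows of shape (batch_size, window_size) into (batch_size, window_size, 1), i.e. one feature per timestep:

```python
import numpy as np

window_size = 20
batch = np.random.rand(32, window_size)    # (batch_size, window_size)

# equivalent of the lab's Lambda/expand_dims step, shown with NumPy
expanded = np.expand_dims(batch, axis=-1)  # (batch_size, window_size, 1)
print(expanded.shape)  # (32, 20, 1)
```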

It’s sufficient to analyze the steps for 1 row, since the same weights are applied to the rest of the rows in the batch via vectorization.
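To make the vectorization point concrete (shapes here are made up for illustration): a single matrix multiplication applies the same weight matrix to every row in the batch at once, so analyzing one row tells you what happens to all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.normal(size=(32, 20, 1))  # 32 rows, each of shape (window_size, 1)
Wx = rng.normal(size=(1, 40))         # one weight matrix, shared by all rows

# one vectorized matmul applies the same weights to every row and timestep
projected = batch @ Wx
print(projected.shape)  # (32, 20, 40)
```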

Here’s pseudocode for an RNN layer (ignoring the details of backpropagation):

class RNN:
    def __call__(self, row):
        """
        row has shape (window_size, num_features_per_timestep)
        """
        window_size = row.shape[0]
        outputs = [] if self.return_sequences else None
        hidden_state = np.zeros(...)  # initial hidden state
        for timestep in range(window_size):
            x_t = row[timestep]
            output, hidden_state = self.rnn_cell(prev_hidden_state=hidden_state, xt=x_t)
            if outputs is not None:
                outputs.append(output)
        # return_sequences=True: one output per timestep;
        # otherwise: only the output of the last timestep
        if outputs is not None:
            return outputs
        return output
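The loop above can be made runnable with a minimal NumPy simple-RNN cell. This is a sketch, not Keras internals: the weight shapes, the tanh activation, and the choice of window_size=20 / units=40 are my assumptions for illustration, and in a simple RNN the per-timestep output is just the hidden state.

```python
import numpy as np

def rnn_cell(prev_hidden_state, x_t, Wx, Wh, b):
    # one timestep: h_t = tanh(x_t @ Wx + h_{t-1} @ Wh + b)
    h = np.tanh(x_t @ Wx + prev_hidden_state @ Wh + b)
    return h, h  # in a simple RNN the output equals the new hidden state

def run_rnn(row, Wx, Wh, b, return_sequences=False):
    window_size = row.shape[0]
    hidden_state = np.zeros(Wh.shape[0])
    outputs = []
    for timestep in range(window_size):
        output, hidden_state = rnn_cell(hidden_state, row[timestep], Wx, Wh, b)
        outputs.append(output)
    # all timestep outputs, or only the last one
    return np.stack(outputs) if return_sequences else outputs[-1]

window_size, num_features, units = 20, 1, 40  # illustrative sizes
rng = np.random.default_rng(0)
row = rng.normal(size=(window_size, num_features))
Wx = rng.normal(size=(num_features, units))
Wh = rng.normal(size=(units, units))
b = np.zeros(units)

print(run_rnn(row, Wx, Wh, b, return_sequences=True).shape)  # (20, 40)
print(run_rnn(row, Wx, Wh, b).shape)                         # (40,)
```

Note how the window_size from the input shape directly determines the number of timesteps the loop runs, which is the mapping the lab relies on.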

Does this help?