I just implemented the RNN from scratch in the assignment.
But I am still not able to understand where these weights are located and how they are being used. I really want to visualize an RNN like a feed-forward neural network and see these weights.
The assignment does not clearly discuss what weights are inside these matrices:
def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2).

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m)
    a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
        Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
        Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
        Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
        ba -- Bias, numpy array of shape (n_a, 1)
        by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
    """
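To make my question concrete, here is a minimal sketch of how I currently understand these parameters being used in one forward step, assuming the standard RNN-cell equations a_next = tanh(Waa·a_prev + Wax·xt + ba) and yt = softmax(Wya·a_next + by) (the body below is my own fill-in, not the assignment's reference solution):

    import numpy as np

    def softmax(z):
        # Column-wise softmax over a (n_y, m) score matrix
        e = np.exp(z - z.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)

    def rnn_cell_forward(xt, a_prev, parameters):
        Wax, Waa, Wya = parameters["Wax"], parameters["Waa"], parameters["Wya"]
        ba, by = parameters["ba"], parameters["by"]
        # Wax (n_a, n_x) projects the input into the hidden space;
        # Waa (n_a, n_a) carries the previous hidden state forward
        a_next = np.tanh(Waa @ a_prev + Wax @ xt + ba)
        # Wya (n_y, n_a) maps the new hidden state to the output scores
        yt_pred = softmax(Wya @ a_next + by)
        return a_next, yt_pred

    # Tiny example: n_x = 3, n_a = 5, n_y = 2, batch of m = 4
    rng = np.random.default_rng(0)
    params = {"Wax": rng.standard_normal((5, 3)),
              "Waa": rng.standard_normal((5, 5)),
              "Wya": rng.standard_normal((2, 5)),
              "ba": np.zeros((5, 1)),
              "by": np.zeros((2, 1))}
    a_next, yt = rnn_cell_forward(rng.standard_normal((3, 4)),
                                  np.zeros((5, 4)), params)
    # a_next has shape (5, 4); yt has shape (2, 4), columns summing to 1

So, as far as I can tell, Wax and Waa act like two ordinary dense layers whose outputs are summed before the tanh, and Wya is a dense layer on top of the hidden state.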
Can anyone please help me visualize where these weights are and how they are used, if we draw the network in a feed-forward fashion like we did in Course 1?