Week 2: the lectures should make clearer what the input for the initial hidden state is

For an RNN, there is an initial hidden state as an input. However, this was not mentioned anywhere in the lectures, only in the assignment.

Also, in the second tutorial, the embedding size and hidden state size come out of nowhere.
Is the embedding layer x^{<t_1>}, or h^{<t_0>}, or something else?
And what does "hidden state size" refer to?

I think the input data structure should be made clear for each model. An RNN, as a stand-alone model, should have its input data structure clearly described at the beginning of the lecture.

Hi @Eureka

I would agree that it was not mentioned explicitly, but it is present in every lecture as the very first arrow from the left (with the notation h^{<t_0>}).
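To make that arrow concrete, here is a minimal sketch of one vanilla RNN step where h^{<t_0>} is passed in explicitly, just like any other input. The sizes and weight names (Wh, Wx) are illustrative assumptions, not the course's exact notation; in practice h^{<t_0>} is usually just a vector of zeros:

```python
import numpy as np

# Illustrative sizes (assumptions, not from the course)
hidden_size, input_size = 4, 3

rng = np.random.default_rng(0)
Wh = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights
Wx = rng.normal(size=(hidden_size, input_size))   # input-to-hidden weights
b = np.zeros(hidden_size)

def rnn_step(h_prev, x_t):
    """One vanilla RNN step: h^{<t>} = tanh(Wh @ h^{<t-1>} + Wx @ x^{<t>} + b)."""
    return np.tanh(Wh @ h_prev + Wx @ x_t + b)

h0 = np.zeros(hidden_size)        # h^{<t_0>}: the extra input, typically all zeros
x1 = rng.normal(size=input_size)  # first "real" input x^{<t_1>}
h1 = rnn_step(h0, x1)             # first hidden state h^{<t_1>}
```

So the "very first arrow from the left" is just h0 being fed into the first call of `rnn_step`, alongside x1.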

I tried to explain the RNN routine here and also the embedding here
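On the embedding question specifically, a short sketch may help separate the two sizes. The numbers below are made up for illustration: the embedding size is the length of each word vector x^{<t>} (a row of the embedding matrix), while the hidden state size is the independent length of each h^{<t>}:

```python
import numpy as np

# Illustrative sizes (assumptions): 10 words in the vocabulary,
# 5-dimensional word vectors, 8-dimensional hidden states
vocab_size, embedding_size, hidden_size = 10, 5, 8

rng = np.random.default_rng(1)
E = rng.normal(size=(vocab_size, embedding_size))  # embedding matrix (learned in training)

token_ids = [2, 7, 4]  # a toy sequence of word indices
x = E[token_ids]       # each row is one x^{<t>}; shape (3, embedding_size)

# hidden_size is a separate choice: it is the length of each h^{<t>},
# including the initial hidden state h^{<t_0>}
h0 = np.zeros(hidden_size)
```

In other words, the embedding layer produces the x^{<t>} vectors; it is not h^{<t_0>}, and the two sizes are independent hyperparameters.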

I would also agree on this point, and I think that is why the course has the Labs. Different people learn differently; personally, I understand the process better when I try to duplicate the computations (implement the model) in Excel rather than trying to understand the diagrams and visualisations. I find that actual numbers and concrete computations help me internalize the workings of Deep Learning: they help me better understand the structure, the influence of the different layers, the weights, etc.

For example, I guess someone else might find diagrams like this more explanatory:

But to me, this is better:

And this is even better (when you implement it yourself):

This forum is also the place to find clarity. So if you still find something difficult to understand, or you feel that you do not understand it fully, please ask questions. :slight_smile: