Week 2: the lecture should make clearer what the input to the initial hidden state is

Hi @Eureka

I would agree that it was not mentioned explicitly, but it is present in every lecture diagram as the very first arrow from the left (with the notation $h^{\langle 0 \rangle}$). There is no real "input" behind it: the initial hidden state is not computed from anything, it is simply handed to the network, most commonly as a vector of zeros.
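
If it helps, here is a minimal NumPy sketch of that very first step. The names and sizes (`W_hh`, `W_hx`, `n_h`, etc.) are my own illustrative choices, not the course's code:

```python
import numpy as np

np.random.seed(0)

n_h, n_x = 4, 3                            # hidden size and input size (made up)
W_hh = np.random.randn(n_h, n_h) * 0.01    # hidden-to-hidden weights
W_hx = np.random.randn(n_h, n_x) * 0.01    # input-to-hidden weights
b_h = np.zeros((n_h, 1))

# The "very first arrow from the left": h^{<0>} is not computed from
# anything -- it is simply provided, usually as a vector of zeros.
h_prev = np.zeros((n_h, 1))

x_1 = np.random.randn(n_x, 1)              # the first input vector, x^{<1>}

# One RNN step: h^{<1>} = tanh(W_hh h^{<0>} + W_hx x^{<1>} + b_h)
h_1 = np.tanh(W_hh @ h_prev + W_hx @ x_1 + b_h)
print(h_1.shape)                           # (4, 1)
```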

I tried to explain the RNN routine here, and the embedding here.

I would also agree on this point, and I think that is why the course has the Labs. Different people learn differently; personally, I understand the process better when I try to duplicate the computations (implement the model) in Excel, rather than by studying the diagrams and visualisations. Actual numbers and concrete computations help me internalize the workings of Deep Learning: they make it easier to see the structure, the influence of the different layers, the weights, etc.
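
As a toy illustration of that spreadsheet approach (all numbers below are made up; the update rule is the standard tanh RNN step from the lectures, so you can reproduce it cell by cell in Excel):

```python
import numpy as np

# Made-up weights, small enough to check every multiplication by hand
# (or cell by cell in a spreadsheet)
W_hh = np.array([[0.1, 0.2],
                 [0.0, 0.3]])
W_hx = np.array([[0.5],
                 [-0.4]])
b_h = np.array([[0.0],
                [0.1]])

h = np.zeros((2, 1))                 # h^{<0>} = [0, 0]^T
for x in [1.0, -1.0, 0.5]:           # a three-step toy "sequence"
    h = np.tanh(W_hh @ h + W_hx * x + b_h)
    print(h.ravel())                 # the hidden state after each step
```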

For example, I guess someone else might find diagrams like this more explanatory:

But to me, this is better:

And this is even better (when you implement it yourself):

This forum is also the place to find clarity. So if you still find something difficult to understand, or feel that you do not understand it fully, please ask questions. :slight_smile:

Cheers