Why are we using the activation values of the previous training example as the initialization of the activations for a new training example, instead of initializing them with zeros? Am I missing something, or did I get it wrong?
Please provide the week of the course and the name of the assignment you’re working on.
Course 5, Week 1, Assignment 2 (dinosaur naming).
We’re using each output as the next input because we’re forming a word one letter at a time, based on the previous letters.
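As a rough illustration of "one letter at a time, based on the previous letters", here is a minimal sketch of that sampling loop. This is not the assignment's actual code: `next_char_probs` is a hypothetical stand-in for the trained RNN's softmax output, and the vocabulary here is just `a`–`z` plus an end token.

```python
import numpy as np

def sample_name(next_char_probs, max_len=10, seed=0):
    """Sample a name one character at a time.

    `next_char_probs(prev_idx)` is a hypothetical helper returning a
    probability distribution over the next character, given the index of
    the previous one; in the assignment this role is played by the
    trained RNN's softmax output at each step.
    """
    rng = np.random.default_rng(seed)
    name = []
    idx = 0  # start from a fixed "previous character" index
    for _ in range(max_len):
        p = next_char_probs(idx)
        idx = rng.choice(len(p), p=p)  # feed the sampled output back in as input
        if idx == 26:                  # pretend index 26 is the end-of-name token
            break
        name.append(chr(ord('a') + idx))
    return ''.join(name)

# Toy uniform distribution over 27 symbols (a-z plus end token),
# just to make the loop runnable without a trained model.
uniform = lambda prev_idx: np.ones(27) / 27
print(sample_name(uniform))
```

The key point is the feedback: each sampled `idx` becomes the input for the next step, which is why the activations also flow forward from step to step.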
If we look at the function, it is implemented such that the last activation from one training example is used as the initial activation for the next training example, instead of a vector of zeros.
I am sorry for the duplication. I went through the thread and I'm not quite sure whether I get it or not. So is it the case that how we initialize the value of a_prev is not important, and that given the same input, the model will always produce the same output? Am I getting it right?
I hope it is clear now after our discussion. If you have any other queries regarding this, do let us know.
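For anyone reading this later, the detail being discussed, carrying the hidden state across training examples instead of resetting it, can be sketched like this. This is a minimal numpy sketch, not the assignment's code; the dimensions and the `rnn_step` helper are made up for illustration.

```python
import numpy as np

def rnn_step(x, a_prev, Wax, Waa, ba):
    # One RNN time step: new hidden state from input x and previous state a_prev.
    return np.tanh(Wax @ x + Waa @ a_prev + ba)

# Toy dimensions (hypothetical; the assignment uses different sizes).
n_a, n_x = 4, 3
rng = np.random.default_rng(0)
Wax = rng.standard_normal((n_a, n_x)) * 0.1
Waa = rng.standard_normal((n_a, n_a)) * 0.1
ba = np.zeros((n_a, 1))

# a_prev is initialized to zeros ONCE, before the loop over training
# examples, and then simply carries over from one example to the next
# instead of being reset to zeros each time.
a_prev = np.zeros((n_a, 1))
for j in range(5):                       # pretend each j is one training example
    x = rng.standard_normal((n_x, 1))    # stand-in for that example's input
    a_prev = rnn_step(x, a_prev, Wax, Waa, ba)

print(a_prev.shape)  # prints (4, 1)
```

The exact initial value of `a_prev` matters little in practice: the model's parameters are what is being learned, and the state from the previous example is as reasonable a starting point as zeros.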