Hello Everyone, Merry Christmas!
I have the following questions regarding RNNs:
I understand that when training a unidirectional RNN (e.g. for a speech recognition or language model), the input x at each time step is the correct word y from the previous time step (teacher forcing). But I don't think Andrew mentioned in the lecture how this is done for BRNNs. Do we take into account the correct words from both the previous and the following time steps as the input x?
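To make the question concrete, here is a toy pure-Python sketch of what I mean by feeding the correct previous word as the input (the `rnn_step` cell and `<s>` start token are placeholders I made up, not anything from the lecture):

```python
def rnn_step(h_prev, x):
    # Placeholder "cell": a real RNN applies learned weights here.
    # This one just records the inputs seen so far.
    return h_prev + [x]

def inputs_with_teacher_forcing(y_true, start_token="<s>"):
    """Return the input fed to the RNN at each time step when the
    CORRECT previous word (not the model's own prediction) is used."""
    inputs = []
    x = start_token
    h = []
    for t in range(len(y_true)):
        inputs.append(x)
        h = rnn_step(h, x)
        x = y_true[t]  # next input is the ground-truth word at step t
    return inputs

print(inputs_with_teacher_forcing(["the", "cat", "sat"]))
# -> ['<s>', 'the', 'cat']
```

My question is whether a BRNN's backward direction gets the analogous thing, i.e. the correct word from the *following* time step.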
Is beam search applicable to speech recognition too?
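For reference, this is my understanding of beam search from the machine translation lectures, as a toy sketch over made-up per-step word probabilities (the numbers and vocabulary are purely illustrative):

```python
import math

def beam_search(step_probs, beam_width=2):
    """step_probs: one dict (token -> probability) per time step.
    Keeps the beam_width highest-scoring partial sequences at each step,
    scoring by summed log-probabilities."""
    beams = [([], 0.0)]  # (token sequence, total log-prob)
    for probs in step_probs:
        candidates = []
        for tokens, score in beams:
            for tok, p in probs.items():
                candidates.append((tokens + [tok], score + math.log(p)))
        # prune to the best beam_width candidates
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]

steps = [{"a": 0.6, "b": 0.4}, {"a": 0.3, "b": 0.7}]
print(beam_search(steps))
# -> ['a', 'b']  (0.6 * 0.7 = 0.42 beats 0.6 * 0.3 and 0.4 * 0.7)
```

I'm wondering whether the same kind of pruned search over output sequences is used when decoding speech.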
In machine translation, particularly in the encoder part, does the same logic apply - feeding the correct word from the previous time step as the input to the current one?