Sequence Model: Encoding in Beam Search

In a sequence-to-sequence model, say one used with beam search, we have two blocks: an encoder and a decoder. The encoder block takes the input sentence and converts it into a vector representation, i.e., embeddings, which is fed to the decoder block.
The decoder network takes this as input and does the prediction.
For example, say we are doing language translation. My doubt is: since the input to the decoder is in the form of vectors, does it directly start predicting English words, or does it predict a vector that is only converted into English once the whole sentence has been predicted?


At each step, it outputs a probability distribution over the vocabulary, i.e., a vector with one probability per word.
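
For intuition, here is a minimal sketch of one decoding step. The five-word vocabulary and the logits are made up for illustration, not taken from a real trained model; the point is just that softmax turns the decoder's raw scores into a probability per word:

```python
import numpy as np

# Toy vocabulary (made up for this example).
vocab = ["<eos>", "the", "cat", "sat", "mat"]

# Pretend these are the decoder's raw scores (logits) at one time step.
logits = np.array([0.1, 2.0, 1.5, 0.3, 0.2])

# Softmax converts logits into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()

for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")
```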


Thanks for the clarification. After getting the probabilities, do we again use an embedding algorithm to get the exact sentence?

No. From the probability distribution, we simply select the word with the highest probability (greedy search), or use beam search to keep several high-probability candidates. Embeddings are used at the beginning, to convert words into vectors that the neural network can process; they are not needed to read the output.
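
To make the difference concrete, here is a small sketch of one decoding step, again with a toy vocabulary and hand-picked probabilities. Greedy search keeps only the single best word; beam search keeps the top k candidates and extends each of them at the next step:

```python
import numpy as np

# Toy vocabulary and a made-up probability distribution for one step.
vocab = ["<eos>", "the", "cat", "sat", "mat"]
probs = np.array([0.05, 0.45, 0.30, 0.15, 0.05])

# Greedy search: take the single most probable word at this step.
greedy_word = vocab[int(np.argmax(probs))]
print("greedy:", greedy_word)  # -> "the"

# Beam search (width k): keep the k best candidates instead of just one,
# so a slightly less likely word now can still lead to a better sentence later.
k = 2
top_k = np.argsort(probs)[::-1][:k]
print("beam candidates:", [(vocab[i], float(probs[i])) for i in top_k])
```

A full beam search repeats this at every time step, scoring whole partial sentences rather than single words; the sketch above shows only the selection at one step.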
