If you have the same error output as the post creator, then the issue is in the layer that produces the logits.
Remember: (64, 14) is the vocab_size and units, and 256 is a batch of sentences used to translate English to Portuguese.
So the expected logits shape is (64, 15, 12000), where 12,000 is the size of the vocabulary, since the layer is expected to compute a logit for every possible word in the vocabulary.
This means the vocab_size is the same for the context, the right-shifted translation, and the logits, so that is not the issue with your code. However, the units match for the right-shifted translation and the logits but not for the context. So for the code line:
The Dense layer with log-softmax activation in `class Decoder(tf.keras.layers.Layer):` requires you to set units to vocab_size, as mentioned before the grader cell for this Dense layer: "This one should have the same number of units as the size of the vocabulary since you expect it to compute the logits for every possible word in the vocabulary."
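As an illustration (a sketch, not the assignment's exact code; the values of `vocab_size` and `units` here are assumptions based on the shapes discussed in this thread), the decoder's output layer could look like this:

```python
import tensorflow as tf

vocab_size = 12000  # size of the target vocabulary (assumed from the thread)
units = 256         # decoder hidden dimension (assumed value)

# The output Dense layer must have vocab_size units (NOT `units`),
# so it produces one logit per word in the vocabulary.
output_layer = tf.keras.layers.Dense(
    units=vocab_size,
    activation=tf.nn.log_softmax,
)

# A fake batch of decoder hidden states: (batch, seq_len, units)
hidden_states = tf.random.normal((64, 15, units))
logits = output_layer(hidden_states)
print(logits.shape)  # (64, 15, 12000)
```

If you had passed `units` instead of `vocab_size` to the Dense layer, the third dimension of the output would be 256 instead of 12000, which is exactly the shape mismatch the unit test reports.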
Thank you, I missed the fact that the log-softmax function should be instantiated with vocab_size rather than with units. Using vocab_size makes a lot more sense, though, because the layer is computing the probability of each word in the vocabulary.
I noticed one small detail that I’d like to adjust (for future readers): it is the Dense layer that needs vocab_size, not the log-softmax. The log-softmax does not “care” about the output size; it just normalizes.
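A quick sketch of that point: log-softmax only normalizes values along the last axis and never changes the tensor's shape, so the output dimension is determined entirely by the Dense layer's units.

```python
import tensorflow as tf

# log-softmax normalizes along the last axis without changing the shape.
x = tf.random.normal((64, 15, 12000))
y = tf.nn.log_softmax(x, axis=-1)

print(y.shape)  # (64, 15, 12000) -- identical to the input shape

# Exponentiating the log-probabilities along the last axis sums to ~1,
# which is the "normalization" the activation performs.
print(float(tf.reduce_sum(tf.exp(y[0, 0]))))  # ~1.0
```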
Make sure your Dense layer is using the correct units; as per the instructions it should be vocab_size, which would give you the correct third dimension value of 12000 for the logits shape. Another possible issue is calling an incorrect function, or passing the sequences in the wrong order when translating.
On my Dense layer, units is indeed equal to vocab_size. If you look at the error message, I am in fact providing 12,000 while the unit test expects 16 as the third dimension value.
Actually, I was not using self.encoder or self.decoder in the call function of Translator. This is resolved now! Thanks for the help.
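For future readers, here is a minimal sketch of that fix (the class and attribute names are assumptions based on this thread, and the Encoder/Decoder bodies are stand-ins, not the assignment's real layers): the Translator's call method must route the inputs through the layers stored on `self`, not through newly created or module-level ones.

```python
import tensorflow as tf

class Encoder(tf.keras.layers.Layer):
    # Stand-in encoder: a single Dense projection for illustration.
    def __init__(self, units):
        super().__init__()
        self.proj = tf.keras.layers.Dense(units)

    def call(self, context):
        return self.proj(context)

class Decoder(tf.keras.layers.Layer):
    # Stand-in decoder: projects to vocab_size log-probabilities.
    def __init__(self, vocab_size):
        super().__init__()
        self.output_layer = tf.keras.layers.Dense(
            vocab_size, activation=tf.nn.log_softmax)

    def call(self, context, target):
        # Stand-in for attention over the encoder context: the real
        # assignment combines `context` and `target` differently.
        return self.output_layer(target + 0.0 * tf.reduce_mean(context))

class Translator(tf.keras.Model):
    def __init__(self, units, vocab_size):
        super().__init__()
        self.encoder = Encoder(units)
        self.decoder = Decoder(vocab_size)

    def call(self, inputs):
        context, target = inputs
        # The fix: use the layers stored on self, not local copies.
        enc_output = self.encoder(context)
        logits = self.decoder(enc_output, target)
        return logits

model = Translator(units=256, vocab_size=12000)
context = tf.random.normal((64, 14, 8))     # batch of context sentences
target = tf.random.normal((64, 15, 256))    # right-shifted translations
logits = model((context, target))
print(logits.shape)  # (64, 15, 12000)
```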
@ayildiz By analyzing the error, I believe that @Deepti_Prasad has correctly identified the issue. As mentioned in the solution, the problem is related to vocab_size, which corresponds to the third dimension of the output. Additionally, when the layers are called through self, vocab_size is also explicitly specified, which reinforces where the discrepancy comes from.
And it is good that you were able to identify the error.
I knew where the error was, but I was covering all the points to look at. When your code ran successfully with the correct logits shape but the test still failed, I understood that the missing piece was calling the layers through self.
If you look at the thread, it is about a similar error, and other learners have encountered this issue too.
Your debugging step of printing the shape values was the right way to go.
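For anyone else chasing a similar shape mismatch, a sketch of that debugging approach (the layer and tensor names are placeholders, not the assignment's code): print the shape after each stage and compare the first mismatch against the expected (batch, seq_len, vocab_size).

```python
import tensorflow as tf

def debug_forward(context, target, encoder_layer, output_layer):
    # Temporarily print the shape after each stage; the first stage
    # whose shape disagrees with expectations is where the bug lives.
    print("context:       ", context.shape)
    enc = encoder_layer(context)
    print("encoder output:", enc.shape)
    print("target:        ", target.shape)
    logits = output_layer(target)
    print("logits:        ", logits.shape)  # expect (batch, seq_len, vocab_size)
    return logits

encoder_layer = tf.keras.layers.Dense(256)
output_layer = tf.keras.layers.Dense(12000, activation=tf.nn.log_softmax)

logits = debug_forward(
    tf.random.normal((64, 14, 8)),    # context sentences
    tf.random.normal((64, 15, 256)),  # right-shifted translations
    encoder_layer,
    output_layer,
)
```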
We can’t give direct answers, only a hint here and there.