Dimensional size error for C4W1_Assignment Decoder test

Hi @Erika_Hall

Try to search the error on the forum and see what results you get.

For example:
Failed test case: Incorrect third dimension of decoder output

They might or might not solve your problem entirely, but at least they can offer you hints.

Cheers

1 Like

Hi, thank you for your message. I tried investigating the error on the forum, and I found –

  • result 1: I am not using global variables in the __init__ method of the Decoder class
  • result 2: I didn’t hardcode any integers into my code
  • result 3: loops me back to the spot where I originally asked the question

I am at a loss as to where I can find help on the forum at this point. How would you recommend resolving the issue?

1 Like

Hi @Erika_Hall

If you have the same error output as the original poster, then the issue is how you are computing the logits.

Remember: in the expected logits shape (64, 15, 12000), 64 is the batch of sentences used to translate English to Portuguese, 15 is the length of the translated sequence, and 12,000 is the size of the vocabulary, since you expect the layer to compute the logits for every possible word in the vocabulary.

This means the vocab_size for the context, the right-shifted translation, and the logits is the same, so that is not the issue with your code,

but the units do not match for the context, while they do match for the right-shifted translation and the logits. So, for the line of code in question:

The dense layer with log_softmax activation in class Decoder(tf.keras.layers.Layer) requires its units to be set to vocab_size, as mentioned just before the graded cell for this dense layer: “This one should have the same number of units as the size of the vocabulary, since you expect it to compute the logits for every possible word in the vocabulary.”
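To illustrate (a minimal sketch, not the graded solution; the sizes below are just the ones quoted above, and the variable names are assumed):

```python
import tensorflow as tf

# Illustrative sizes only: batch of 64, sequence length 15,
# 256 hidden units, vocabulary of 12,000 words.
batch, seq_len, units, vocab_size = 64, 15, 256, 12000

# The Decoder's final layer: units must equal vocab_size, because it
# produces one logit per word in the vocabulary.
output_layer = tf.keras.layers.Dense(
    units=vocab_size,
    activation=tf.nn.log_softmax,
)

# A dummy decoder hidden state of shape (batch, seq_len, units).
hidden = tf.zeros((batch, seq_len, units))
logits = output_layer(hidden)
print(logits.shape)  # (64, 15, 12000)
```

If units were left at 256 instead, the third dimension of the output would be 256, which is exactly the kind of mismatch the test reports.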

Regards
DP

1 Like

Thank you, I missed the fact that the logsoftmax function should be initialized with vocab_size as opposed to units. Using vocab_size makes a lot more sense, since the layer is calculating the probability of each word in the vocabulary.

I’m past the issue!

3 Likes

I noticed one small detail that I’d like to adjust (for future readers): it is the Dense layer that needs vocab_size, not the logsoftmax. The logsoftmax does not “care” about the output size (it is just a normalization).
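A small sketch of that point, with made-up shapes:

```python
import tensorflow as tf

# log_softmax only normalizes along the last axis; it never changes
# the shape. The output size is fixed by the Dense layer's units.
x = tf.random.normal((2, 3, 5))
y = tf.nn.log_softmax(x)

print(y.shape)  # (2, 3, 5) -- same shape as the input

# exp of the log-probabilities sums to 1 along the last axis.
print(tf.reduce_sum(tf.exp(y), axis=-1))
```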

Cheers

2 Likes

I had no unit tests failing up until now, but ended up getting this error regardless. Any ideas where I may be going wrong?

@ayildiz

Make sure your dense layer uses the correct number of units; as per the instructions it should be vocab_size, which gives the correct third dimension of 12000 in the logits shape. Another possible issue is calling the wrong function, or passing the wrong sequence when translating.

Did you use self (e.g. self.decoder) in the call function for the decoder?

1 Like

Thanks Deepti.

On my Dense layer, units is indeed equal to vocab_size. If you look closely at the error message, I am in fact providing 12,000 while the unit test expects 16 as the third dimension value.

Actually, I was not using self.encoder or self.decoder in the call function of Translator. This is resolved now! Thanks for the help :slight_smile:
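For future readers, a hedged sketch of the pattern (the sub-layers here are trivial stand-ins, not the assignment’s real Encoder and Decoder): the key is that call reuses the layers stored on self in __init__ instead of constructing new ones.

```python
import tensorflow as tf

class Translator(tf.keras.Model):
    def __init__(self, encoder, decoder):
        super().__init__()
        # Store the sub-layers once, in __init__ ...
        self.encoder = encoder
        self.decoder = decoder

    def call(self, inputs):
        context, target = inputs
        # ... and reuse them via self in call. Constructing fresh
        # layers here instead would create new, untrained weights.
        encoded = self.encoder(context)
        return self.decoder(encoded, target)

# Stand-in sub-layers, just to show shapes flowing through.
encoder = tf.keras.layers.Dense(4)   # (batch, len, 8) -> (batch, len, 4)
decoder = lambda enc, tgt: enc       # trivial "decoder" stand-in
translator = Translator(encoder, decoder)

out = translator((tf.zeros((2, 3, 8)), tf.zeros((2, 5))))
print(out.shape)  # (2, 3, 4)
```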

2 Likes

@ayildiz By analyzing the error, I believe that @Deepti_Prasad correctly identified the issue. As mentioned in the solution, the problem is related to vocab_size, which corresponds to the third dimension of the output. Additionally, when the layers stored on self are used, vocab_size is explicitly specified, reinforcing the source of the discrepancy.

And it is good that you were able to identify the error. :grinning_face:

Hi @ayildiz

I knew where the error was, but I was covering all the points worth checking. When your code ran successfully with the correct logits shape but the test still failed, I understood it was the call through self that was missed.

If you read through this thread, it is about a similar error, and other learners have encountered the same issue.

Your debugging step of printing the shape values was the right way to go.
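As a hedged illustration of that debugging approach (the shapes below are the ones quoted earlier in the thread, not taken from anyone’s solution):

```python
import tensorflow as tf

# When a shape test fails, print the shape after each stage to see
# exactly where the dimensions stop matching your expectations.
batch, seq_len, units, vocab_size = 64, 15, 256, 12000

hidden = tf.zeros((batch, seq_len, units))
print("decoder hidden state:", hidden.shape)   # (64, 15, 256)

logits = tf.keras.layers.Dense(vocab_size)(hidden)
print("logits:", logits.shape)                 # (64, 15, 12000)
```

Once the first stage whose printed shape differs from the expected one is found, that stage is where to look for the bug.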

We can’t give direct answers, only a hint here and there.

1 Like