Dimensional size error for C4W1_Assignment Decoder test

Hi @Erika_Hall

Try searching for the error on the forum and see what results you get.

For example:
Failed test case: Incorrect third dimension of decoder output

They may or may not solve your problem entirely, but at least they can offer you hints.

Cheers


Hi, thank you for your message. I tried investigating the error on the forum, and I found:

  • result 1: I am not using global variables in the __init__ method of the Decoder class
  • result 2: I didn’t hardcode any integers into my code
  • result 3: loops back to the thread where I originally asked the question

I am at a loss as to where I can find help on the forum at this point. How would you recommend I resolve the issue?


Hi @Erika_Hall

If you have the same error output as the post creator, then the issue is in the way you are producing the logits.

Remember: 64 is the batch size, 14/15 is the sequence length of the sentences used to translate English to Portuguese, and 256 is the number of units.

The expected logits shape is (64, 15, 12000), where 12,000 is the size of the vocabulary, since you expect the layer to compute the logits for every possible word in the vocabulary.

This means the vocab_size is the same for the context, the right-shifted translation, and the logits, so that is not the issue with your code. The units dimension, however, does not match for the context, while it does match for the right-shifted translation as well as the logits.

So, for the offending line of code: the dense layer with log-softmax activation in class Decoder(tf.keras.layers.Layer) must be created with vocab_size as its number of units, as mentioned right before the graded cell for this dense layer: “This one should have the same number of units as the size of the vocabulary since you expect it to compute the logits for every possible word in the vocabulary.”
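To make that concrete, here is a minimal sketch of how the Decoder’s final layer can be declared. The attribute name output_layer and the surrounding structure are my own illustration, not necessarily the notebook’s exact code:

```python
import tensorflow as tf

class Decoder(tf.keras.layers.Layer):
    def __init__(self, vocab_size, units):
        super().__init__()
        # ... embedding, RNN, and attention layers omitted ...

        # The final dense layer computes one logit per vocabulary word,
        # so its width must be vocab_size (12,000), not units (256).
        self.output_layer = tf.keras.layers.Dense(
            units=vocab_size,              # last dimension becomes 12,000
            activation=tf.nn.log_softmax,  # log-probabilities over the vocab
        )
```

With vocab_size as the number of units, applying this layer to a (64, 15, 256) tensor yields the expected (64, 15, 12000) output.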

Regards
DP


Thank you, I missed the fact that the logsoftmax layer should be initialized with vocab_size as opposed to units. Using vocab_size makes a lot more sense, though, because the layer is computing the probability of each word in the vocabulary.

I’m past the issue!


I noticed one small detail that I’d like to adjust (for future readers): it is the Dense layer that needs the vocab_size, not the logsoftmax. The logsoftmax does not “care” about the output size (it is just a normalization).
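A quick standalone check (my own snippet, not from the assignment) makes the point:

```python
import tensorflow as tf

# The Dense layer fixes the output width; log_softmax merely normalizes
# along the last axis, whatever size that axis happens to have.
x = tf.random.normal((64, 15, 256))         # (batch, seq_len, units)
dense = tf.keras.layers.Dense(12000)        # output width is set here
logits = tf.nn.log_softmax(dense(x))        # shape: (64, 15, 12000)
print(logits.shape)

# Each position is a proper log-probability distribution:
print(tf.reduce_sum(tf.exp(logits[0, 0])))  # ~1.0
```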

Cheers
