C5_W4_A1_Transformer_Subclass_v1 task #8

Hi, I need help please.

I have passed all the tests except for the last one, and I got the following error:

InvalidArgumentError: Incompatible shapes: [1,5,4] vs. [1,4,4] [Op:AddV2]

It seems the error is coming from:

x += self.pos_encoding[:, :seq_len, :]

The embedding_dim is 4 while the input_sentence seq_len is 5.
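For context, the addition fails because broadcasting can only stretch size-1 dimensions, so a position axis of 5 cannot line up with one of 4. A tiny numpy sketch (shapes taken from the error message, values assumed) reproduces the same mismatch:

```python
import numpy as np

x = np.zeros((1, 5, 4))             # (batch, seq_len, embedding_dim)
pos_encoding = np.zeros((1, 4, 4))  # only 4 positions were generated

seq_len = x.shape[1]
try:
    x + pos_encoding[:, :seq_len, :]  # the slice silently clamps to 4 positions
except ValueError as err:
    print(err)  # operands could not be broadcast together with shapes (1,5,4) (1,4,4)
```

In other words, the positional encoding tensor has too few positions for the sentence, which usually means the two arguments to positional_encoding were mixed up or the encoding was built with the wrong length.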

Hello casper,

Can you share a screenshot of the error you are getting? Please do not post your code here!


Hello Casper,

Your positional encoding and its addition to the word embeddings are correct, but the line under this comment needs a look:

# scale embeddings by multiplying by the square root of their dimension

It looks like you missed scaling your embeddings by multiplying them by the square root of the embedding dimension. You only cast the embedding dimension to tf.float32 but never actually took its square root.

So make sure you are using the correct tf function call on that code line.

In case you are not able to find it, you can send me the notebook via personal DM.
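For anyone hitting the same thing later: the tf function the mentor is most likely hinting at is tf.math.sqrt. A minimal sketch of the scaling step, assuming the notebook's embedding_dim and a dummy embedded batch:

```python
import tensorflow as tf

embedding_dim = 4
x = tf.ones((1, 5, embedding_dim))  # dummy embedded batch: (batch, seq_len, dim)

# Casting alone is not enough; the square root must actually be applied.
x *= tf.math.sqrt(tf.cast(embedding_dim, tf.float32))
print(x[0, 0, 0])  # each value scaled by sqrt(4) = 2.0
```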


The auto-grader gave an error which I think might be related:

Code Cell UNQ_C2: Unexpected error (ValueError('operands could not be broadcast together with shapes (1,4,8) (1,16,8) ')) occurred during function check. We expected function positional_encoding to return type <class 'tuple'>. Please check that this function is defined properly.

Casper, can you please send me the notebook via personal DM? Click my name and then "Message".

You have grouped self.embedding_dim and tf.float32 into a tuple incorrectly. You are also still missing one tf function call on that code line, which is causing the positional encoding error.

ok Sir

I just found the error; it was a minor one. In the positional_encoding function I referenced positions as position, omitting the 's'.

OK, is your issue resolved? Let me know if you pass the grader once you submit.


Yes Sir

Successfully submitted, 100%. Thanks, Sir!