# C4W2 - Exercise 4 Transformer error

Hi,
For the encoder, I passed the arguments input_sentence (the input to the encoder), look_ahead_mask (the mask for the target input), and enc_padding_mask. For the decoder, I passed encoder_output (the output of the encoder, which serves as input to the MHA layer) and dec_padding_mask (which serves as the boolean mask for the second MHA layer). I'm getting the following errors. It would be really helpful if you could pinpoint the mistake.

I believe the "training" variable should be a boolean, not a tensor of shape (1, 7, 7).

Thank you so much for the timely response. I traced back to where self.dropout is declared, which is in the Encoder class. The documentation there states: "training (bool): Boolean, set to true to activate the training mode for dropout layers".
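The failure mode being discussed can be sketched in plain Python. This is a hypothetical stand-in, not the assignment's real TensorFlow code: `DropoutStub` mimics a dropout layer that branches on a boolean `training` flag, so passing a mask-like object in that slot (as apparently happened with the shape-(1, 7, 7) tensor) fails instead of silently toggling dropout.

```python
class DropoutStub:
    """Stand-in for a dropout layer: branches on a boolean training flag."""

    def __init__(self, rate):
        self.rate = rate

    def __call__(self, x, training):
        if not isinstance(training, bool):
            # Mirrors the kind of failure seen when a mask tensor is passed
            # where a boolean is expected.
            raise TypeError(
                f"training must be a bool, got {type(training).__name__}"
            )
        # This sketch drops nothing; a real dropout layer would zero out
        # activations at the given rate when training is True.
        return x


encoder_dropout = DropoutStub(rate=0.1)

x = [1.0, 2.0, 3.0]
ok = encoder_dropout(x, training=True)      # correct: boolean flag

mask = [[[0, 1], [1, 0]]]                   # a mask-like nested list
try:
    encoder_dropout(x, training=mask)       # wrong slot: mask instead of bool
    failed = False
except TypeError as e:
    failed = True
    print("error:", e)
```

The takeaway is that the positional argument order matters: if a mask lands in the `training` slot, the dropout layer receives a tensor where it expects a bool.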

It appears that the training value was somehow modified by the time it reached your code in transformer.call().

Hi TMosh. Thanks for helping out. I went back to the Decoder and Encoder classes to check which arguments each call accepts. It turns out I was wrong: the Encoder needs the input sentence, the training flag, and enc_padding_mask. Thank you for helping me debug it!
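The corrected wiring described above can be sketched with stub classes. This is only an illustration: the stub names and string placeholders are hypothetical, and the signatures follow the argument lists the poster describes (Encoder: input sentence, training flag, padding mask; Decoder: target, encoder output, training flag, look-ahead mask, decoder padding mask), not the assignment's exact code.

```python
class EncoderStub:
    """Hypothetical stand-in for the assignment's Encoder."""

    def __call__(self, input_sentence, training, enc_padding_mask):
        # Encoder takes the input sentence, a boolean training flag,
        # and the encoder padding mask -- in that order.
        return f"enc({input_sentence})"


class DecoderStub:
    """Hypothetical stand-in for the assignment's Decoder."""

    def __call__(self, target, enc_output, training,
                 look_ahead_mask, dec_padding_mask):
        # Decoder takes the target sequence, the encoder output (which feeds
        # its second MHA block), the training flag, the look-ahead mask,
        # and the decoder padding mask.
        return f"dec({target}, {enc_output})"


encoder, decoder = EncoderStub(), DecoderStub()

# The encoder output is then threaded into the decoder call:
enc_output = encoder("input_sentence", True, "enc_padding_mask")
dec_output = decoder("target", enc_output, True,
                     "look_ahead_mask", "dec_padding_mask")
print(dec_output)  # → dec(target, enc(input_sentence))
```

Keeping the masks, the training flag, and the encoder output in their correct positional slots is exactly what resolved the shape error above.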

Nice work!

Fingers crossed the 0/60 grading problem doesn’t arrive :sweat_smile:

I'm not a mentor for this course, so I don't know what the "0/60 grading problem" refers to.

Maybe a course mentor will reply here.