I simply called the three sub-layers (encoder, decoder, final layer) in the Transformer class:
enc_output = self.encoder(inp, training, enc_padding_mask)
dec_output, attention_weights = self.decoder(inp, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output)
Can someone explain to me why my output doesn't fit?
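For reference, here is how I understand the full call method is supposed to be wired, based on the TensorFlow tutorial-style template. The separate tar argument for the decoder, the return statement, and the shape comments are my assumptions, not the exact assignment code:

# Sketch only: assumed to live inside a Transformer(tf.keras.Model)
# subclass where self.encoder, self.decoder, and self.final_layer
# are already defined in __init__.
def call(self, inp, tar, training, enc_padding_mask,
         look_ahead_mask, dec_padding_mask):
    # Encode the input sentence: (batch_size, inp_seq_len, d_model)
    enc_output = self.encoder(inp, training, enc_padding_mask)

    # The decoder's first argument is the *target* sequence; it
    # attends to enc_output: (batch_size, tar_seq_len, d_model)
    dec_output, attention_weights = self.decoder(
        tar, enc_output, training, look_ahead_mask, dec_padding_mask)

    # Project to logits over the target vocabulary:
    # (batch_size, tar_seq_len, target_vocab_size)
    final_output = self.final_layer(dec_output)

    return final_output, attention_weights

If the decoder were given inp instead of tar, the final output would be sized by the input sequence length instead of the target sequence length, which is one way the shapes can fail to fit.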
Best!
Hinnerk8
It looks like the new revision (updated yesterday) forgot to update Transformer_test. The grader is correct. If you passed the grader, it means you're awesome!
BTW, Transformer_test should be fixed accordingly.
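Roughly, it needs to do a shape check along these lines. The hyperparameters, sentence lengths, and mask handling below are illustrative assumptions on my part, not the verbatim course file:

import numpy as np

def Transformer_test(target):
    """Illustrative sketch only, not the actual course test.
    `target` is assumed to be the Transformer class itself."""
    # Tiny hypothetical hyperparameters so the check runs fast.
    num_layers, embedding_dim, num_heads, fc_dim = 2, 4, 2, 8
    input_vocab_size, target_vocab_size = 30, 35
    max_pos_inp, max_pos_tar = 20, 20

    model = target(num_layers, embedding_dim, num_heads, fc_dim,
                   input_vocab_size, target_vocab_size,
                   max_pos_inp, max_pos_tar)

    # Use distinct input and target sentences of different lengths,
    # so a stale test that reused the input for both would be caught.
    inp = np.random.randint(0, input_vocab_size, size=(1, 7))
    tar = np.random.randint(0, target_vocab_size, size=(1, 5))

    # Masks are assumed optional (None) for this sketch.
    final_output, attention_weights = model(
        inp, tar, False, None, None, None)

    # Logits over the target vocabulary, one step per target token.
    assert final_output.shape == (1, 5, target_vocab_size)
    print("All tests passed")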