C4W2: Error in Transformer Decoder Layer

I implemented Exercise 2 (the Transformer Summarizer Decoder Layer) according to the documentation, but I'm getting an error. I have reviewed my code many times and still get the error below:

```
---> 58 mult_attn_out1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask)
     60 # apply layer normalization (layernorm1) to the sum of the attention output and the input (~1 line)
     61 Q1 = self.layernorm1(x + mult_attn_out1)

ValueError: Exception encountered when calling layer 'decoder_layer' (type DecoderLayer).

not enough values to unpack (expected 2, got 1)

Call arguments received by layer 'decoder_layer' (type DecoderLayer):
  • x=tf.Tensor(shape=(1, 15, 12), dtype=float32)
  • enc_output=tf.Tensor(shape=(1, 7, 8), dtype=float64)
  • training=False
  • look_ahead_mask=tf.Tensor(shape=(1, 15, 15), dtype=float32)
  • padding_mask=None
```

Hi @Muhammad_Hamza9

You're missing the `return_attention_scores=True` argument in the call to `MultiHeadAttention`.
The line `mult_attn_out1, attn_weights_block1 = ...` expects two return values, but when `return_attention_scores=True` is omitted, the layer returns only the attention output, so there is no second value to unpack into `attn_weights_block1`.
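
For illustration, here is a minimal standalone sketch of the behavior using `tf.keras.layers.MultiHeadAttention` directly (the `num_heads` and `key_dim` values are made up for the demo, not taken from the assignment):

```python
import tensorflow as tf

# Hypothetical sizes chosen to match the shapes in your traceback.
mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=12)

x = tf.random.uniform((1, 15, 12))          # (batch, seq_len, d_model)
look_ahead_mask = tf.linalg.band_part(      # lower-triangular causal mask
    tf.ones((1, 15, 15)), -1, 0)

# Without return_attention_scores=True, only the attention output tensor
# is returned, so unpacking it into two variables raises
# "not enough values to unpack (expected 2, got 1)".
out_only = mha(x, x, x, attention_mask=look_ahead_mask)

# With return_attention_scores=True, the layer returns a 2-tuple of
# (attention output, attention weights), matching the two-variable unpack.
mult_attn_out1, attn_weights_block1 = mha(
    x, x, x,
    attention_mask=look_ahead_mask,
    return_attention_scores=True)

print(mult_attn_out1.shape)       # (1, 15, 12)
print(attn_weights_block1.shape)  # (1, 2, 15, 15): (batch, heads, query, key)
```

In your `DecoderLayer.call`, adding that keyword to the `self.mha1(...)` call should resolve the unpacking error.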

Cheers
