C5_w5_a1 unq_c4

Hello community,

I passed the unit test after reading several topics on the forum, but I don't understand what I am doing or why it works… :frowning:

Can you explain why the arguments of self.mha() are x, x, x, mask?
self_mha_output = self.mha(x, x, x, mask)

When I look into the __init__ function, I see
self.mha = MultiHeadAttention(num_heads=num_heads, key_dim=embedding_dim, dropout=dropout_rate)

I don't understand where the
self.mha(query=..., value=..., key=..., attention_mask=...)
form comes from.
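To make the question concrete, here is a small standalone snippet I put together outside the notebook (num_heads, key_dim, shapes and the mask are made-up values, not the assignment's, and I'm assuming the TF 2.x version the notebook uses). Both calls run and give the same output shape, which is exactly what I don't get:

import tensorflow as tf

# toy layer with invented sizes, just to reproduce the pattern
mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=4)

x = tf.random.uniform((1, 3, 8))   # (batch, seq_len, embedding_dim)
mask = tf.ones((1, 3, 3))          # (batch, seq_len_q, seq_len_k), 1 = attend

out_positional = mha(x, x, x, mask)                              # as in the assignment
out_keyword = mha(query=x, value=x, key=x, attention_mask=mask)  # as in the hint
print(out_positional.shape, out_keyword.shape)                   # both (1, 3, 8)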

Best regards

I had the same issue with this part of UNQ_C7:
x, block1, block2 = self.dec_layers[i](...)

I found the right values, but I couldn't figure out where the documentation for the DecoderLayer() function (or maybe Decoder()) is.

I finally understood that calling the layer uses its call() method.

The Keras documentation lists the arguments accepted by this method: tf.keras.layers.MultiHeadAttention | TensorFlow v2.11.0
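If you don't want to leave the notebook, you can also print the call() signature directly; a quick sketch (works the same way for MultiHeadAttention or for your own DecoderLayer class):

import inspect
import tensorflow as tf

# the parameters listed here are exactly what you can pass,
# positionally or by keyword, when you call the layer object
print(inspect.signature(tf.keras.layers.MultiHeadAttention.call))
# -> (self, query, value, key=None, attention_mask=None, ...) in TF 2.11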

As for DecoderLayer(), I defined it myself earlier in the notebook:

class DecoderLayer(tf.keras.layers.Layer):
    [...]
    def call(self, x, enc_output, training, look_ahead_mask, padding_mask):
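So the UNQ_C7 line simply invokes that method: the arguments line up with call()'s parameters, and the three returned values are what you unpack into x, block1, block2. A stripped-down stub (the body is a placeholder, not the assignment's solution) that shows only the calling mechanics:

import tensorflow as tf

class DecoderLayerStub(tf.keras.layers.Layer):
    # same call() signature as the notebook's DecoderLayer
    def call(self, x, enc_output, training, look_ahead_mask, padding_mask):
        attn_weights_block1 = tf.zeros(())   # placeholder for the self-attention weights
        attn_weights_block2 = tf.zeros(())   # placeholder for the cross-attention weights
        return x, attn_weights_block1, attn_weights_block2

layer = DecoderLayerStub()
x = tf.random.uniform((1, 3, 4))
enc_output = tf.random.uniform((1, 5, 4))

# layer(...) runs call(...); the arguments map onto its parameters
x, block1, block2 = layer(x, enc_output, training=False,
                          look_ahead_mask=None, padding_mask=None)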