C5 W4 A1: Wrong masked weights: scaled_dot_product_attention()

Error: I'm getting the assertion error "Wrong masked weights"
Function: scaled_dot_product_attention(q, k, v, mask)

Can anybody please help?

Please post a screen capture image showing all the assert messages.

There is a problem with one of the following:

  • how you computed matmul_qk
  • or how you computed scaled_attention_logits
  • or how you applied the softmax activation

Thank you for your response, Mr. TMosh.
Here is the code:

matmul_qk = tf.matmul(q, k, transpose_b=True)
dk = k.shape[0]
scaled_attention_logits = tf.math.divide(matmul_qk, np.sqrt(dk))
attention_weights = tf.keras.activations.softmax(scaled_attention_logits)
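For reference, a corrected sketch of the function is below. This is an assumption-laden illustration, not the official assignment solution: it assumes dk is the *last* dimension of k (the code above uses k.shape[0], the batch dimension), that the mask is applied by adding large negative values to the logits before softmax (the mask is currently unused, which matches the "Wrong masked weights" assertion), and the common convention that mask entries of 1 mark positions to keep.

```python
import numpy as np
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask):
    # QK^T: similarity between each query and each key
    matmul_qk = tf.matmul(q, k, transpose_b=True)

    # dk is the depth of the keys, i.e. the LAST dimension of k
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

    # Apply the mask before softmax: masked positions (mask == 0)
    # get a large negative logit so their softmax weight is ~0
    if mask is not None:
        scaled_attention_logits += (1.0 - mask) * -1e9

    # Softmax over the key axis so weights for each query sum to 1
    attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)

    # Weighted sum of the values
    output = tf.matmul(attention_weights, v)
    return output, attention_weights
```

With a mask like [[[1., 1., 0.]]], the third key position receives (near-)zero attention weight, while each row of weights still sums to 1.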