A question about the Transformer

In the programming assignment "Transformers Architecture with TensorFlow", why do we set q = x, k = x, v = x in the MultiHeadAttention layer? In the slides, however, q = Wq * x, k = Wk * x, and v = Wv * x. Could you please explain why? Thank you in advance.

That’s how you implement self-attention. The MultiHeadAttention layer owns the learned projection matrices Wq, Wk, and Wv as its internal weights. When you pass the same tensor x as query, key, and value, the layer applies those projections inside its call, computing q = Wq * x, k = Wk * x, and v = Wv * x before the scaled dot-product attention. So the code and the slides describe the same computation; the projections are simply hidden inside the layer rather than written out in the calling code.
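Here is a minimal sketch (not the assignment's exact code) showing self-attention with tf.keras.layers.MultiHeadAttention; the layer sizes and tensor shapes are just illustrative:

```python
import tensorflow as tf

# The layer holds the learned Wq, Wk, Wv (and output) projections internally.
mha = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=64)

# Dummy input: (batch, seq_len, d_model) — shapes chosen only for illustration.
x = tf.random.normal((2, 10, 512))

# Self-attention: pass the same tensor as query, value, and key.
# Internally the layer computes q = Wq*x, k = Wk*x, v = Wv*x before
# the scaled dot-product attention.
output, attn_scores = mha(query=x, value=x, key=x,
                          return_attention_scores=True)

print(output.shape)       # (2, 10, 512) — projected back to the query width
print(attn_scores.shape)  # (2, 8, 10, 10) — one attention map per head
```

If you inspect mha.trainable_weights after calling the layer once, you will see the query, key, and value projection kernels, which correspond to the Wq, Wk, and Wv matrices from the slides.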