C5_W4_A1_Transformer_Subclass_v1_Encoder_UNQ_C5

I could not find out what is wrong with the following code. I am getting this error:

AssertionError                            Traceback (most recent call last)
in
      1 # UNIT TEST
----> 2 Encoder_test(Encoder)

~/work/W4A1/public_tests.py in Encoder_test(target)
    124     [[-0.4612937 ,  1.0697356 , -1.4127715 ,  0.8043293 ],
    125      [ 0.27027237,  0.28793618, -1.6370889 ,  1.0788803 ],
--> 126      [ 1.2370994 , -1.0687275 , -0.8945037 ,  0.7261319 ]]]), "Wrong values case 1"
    127
    128     encoderq_output = encoderq(x, True, np.array([[[[1., 1., 1.]]], [[[1., 1., 0.]]]]))
AssertionError: Wrong values case 1

The code is like this:

# START CODE HERE
# Pass input through the Embedding layer
x = self.embedding(x)  # (batch_size, input_seq_len, embedding_dim)
# Scale the embedding by multiplying it by the square root of the embedding dimension
x *= tf.math.sqrt(tf.cast(self.embedding_dim, tf.float32))
# Add the position encoding to the embedding
x += self.pos_encoding[:, :seq_len, :]
# Pass the encoded embedding through a dropout layer (use training=training)
x = self.dropout(x, training=training)
# Pass the output through the stack of encoding layers
for i in range(self.num_layers):
    x = self.enc_layers[i](x, training, mask)
# END CODE HERE
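
For reference, here is a minimal standalone sketch (with made-up dimensions, not the assignment's values) of how I understand the scaling and positional-encoding steps are supposed to broadcast:

import tensorflow as tf

# Hypothetical shapes, for illustration only
batch_size, seq_len, embedding_dim = 2, 3, 4
x = tf.random.uniform((batch_size, seq_len, embedding_dim))

# Scale by sqrt(embedding_dim)
x *= tf.math.sqrt(tf.cast(embedding_dim, tf.float32))

# pos_encoding is assumed to have shape (1, maximum_position_encoding, embedding_dim),
# so slicing the first seq_len positions broadcasts across the batch dimension
pos_encoding = tf.random.uniform((1, 10, embedding_dim))
x += pos_encoding[:, :seq_len, :]

print(x.shape)  # (2, 3, 4)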

Please don’t post your code on the forum. That’s not allowed by the Code of Conduct.
Is your line of code after the for-loop indented?

Of course, the issue could be in your EncoderLayer() also.
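
For comparison, the generic encoder-layer forward pass from the original Transformer paper is self-attention followed by a feed-forward network, each with a residual connection and layer normalization. A rough sketch is below; the layer names and dropout placement are assumptions, not the assignment's reference implementation, and the notebook's MultiHeadAttention wrapper may differ.

import tensorflow as tf

class GenericEncoderLayer(tf.keras.layers.Layer):
    # Generic post-norm encoder block; names and dropout placement are assumptions.
    def __init__(self, embedding_dim, num_heads, fully_connected_dim, dropout_rate=0.1):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=embedding_dim)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(fully_connected_dim, activation='relu'),
            tf.keras.layers.Dense(embedding_dim),
        ])
        self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout_ffn = tf.keras.layers.Dropout(dropout_rate)

    def call(self, x, training, mask):
        # Self-attention block, then residual connection and layer normalization
        attn_output = self.mha(x, x, x, attention_mask=mask)
        out1 = self.layernorm1(x + attn_output)
        # Feed-forward block with dropout, then residual connection and layer normalization
        ffn_output = self.dropout_ffn(self.ffn(out1), training=training)
        return self.layernorm2(out1 + ffn_output)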

Thanks. Yes, it is indented. Also, EncoderLayer_test(EncoderLayer) reports: All tests passed.

Transformer_test(Transformer, create_look_ahead_mask, create_padding_mask) also reports: All tests passed.

Passing the tests in the notebook does not prove your code is perfect.