Training flag not working?

I am in the C5_W4_A1_Transformer_Subclass_v1 lab, working on UNQ_C4.

I have added the training=training flag to both the mha and dropout_ffn calls, but
I still get AssertionError: Wrong values when training=True.

Are the flags not working? Do I need to add more flags?

The training flag is only required for dropout, i.e., self.dropout_ffn. It is optional for mha, which follows the training mode of its parent layer.

I don't think the flag is the problem. The test starts with training=True, so the error message simply means you failed the very first test case.

Implementing the EncoderLayer is relatively straightforward, but you need to be careful about the merge points of the short-cut (residual) connections. For example, the input x is fed into mha and is also carried around it into layernorm1. The same applies to the output of layernorm1, which both feeds the feed-forward network and is added back to its (dropout-applied) output before layernorm2.
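To illustrate the wiring, here is a minimal sketch of an encoder layer using standard tf.keras layers. The layer names (mha, ffn, layernorm1, dropout_ffn, etc.) follow the assignment's conventions, but the dimensions and the exact structure here are my own assumptions, not the graded solution:

```python
import tensorflow as tf

class EncoderLayer(tf.keras.layers.Layer):
    def __init__(self, embedding_dim=8, num_heads=2,
                 fully_connected_dim=16, dropout_rate=0.1):
        super().__init__()
        self.mha = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=embedding_dim)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(fully_connected_dim, activation="relu"),
            tf.keras.layers.Dense(embedding_dim),
        ])
        self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
        self.dropout_ffn = tf.keras.layers.Dropout(dropout_rate)

    def call(self, x, training=False, mask=None):
        # Self-attention: x serves as query, value, and key,
        # AND is carried forward as the short-cut input.
        attn_output = self.mha(x, x, x, attention_mask=mask)
        out1 = self.layernorm1(x + attn_output)  # merge short-cut with x

        ffn_output = self.ffn(out1)
        # training flag is required here so dropout is active only in training
        ffn_output = self.dropout_ffn(ffn_output, training=training)
        return self.layernorm2(out1 + ffn_output)  # merge short-cut with out1

x = tf.random.uniform((1, 4, 8))        # (batch, seq_len, embedding_dim)
out = EncoderLayer()(x, training=True)
print(out.shape)                        # shape is preserved: (1, 4, 8)
```

Note the two additions, x + attn_output and out1 + ffn_output: those are the merge points. If you accidentally add the wrong tensor at either point (e.g. x instead of out1 in the second), you get exactly a "Wrong values" failure even though all the flags are correct.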

Hope this helps.