I passed the unit test for one-step attention, but in the model exercise I get an error saying

"ValueError: Layer lstm expects 59 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor 'dot/MatMul_37:0' shape=(None, 1, 64) dtype=float32>, <tf.Tensor 's0_48:0' shape=(None, 64) dtype=float32>, <tf.Tensor 'c0_48:0' shape=(None, 64) dtype=float32>]"

I think this comes from my post-attention LSTM cell call:

s, _, c = post_activation_LSTM_cell(inputs=context, initial_state=[s, c])

Given that a and context have the expected shapes, (None, 30, 64) and (None, 1, 64) respectively, I don't see what's going wrong.
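For reference, this call pattern does work for me in isolation with fresh Input tensors (a minimal sketch; the variable names and the 64-unit size are just what I'm assuming from the shapes above, not the exercise's full code):

```python
from tensorflow.keras.layers import Input, LSTM

n_s = 64  # hidden-state size, assumed from the (None, 64) shapes in the error

# Post-attention LSTM created ONCE, outside any loop, so it is shared
# across time steps rather than redefined on every call.
post_activation_LSTM_cell = LSTM(n_s, return_state=True)

context = Input(shape=(1, n_s))  # one attention context step: (None, 1, 64)
s0 = Input(shape=(n_s,))         # initial hidden state: (None, 64)
c0 = Input(shape=(n_s,))         # initial cell state: (None, 64)

# return_state=True gives (output, hidden_state, cell_state)
s, _, c = post_activation_LSTM_cell(context, initial_state=[s0, c0])
# s and c should each come out with shape (None, 64)
```

So the three tensors in the error message look like exactly what the layer should receive, which is why I suspect the problem is in how the layer ends up wired inside the model's loop rather than in this line itself.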