C5W3A1: Ops Out of Order

In the notebook Neural_machine_translation_with_attention_v4a, modelf_test() is failing. All of the ops are present and correct, but their order in the model summary differs from the expected order:

[['InputLayer', [(None, 64)], 0], ['InputLayer', [(None, 30, 37)], 0], ['RepeatVector', (None, 30, 64), 0, 30], ['Bidirectional', (None, 30, 64), 17920], ['Concatenate', (None, 30, 128), 0], ['Dense', (None, 30, 10), 1290, 'tanh'], ['Dense', (None, 30, 1), 11, 'relu'], ['Activation', (None, 30, 1), 0], ['Dot', (None, 1, 64), 0], ['InputLayer', [(None, 64)], 0], ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh'], ['Dense', (None, 11), 715, 'softmax']]
Test failed at index 0
Expected value

['InputLayer', [(None, 30, 37)], 0]

does not match the input value:

['InputLayer', [(None, 64)], 0]

I suspect this may be due to a difference in the TensorFlow or Keras library versions (I am using the Coursera runtime, not a local one), rather than a problem in my implementation. Or perhaps not? How can this be fixed?

This was user error and a failure to follow instructions on my part. The problem was the order of the arguments to a concatenation: while it wouldn't affect model performance, it did make the grader unhappy. I still don't understand how it changed the order of the ops, though.
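One plausible explanation: the layer list in a model summary comes from a topological traversal of the computation graph, and that traversal visits a layer's inputs in the order they were passed in. Swapping the arguments to Concatenate therefore changes which InputLayer is discovered first, even though the resulting model is mathematically equivalent. Below is a minimal sketch in plain Python (no TensorFlow, and the names X, s0, collect_layers are illustrative, not the course's actual variables) of how swapping two inputs reorders the discovered layers:

```python
# Sketch of a depth-first layer collection, analogous to how a
# functional-API framework walks the graph to build a summary.
def collect_layers(node, seen=None, order=None):
    """Walk (name, inputs) tuples, recording the order each layer
    is first reached."""
    if seen is None:
        seen, order = set(), []
    name, inputs = node
    for parent in inputs:          # inputs visited in given order
        collect_layers(parent, seen, order)
    if name not in seen:
        seen.add(name)
        order.append(name)
    return order

# Two input layers, as in the assignment: X (the input sequence)
# and s0 (the initial decoder hidden state).
X  = ("InputLayer_X",  [])
s0 = ("InputLayer_s0", [])

# Only the argument order to the concatenation differs.
correct = ("Concatenate", [X, s0])
swapped = ("Concatenate", [s0, X])

print(collect_layers(correct))  # X's InputLayer listed first
print(collect_layers(swapped))  # s0's InputLayer listed first
```

Under this model, concatenating [s_prev, a] instead of [a, s_prev] would put the (None, 64) InputLayer ahead of the (None, 30, 37) one in the summary, which matches the failure at index 0 above.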

Thank you for sharing how you resolved it.