Neural_machine_translation_with_attention_v4a

The layer “concatenate” has multiple inbound nodes, with different output shapes. Hence the notion of “output shape” is ill-defined for the layer. Use get_output_shape_at(node_index) instead.

This error pops up when trying to print the summary of the model.
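As the message says, a layer that has been called more than once has one output shape per call (per "inbound node"), so you have to query each node separately. A minimal debugging sketch, not assignment code — the helper name and the default layer name "concatenate" are just assumptions here — assuming the Keras 2-style API the notebook uses:

def print_shared_layer_shapes(model, layer_name="concatenate"):
    # List the output shape recorded at each call (inbound node) of a
    # shared layer, instead of the single, ill-defined `output_shape`.
    layer = model.get_layer(layer_name)
    node_index = 0
    while True:
        try:
            shape = layer.get_output_shape_at(node_index)
        except ValueError:
            break  # no more inbound nodes
        print("node", node_index, "->", shape)
        node_index += 1

If the shapes printed for different nodes disagree, the shared layer is being fed differently-shaped tensors on different calls, which is exactly the situation the error describes.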

def modelf(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):

    # mentor edit: code removed

    return model

There is an error in your one_step_attention() function. That’s where “concatenator()” is used.

And please do not post your code on the Forum. The Honor Code does not allow that. I have edited your message to remove the code.
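For later readers: the message itself is generic Keras behaviour, so here is a small standalone toy example (made-up tensor shapes, nothing from the assignment, assuming a TF 2.x / Keras 2-style API) showing how one shared Concatenate layer ends up with multiple inbound nodes whose output shapes disagree, which is what summary() then complains about:

from tensorflow.keras.layers import Concatenate, Input

# One shared Concatenate layer, called twice.
concatenator = Concatenate(axis=-1)

x  = Input(shape=(10, 32))
s1 = Input(shape=(10, 64))
s2 = Input(shape=(10, 16))

# Two calls with differently-shaped inputs create two inbound nodes:
# (None, 10, 96) for the first call and (None, 10, 48) for the second,
# so a single "output shape" for the layer is no longer well defined.
out_a = concatenator([x, s1])
out_b = concatenator([x, s2])

print(concatenator.get_output_shape_at(0))  # (None, 10, 96)
print(concatenator.get_output_shape_at(1))  # (None, 10, 48)

The fix in the assignment is typically not to work around the shape query, but to make sure one_step_attention() passes the intended tensors to concatenator() so that every call produces the same concatenated shape.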