[Week 3] Machine Translation: Code help

For the modelf function, I cannot figure out for the life of me why my code does not produce the right output. I followed the instructions, but I keep getting this as my output:

```
[['InputLayer', [(None, 30, 37)], 0], ['Bidirectional', (None, 30, 64), 17920], ['Dense', (None, 30, 10), 650, 'tanh'], ['Dense', (None, 30, 1), 11, 'relu'], ['Activation', (None, 30, 1), 0], ['Dot', (None, 1, 64), 0], ['InputLayer', [(None, 64)], 0], ['InputLayer', [(None, 64)], 0], ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh'], ['Dense', (None, 11), 715, 'softmax']]
```

Any hints or ideas on how to fix it?

Note, I deleted my previous reply, it was incorrect.

Does your one_step_attention() function use the "repeator()" helper function?

Because that's what causes "RepeatVector()" to appear in the model, and it's missing from your model summary.

Same for the "concatenator()" helper function.
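To illustrate the point about the helpers: here is a minimal sketch of what a one_step_attention() built from a repeator (RepeatVector) and a concatenator (Concatenate) looks like. The layer sizes (Tx=30, hidden sizes) are assumptions taken from the model summary above, not the assignment's exact code.

```python
import numpy as np
from tensorflow.keras.layers import (
    RepeatVector, Concatenate, Dense, Softmax, Dot,
)

# Shared layers (names mirror the thread's helpers; sizes are assumptions)
Tx = 30                                 # number of encoder timesteps
repeator = RepeatVector(Tx)             # this is what makes RepeatVector show up in the model
concatenator = Concatenate(axis=-1)     # and this is what makes Concatenate show up
densor1 = Dense(10, activation="tanh")
densor2 = Dense(1, activation="relu")
activator = Softmax(axis=1)             # softmax over the time axis, not the last axis
dotor = Dot(axes=1)

def one_step_attention(a, s_prev):
    """a: (m, Tx, 2*n_a) encoder activations; s_prev: (m, n_s) decoder state."""
    s_rep = repeator(s_prev)                 # (m, Tx, n_s)
    concat = concatenator([a, s_rep])        # (m, Tx, 2*n_a + n_s)
    e = densor1(concat)                      # (m, Tx, 10)
    energies = densor2(e)                    # (m, Tx, 1)
    alphas = activator(energies)             # attention weights over the Tx steps
    context = dotor([alphas, a])             # (m, 1, 2*n_a)
    return context
```

If either repeator or concatenator is skipped, both layers disappear from the model summary, which matches the output posted above.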

Yes, they're both in there. I passed all the tests for that one; it's just modelf that's acting funny.

Passing the tests in the notebook does not mean your code is perfect.

In fact the tests for one_step_attention() are very sparse.

Update for students who may find this thread:

Be sure that your one_step_attention() uses the correct arguments for each function call. The notebook’s test cases do not check for this.