A problem with the modelf() function (A1 EX2)

Hi, I have a problem when testing the modelf() function (all the previous tests pass successfully). I followed the instructions but I got an error. The problem is that len(model.outputs) is expected to be 10, which is right in my case, but the test is still not passing.
****** This is the error:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
in
     35
     36
---> 37 modelf_test(modelf)

in modelf_test(target)
     32     assert len(model.outputs) == 10, f"Wrong output shape. Expected 10 != {len(model.outputs)}"
     33
---> 34     comparator(summary(model), expected_summary)
     35
     36

~/work/W3A1/test_utils.py in comparator(learner, instructor)
     16 def comparator(learner, instructor):
     17     if len(learner) != len(instructor):
---> 18         raise AssertionError("Error in test. The lists contain a different number of elements")
     19     for index, a in enumerate(instructor):
     20         b = learner[index]

AssertionError: Error in test. The lists contain a different number of elements
****** This is the function I used:

# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)

{moderator edit - solution code removed}

Please read the error trace a bit more carefully: that error is not talking about the length of the outputs of your model. That was the previous assertion, and you passed that one. The assertion that actually failed is about the number of layers in your model, which does not match the expected number of layers.

You can print out the layers of both models like this:

print("Generated model:")
for index, a in enumerate(summary(model)):
    print(f"layer {index}: {a}")
print("Expected model:")
for index, a in enumerate(expected_summary):
    print(f"layer {index}: {a}")

What is different about your model?


Generated model:
layer 0: ['InputLayer', [(None, 30, 37)], 0]
layer 1: ['InputLayer', [(None, 64)], 0]
layer 2: ['Bidirectional', (None, 30, 64), 17920]
layer 3: ['RepeatVector', (None, 30, 64), 0, 30]
layer 4: ['Concatenate', (None, 30, 128), 0]
layer 5: ['Dense', (None, 30, 10), 1290, 'tanh']
layer 6: ['Dense', (None, 30, 1), 11, 'relu']
layer 7: ['Activation', (None, 30, 1), 0]
layer 8: ['Dot', (None, 1, 64), 0]
layer 9: ['InputLayer', [(None, 64)], 0]
layer 10: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 11: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 12: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 13: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 14: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 15: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 16: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 17: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 18: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 19: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 20: ['Dense', (None, 11), 715, 'softmax']
layer 21: ['Dense', (None, 11), 715, 'softmax']
layer 22: ['Dense', (None, 11), 715, 'softmax']
layer 23: ['Dense', (None, 11), 715, 'softmax']
layer 24: ['Dense', (None, 11), 715, 'softmax']
layer 25: ['Dense', (None, 11), 715, 'softmax']
layer 26: ['Dense', (None, 11), 715, 'softmax']
layer 27: ['Dense', (None, 11), 715, 'softmax']
layer 28: ['Dense', (None, 11), 715, 'softmax']
layer 29: ['Dense', (None, 11), 715, 'softmax']
Expected model:
layer 0: ['InputLayer', [(None, 30, 37)], 0]
layer 1: ['InputLayer', [(None, 64)], 0]
layer 2: ['Bidirectional', (None, 30, 64), 17920]
layer 3: ['RepeatVector', (None, 30, 64), 0, 30]
layer 4: ['Concatenate', (None, 30, 128), 0]
layer 5: ['Dense', (None, 30, 10), 1290, 'tanh']
layer 6: ['Dense', (None, 30, 1), 11, 'relu']
layer 7: ['Activation', (None, 30, 1), 0]
layer 8: ['Dot', (None, 1, 64), 0]
layer 9: ['InputLayer', [(None, 64)], 0]
layer 10: ['LSTM', [(None, 64), (None, 64), (None, 64)], 33024, [(None, 1, 64), (None, 64), (None, 64)], 'tanh']
layer 11: ['Dense', (None, 11), 715, 'softmax']

The last two layers are repeated several times and I don't know why. Is there something wrong with my code?

Yes, there is something wrong with your code. Please take a more careful look at the instructions, both before the code and in the comments within the code. The way you have coded the LSTM layer has a very different effect than what they told you to do.


Thank you, I solved the problem: I had used LSTM() instead of post_activation_LSTM_cell().

Right! It’s a subtle point, but we have to keep in mind how the “Layer” functions work: when you invoke LSTM(), it returns a function, and then you call that function on the inputs. The way they did it, LSTM() is invoked once to create a single layer function, and then that same function is called on every iteration of the loop. The way you originally did it invokes LSTM() the same way they did, but on every iteration of the loop, so you get a different function (a new layer with its own weights) on each iteration.
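To make the difference concrete, here is a minimal sketch of the two patterns in Keras. The shapes, loop bound, and surrounding wiring are illustrative, not the assignment’s actual code:

import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense

context = Input(shape=(1, 64))  # illustrative stand-in for the attention context
s0 = Input(shape=(64,))         # initial hidden state
c0 = Input(shape=(64,))         # initial cell state

# Correct pattern: the LSTM layer object is created ONCE, outside the
# loop, so every iteration reuses the same layer and the same weights.
post_activation_LSTM_cell = LSTM(64, return_state=True)
output_layer = Dense(11, activation="softmax")

s, c = s0, c0
outputs = []
for t in range(10):
    s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])
    outputs.append(output_layer(s))

model = tf.keras.Model(inputs=[context, s0, c0], outputs=outputs)
# Exactly one LSTM layer appears, however many times the loop runs:
print(sum(isinstance(layer, LSTM) for layer in model.layers))  # -> 1

# Buggy pattern: writing LSTM(64, return_state=True)(context, ...) inside
# the loop constructs a brand-new layer object, with its own weights, on
# every iteration, which is why ten LSTM layers showed up in the summary.

The same reasoning applies to the output Dense layer, which is why it also appeared ten times in the generated summary above.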

Glad to hear you were able to solve it just based on the suggestions above!


Oh, thank you for the explanation! That’s why I found that LSTM was repeated several times in the previous model: a new one was created on each iteration of the loop. Interesting…
