I’ve checked and checked but can’t seem to get past:
AssertionError: Error in test. The lists contain a different number of elements
Any help would be appreciated.
The issue occurs in cell 8:
output = summary(model)
comparator(output, djmodel_out)
//jhc
Have you tried printing out the two model summaries? The error message is telling you that they have different numbers of layers, so it should be pretty clear what you are missing if you compare the two. E.g. add a cell like this (Insert → Cell Below):
print("Generated model:")
for index, a in enumerate(summary(model)):
print(f"layer {index}: {a}")
print("Expected model:")
for index, a in enumerate(djmodel_out):
print(f"layer {index}: {a}")
When I do that, I see 36 layers in both, and the comparator test then passes. Here’s my generated model output:
Generated model:
layer 0: ['InputLayer', [(None, 30, 90)], 0]
layer 1: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 2: ['Reshape', (None, 1, 90), 0]
layer 3: ['InputLayer', [(None, 64)], 0]
layer 4: ['InputLayer', [(None, 64)], 0]
layer 5: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 6: ['LSTM', [(None, 64), (None, 64), (None, 64)], 39680, [(None, 1, 90), (None, 64), (None, 64)], 'tanh']
layer 7: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 8: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 9: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 10: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 11: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 12: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 13: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 14: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 15: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 16: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 17: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 18: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 19: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 20: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 21: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 22: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 23: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 24: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 25: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 26: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 27: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 28: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 29: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 30: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 31: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 32: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 33: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 34: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 35: ['Dense', (None, 90), 5850, 'softmax']
How many layers does yours have? Why is it not 36?
Thanks, Paul. That helped. I just had 'outputs.append(a)' instead of 'outputs.append(out)'.
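For anyone else hitting this, the relevant loop body looks roughly like this (just a sketch, assuming the notebook’s helper names reshaper, LSTM_cell, and densor; the fix is the last line):

    for t in range(Tx):
        x = reshaper(X[:, t, :])                      # slice out step t, reshape to (1, n_values)
        a, _, c = LSTM_cell(x, initial_state=[a, c])  # one LSTM step
        out = densor(a)                               # Dense + softmax over the note values
        outputs.append(out)                           # append out, not a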
//james
Hi everyone, I have the same issue as James, and I did append out to outputs correctly, but I still don’t have 36 layers and I don’t know why. Please help. This is my model output:
Generated model:
layer 0: ['InputLayer', [(None, 30, 90)], 0]
layer 1: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 2: ['Reshape', (None, 1, 90), 0]
layer 3: ['InputLayer', [(None, 64)], 0]
layer 4: ['InputLayer', [(None, 64)], 0]
layer 5: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 6: ['LSTM', [(None, 64), (None, 64), (None, 64)], 39680, [(None, 1, 90), (None, 64), (None, 64)], 'tanh']
layer 7: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 8: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 9: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 10: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 11: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 12: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 13: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 14: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 15: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 16: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 17: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 18: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 19: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 20: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 21: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 22: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 23: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 24: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 25: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 26: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 27: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 28: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 29: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 30: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 31: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 32: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 33: ['TensorFlowOpLayer', [(None, 90)], 0]
layer 34: ['Dense', (None, 90), 5850, 'softmax']
Hi, Marina.
If you compare the summary output you show with the one I showed above, everything looks the same, but the number of TensorFlowOpLayers that you have is one less. In both cases, they start at layer 7, but yours end at layer 33 and mine end at layer 34. So now the question is what would cause that. I’m looking at my code now and trying to figure out where those OpLayers come from and where the number 28 comes from. At first glance, it doesn’t appear anywhere in the configuration. Maybe the closest is T_x, which is 30. What did you use as the “range” on the for loop there? The comment in the template makes it pretty clear that it should be Tx.
Ok, we have a theory here: I just tried using Tx - 1 as the range on that loop and I get exactly the problem you show. Note that indexing is all 0-based in Python, including arrays and loops. Try running this for loop and watch what happens:
for ii in range(5):
    print(f"ii = {ii}")
print(f"After loop ii = {ii}")
Here’s the output:
ii = 0
ii = 1
ii = 2
ii = 3
ii = 4
After loop ii = 4
Seems like the out layers are created in that append loop, but I’m not sure why there are 28 of them instead of 30.
Exactly. I’m still scratching my head about how that works, but we have enough scientific evidence to come up with a theory.
How about this as a theory: the “OP” there is the “append”, so you don’t need it on the first iteration and the last, which would give us 28? Or maybe the better way to say it is that on the first iteration you do the “append” but the compute graph is somehow clever enough to realize that is a NOP because the list is empty?
Seems like there should still be 30 items in the list, if the loop runs 30 times.
I printed the ‘t’ values just after the .append, and it runs 30 times (0 through 29).
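In case anyone wants to reproduce that check, it was just a print right after the append (with the loop variable named t, as in the template):

    outputs.append(out)
    print(f"t = {t}, len(outputs) = {len(outputs)}")  # prints t = 0 through t = 29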
Right, there have to be 30 items at the end, but the question is how many appends that takes. But I guess my theory still doesn’t fully make sense: if you don’t count the first one with the empty precursor as a “layer”, then you’d still end up with 29.
Hmmmmm … I’m clearly missing something here.
And the length of “outputs” goes from 1 to 30.
So creating the Model layer must do something funny with the number of TensorFlowOpLayer elements.
I agree about the 29, then maybe the last one is converted to the Dense layer.
The other thing that puzzles me here is why the LSTM layer only shows up once. Maybe that’s the nub of the issue. We need to think more deeply about how Keras interprets the compute graphs. Or to put it another way, what is the “OP” that it is describing in the OP layers.
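For whoever picks this up later, here is a minimal sketch of where I suspect those op layers come from (an assumption, not confirmed: in TF 2.x, each raw slicing op like X[:, t, :] applied to a Keras tensor inside the functional API gets wrapped as its own layer; the wrapper is named TensorFlowOpLayer in the notebook’s TF version and TFOpLambda in newer releases):

    from tensorflow.keras.layers import Input, Dense
    from tensorflow.keras.models import Model

    X = Input(shape=(30, 90))
    # three raw slicing ops on a Keras tensor -> three wrapped op layers
    outs = [Dense(4)(X[:, t, :]) for t in range(3)]
    m = Model(inputs=X, outputs=outs)
    m.summary()  # count the TensorFlowOpLayer / TFOpLambda entries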
To be continued (maybe) …
Hi Paul,
thank you for your prompt reply.
I do have Tx in range:
for t in range(1, Tx):
But that’s wrong, because it skips the first index, which is 0, right? Indexing is 0-based in Python, as I mentioned above. Look at the output of the for loop I showed above and compare it to this:
for ii in range(1, 5):
    print(f"ii = {ii}")
print(f"After loop ii = {ii}")
ii = 1
ii = 2
ii = 3
ii = 4
After loop ii = 4
The range(5) version has 5 iterations, but range(1, 5) gives you only 4 iterations.
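Or, side by side:

    print(list(range(5)))     # [0, 1, 2, 3, 4]  -> 5 iterations
    print(list(range(1, 5)))  # [1, 2, 3, 4]     -> 4 iterations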
Oh I must have misunderstood this guideline then:
It worked, thank you so much! I appreciate it
The point is that 1 to T_x is a total of T_x steps, right? And what I showed you above is that range(1, Tx) gives you Tx - 1 steps. Please examine the two examples I gave above again with that thought in mind.
This is a difference between standard math notation, which is typically 1 to n, and 0-based Python indexing.
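In other words, the math notation “steps 1 to n” corresponds to range(n) in Python, which is still n steps:

    n = 30
    steps = list(range(n))                   # indices 0 .. n-1
    print(len(steps), steps[0], steps[-1])   # 30 0 29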
I don’t know why I got stuck on that idea of range starting from 1. Thank you so much, Paul.