Hi, this is my first post. All the previous times I got stuck I was able to fix things by reading the posts here on Discourse, but I have been stuck on this one for quite a while, so I'm giving up and asking for help.
I keep getting this error when running the `# UNIT TEST` cell for djmodel:
```
Test failed at index 2
Expected value

['Reshape', (None, 1, 90), 0]

does not match the input value:

['TensorFlowOpLayer', [(None, 90)], 0]

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-...> in <module>
      1 # UNIT TEST
      2 output = summary(model)
----> 3 comparator(output, djmodel_out)

~/work/W1A3/test_utils.py in comparator(learner, instructor)
     24                   "\n\n does not match the input value: \n\n",
     25                   colored(f"{b}", "red"))
---> 26             raise AssertionError("Error in test")
     27     print(colored("All tests passed!", "green"))
     28 

AssertionError: Error in test
```
Many thanks for any help or advice. I intentionally did not post my code.
You can see what the test is doing there, right? It compares your model against a list of the layers that should be part of it. Here are the first few lines of how they expect it to look:
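I'm going from memory here, so take this with a grain of salt; the `InputLayer` line is my assumption, but the two entries that matter line up with your error output:

```
['InputLayer', [(None, 30, 90)], 0]
['TensorFlowOpLayer', [(None, 90)], 0]
['Reshape', (None, 1, 90), 0]
```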
So it looks like you end up with two copies of the second layer for some reason, which produces that mismatch. How could that happen? Are you sure you used the `reshaper` function that they predefined for you, instead of trying to write out the Reshape yourself?
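To make the difference concrete, here is a sketch (the 90 matches your error output; the other shapes are my assumptions):

```python
from tensorflow.keras.layers import Input, Reshape

n_values = 90                      # matches the (None, 90) in your error
X = Input(shape=(30, n_values))    # Tx=30 is my assumption

# The starter code defines ONE shared instance, outside the loop:
reshaper = Reshape((1, n_values))

for t in range(2):                 # two steps are enough to see the effect
    x = X[:, t, :]                 # slice out step t -> shape (None, 90)
    x = reshaper(x)                # reusing the shared layer adds no new layer node

    # Writing it out yourself creates a brand-new Reshape layer object on
    # every iteration, so extra layers appear in summary(model):
    # x = Reshape((1, n_values))(x)
```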
Hi, thanks for the response. So the hypothesis is that I have two instances of layer 1 (the second layer)? For that layer I call `reshaper` on the variable `x`; I did not implement the Reshape command myself. Thanks for the help.
The other thing to check is that you are using the correct input variables at each layer (the output of the previous layer). The way it prints out the model is based on the “computation graph”, so if the connectivity of the graph is not correct that can also cause this sort of issue.
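As a generic functional-API illustration (not the assignment code): the printed summary is derived purely from which tensors flow into which layers.

```python
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Dense

inp = Input(shape=(4,))
h   = Dense(8)(inp)      # intended wiring: Input -> Dense(8) -> Dense(1)
out = Dense(1)(inp)      # bug: consumes inp instead of h

model = Model(inp, out)
model.summary()          # Dense(8) never appears: it is not on the graph path
```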
Hi, thanks for both of the suggestions. I must have made a mistake somewhere, but as far as I can see the reshaper call is in the correct step and I do feed the previous outputs into the next inputs. I must have overlooked something. I'll keep looking.
In the meantime I was able to fix my issue: it was a missing parameter, `initial_state=[a, c]`, in the LSTM layer call.
I found this by reading some other posts around here.
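For anyone who hits the same error, here is a minimal sketch of the pattern; the layer names and the sizes other than the 90 are my assumptions based on the notebook's conventions:

```python
from tensorflow.keras.layers import Input, LSTM, Reshape

n_values, n_a = 90, 64              # 90 matches the shapes above; n_a=64 is assumed

X  = Input(shape=(30, n_values))    # Tx=30 is my assumption
a0 = Input(shape=(n_a,))            # initial hidden state
c0 = Input(shape=(n_a,))            # initial cell state

reshaper  = Reshape((1, n_values))
LSTM_cell = LSTM(n_a, return_state=True)

a, c = a0, c0
for t in range(30):
    x = reshaper(X[:, t, :])
    # The fix: initial_state=[a, c] threads the previous step's states into
    # this one. Without it, every call starts from a zero state and the
    # computation graph (and hence summary(model)) comes out wrong.
    _, a, c = LSTM_cell(inputs=x, initial_state=[a, c])
```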
The more assignments I try to solve in this course, the more uncomfortable I feel with the topic. At the moment it seems next to impossible for me to develop my own models.
The number of errors I run into while solving the assignments is quite high, and most of them are not something I can resolve just by reading the Keras or TensorFlow documentation.
Glad to hear that you found the solution to this particular problem. It is definitely the case that the level of complexity keeps growing as we move through all these courses. ConvNets are way more complicated than the Feed Forward networks we learned about in Course 1. Then Sequence Models are more complicated than ConvNets: many more types of networks. And then you’ll learn about Attention Models in Week 4. So there is a lot of complex material to master, but stick with it. There is no shame in watching a given lecture more than once. I frequently find that I thought I understood it the first time, but then in the next lecture realize that I must have missed something and have to go back and listen to the earlier one again. Hang in there and see how you feel after getting through the whole course.
I have the exact same impression. I barely understand the steps of the assignments in this last course, and if I can't understand the steps, it is extremely unlikely that I can adapt them to real projects. I won't give up, though.
Yes, I completely agree. While Andrew's lectures are easy to follow, the programming exercises feel more like Python puzzles of figuring out what goes where, rather than an exercise in higher-level understanding of the entire process. Courses 1-4 were quite a bit better in that respect.