Week 2 Assignment 1, Exercise 3

I’m confused on why I’m getting the following error in the 3rd exercise of assignment 1:

Test failed 
 Expected value 

 ['Conv2D', (None, 15, 15, 256), 16640, 'valid', 'linear', 'GlorotUniform'] 

 does not match the input value: 

 ['Conv2D', (None, 15, 15, 256), 16640, 'same', 'linear', 'GlorotUniform']

I double-checked my implementation several times, and it seems to match the instructions provided. Has anyone else run into this?

It looks like you specified “same” padding, but the instructions seem to require “valid” padding on that particular layer. Are you sure you checked that in your “double checking” process? I guess you can argue that your dimensions are the same as expected, so it shouldn’t matter (e.g. for a 1 x 1 convolution, the two have the same result). But the test case is being very specific about what it doesn’t like.
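To see concretely when the two paddings coincide, here is a small standalone sketch (my own illustration, not the assignment code) that computes conv output sizes using the standard formulas TF uses: "valid" gives floor((n - k) / s) + 1 and "same" gives ceil(n / s).

```python
import math

def conv_output_size(n, k, s, padding):
    """Output size of a convolution over an input of size n
    with kernel size k, stride s, and the given padding mode."""
    if padding == "valid":
        # no padding: the kernel must fit entirely inside the input
        return math.floor((n - k) / s) + 1
    elif padding == "same":
        # enough zero-padding so the output covers every input position
        return math.ceil(n / s)
    raise ValueError(f"unknown padding: {padding}")

# For a 1 x 1 kernel with stride 1, the two paddings give the same size:
print(conv_output_size(15, 1, 1, "valid"))  # 15
print(conv_output_size(15, 1, 1, "same"))   # 15

# For a 3 x 3 kernel with stride 1 they differ:
print(conv_output_size(15, 3, 1, "valid"))  # 13
print(conv_output_size(15, 3, 1, "same"))   # 15
```

So the output shapes match here, but the test still compares the padding string itself, which is why "same" fails even when the dimensions come out right.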

1 Like

Thanks for responding! Yeah, I believe I double-checked, and I’m using “valid” as specified. The only place in my code where padding would even be an option to specify is the “AVGPOOL” portion, right? Am I mistaken? I’m assuming I’m missing something really silly.

Note that the layer in question is a Conv2D layer, not a pooling layer, right? So I think you are looking in the wrong place. We’re doing computer programming here. This kind of detail matters. The difference between , and ; can spoil your whole afternoon.

I’m getting the same error and have run out of ideas.

I looked at all the Conv2D layers, and I’m only using “same” padding in the two places the instructions specifically tell me to: exercise 1 component 2 and exercise 2 component 2. Everywhere else I’m using “valid” padding.

Any pointers would be appreciated.

Hi @Marc_Shepard!

Kindly post your full error.

Best,
Saif.

Oh, never mind. The bug was in Jupyter; when I restarted the kernel (rather than just re-running cells from top to bottom) the problem went away. Odd, but I’m good to go now. BTW - here was the full error (which I no longer get):

Test failed
Expected value

['Conv2D', (None, 15, 15, 256), 16640, 'valid', 'linear', 'GlorotUniform']

does not match the input value:

['Conv2D', (None, 15, 15, 256), 16640, 'same', 'linear', 'GlorotUniform']

AssertionError                            Traceback (most recent call last)
Input In [14], in <cell line: 7>()
      3 from outputs import ResNet50_summary
      5 model = ResNet50(input_shape = (64, 64, 3), classes = 6)
----> 7 comparator(summary(model), ResNet50_summary)

File /tf/W2A1/test_utils.py:23, in comparator(learner, instructor)
     18 if tuple(a) != tuple(b):
     19     print(colored("Test failed", attrs=['bold']),
     20           "\n Expected value \n\n", colored(f"{b}", "green"),
     21           "\n\n does not match the input value: \n\n",
     22           colored(f"{a}", "red"))
---> 23 raise AssertionError("Error in test")
     24 print(colored("All tests passed!", "green"))

AssertionError: Error in test

I’m happy to post my code as well if you like.

I am glad you were able to solve your problem.
Note: no need to post your code. Posting code is not allowed.

Best,
Saif.

My guess is that you ran afoul of how the notebooks work: just typing new code in a function cell and then calling it again does nothing. It just runs the old code again. You have to actually click “Shift-Enter” on the function itself to get the new code loaded into the runtime image. You can easily demonstrate this to yourself now that you know about it. So when you restarted the kernel and reran everything it made things consistent again.

Test failed
Expected value

['Conv2D', (None, 32, 32, 64), 9472, 'valid', 'linear', 'GlorotUniform']

does not match the input value:

['Conv2D', (None, 32, 32, 64), 9472, 'valid', 'linear', 'RandomUniform']

AssertionError                            Traceback (most recent call last)
Input In [35], in <cell line: 7>()
      3 from outputs import ResNet50_summary
      5 model = ResNet50(input_shape = (64, 64, 3), classes = 6)
----> 7 comparator(summary(model), ResNet50_summary)

File /tf/W2A1/test_utils.py:21, in comparator(learner, instructor)
     16 if tuple(a) != tuple(b):
     17     print(colored("Test failed", attrs=['bold']),
     18           "\n Expected value \n\n", colored(f"{b}", "green"),
     19           "\n\n does not match the input value: \n\n",
     20           colored(f"{a}", "red"))
---> 21 raise AssertionError("Error in test")
     22 print(colored("All tests passed!", "green"))

AssertionError: Error in test

When I replace GlorotUniform with RandomUniform, I get the same error with the expected and actual values swapped.

hi @sahermaan_712

Kindly create a new post rather than commenting on older threads. When you create your post, you can always link to any thread you think is similar to your issue; I noticed you have been commenting on older threads.

Creating a new topic for your query or issue is also for your benefit: you retain control over your own posts. If you comment on someone else’s post and they remove their account, your comments get deleted along with it.

Regards
DP

1 Like

Are you sure that was on the same layer as the previous error? I’ll bet it wasn’t. The two initializers are used in different places in this model, right? Programming is a game of details. You have to get them all right, and there are lots of them in this assignment.

@paulinpaloalto I think we were able to work past this in DM earlier this morning. See their most recent post for where we are stuck now-- the other guys are working on it.

1 Like

@paulinpaloalto actually this gives me an opportunity to ask you a question :smiley: They were using TF GlorotUniform, but the given header pulls in Keras glorot_uniform-- are these exactly the same?

Also, in their code they were instantiating the initializer once, assigning it to a variable, and then using it in multiple places, i.e.

IAmAVariable = glorot_uniform(seed = 0)

statement statement statement IAmAVariable
statement IAmAVariable

etc.

I told them I didn’t think this was good practice, and that they should instantiate the initializer again each time, only when they need it and ‘in place’.

Hi, Anthony.

Sorry, I got distracted and forgot to answer this yesterday when you asked it. Well, with TF functions in general, you have to be careful and read the doc pages. There are frequently different versions of the same basic function that can have slightly different properties. That is particularly true with, e.g., the various forms of cross entropy loss functions. Some of them give you more flexibility than others. So you have to read the documentation.

But even reading the documentation is a hassle, since we’re actually using TF 2.9.1 in this assignment. So what is true in the current version of TF (2.16.xxxx) may not have been true in 2.9.1. Sigh. But you can select the version of the TF docs you want and I found this page which says that glorot_uniform is just a “shortcut” for the GlorotUniform function:

(screenshot of the TF docs page for glorot_uniform)

You can see from the screenshot that it’s actually version 2.9.3, but that’s close enough and I wouldn’t expect something like that to change between 2.9.1 and 2.9.3.
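For what it’s worth, both names draw from the same distribution: U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). Here is a quick sketch of that bound (my own illustration of the formula, not the TF source):

```python
import math

def glorot_uniform_limit(fan_in, fan_out):
    """Bound of the uniform distribution used by Glorot/Xavier
    uniform initialization: weights are drawn from U(-limit, limit)."""
    return math.sqrt(6.0 / (fan_in + fan_out))

# E.g. a 1x1 conv mapping 256 input channels to 64 output channels:
limit = glorot_uniform_limit(256, 64)
print(round(limit, 4))  # 0.1369
```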

Well, if you instantiate it only once and use it in multiple places, you will get different behavior: there will basically be only one random sequence, shared between multiple parts of the model. Maybe in real life this doesn’t matter, but if you’re setting seeds (as they do here) to get reproducible test results, then it does matter whether you instantiate the initializer fresh at each point where you use it in a layer.

In other words, it probably does matter here, even if it doesn’t in general. It’s just not a good practice to “go off on your own” and start reinterpreting their instructions. Even if what you are doing is correct in some larger sense, it will usually cause trouble with the tests and the graders.

1 Like