Week 3, Planar_data_classification_with_one_hidden_layer, bad test case?

Hi everyone. I’m working through the assignment for week 3 and running into a test case that I believe is an error. Section 4.1 has us outputting the layer sizes for our little network. Earlier, we find that the shape of X is (2, 400): two input features and four hundred samples. With two input features, the input layer should have two nodes; the diagram at the start of section 4 shows this as well. As a binary classifier, we only need one output node. So the result should be (2, 4, 1). This is what my code outputs, but for some reason the test case claims it should be (5, 4, 2): “AssertionError: Wrong result. Expected (5, 4, 2) got (2, 4, 1)”. Is this a holdover from a previous version of the notebook, or am I missing something?

The following exercise’s test case uses 2 input features as well. Chalking this up as a holdover from a previous notebook.

You’re just missing the point that the routine layer_sizes is a general-purpose function, meaning that it should work with any parameters; they don’t have to match the particular data that we happen to have here.

We always strive to write general or “reusable” code here. What’s the purpose of even formulating that logic as a separate callable function if you’re just going to hard code everything to size 2? Suppose the next dataset that I want to play with has 4 input features or 12288?
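To illustrate the point about generality, here is a minimal sketch of what such a reusable function looks like (the function name and signature match the test code quoted later in this thread; the fixed hidden size of 4 is what this assignment uses):

```python
import numpy as np

def layer_sizes(X, Y):
    """Derive layer sizes from the data shapes rather than hardcoding them.

    X -- input data of shape (n_x, m)
    Y -- labels of shape (n_y, m)
    """
    n_x = X.shape[0]  # number of input features
    n_h = 4           # hidden layer size, fixed in this assignment
    n_y = Y.shape[0]  # number of output units
    return (n_x, n_h, n_y)

# Works for the assignment data (2 features, 1 output)...
print(layer_sizes(np.random.randn(2, 400), np.random.randn(1, 400)))  # (2, 4, 1)

# ...and equally well for a hypothetical dataset with 12288 features:
print(layer_sizes(np.random.randn(12288, 209), np.random.randn(1, 209)))  # (12288, 4, 1)
```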

Hi @paulinpaloalto,
Just to clarify, my function uses X.shape to get the shape of my X tensor, and then I select the appropriate index; I’m not just hardcoding numbers. The issue I’m pointing at is that the assert statement in the test is outdated. Not sure why we’re jumping to animosity right out of the gate, but you do your thing. It’s a small bit of code; you could try it out in the notebook and see if you get the same result. Just calling out a potential issue to be helpful.

Hello @Matt_Reed,

Just sharing what I am seeing in my notebook.

The test case uses another set of variables, t_X and t_Y, with shapes t_X.shape = (5, 3) and t_Y.shape = (2, 3).

The result should thus be (5, 4, 2).

If your t_X and t_Y have different shapes, then perhaps you might want to get an updated version of the testCases_v2.py script file since it is where the layer_sizes_test_case() function is defined.

If your test code cell is different from mine, then perhaps the notebook needs to be refreshed.


Hi, Matt.

My apologies if the tone of my response did not come off correctly. I was not trying to express animosity. It’s just that these courses have been in operation for more than 5 years at this point and this question about why the sizes on that test case don’t match the real data comes up at least a couple of times per month. So I just typed away without thinking about it too much and it looks like your issue may be more subtle than the usual case.

I did not have any trouble getting the test cases to pass here. You can actually examine the test case code for yourself by clicking “File → Open” and then opening the file public_tests.py. But I’ll save you the trouble and here’s what you will find:

def layer_sizes_test(target):
    X = np.random.randn(5, 3)
    Y = np.random.randn(2, 3)
    expected_output = (5, 4, 2)
    output = target(X, Y)
    assert type(output) == tuple, "Output must be a tuple"
    assert output == expected_output, f"Wrong result. Expected {expected_output} got {output}"
    X = np.random.randn(7, 5)
    Y = np.random.randn(5, 5)
    expected_output = (7, 4, 5)
    output = target(X, Y)
    assert type(output) == tuple, "Output must be a tuple"
    assert output == expected_output, f"Wrong result. Expected {expected_output} got {output}"
    print("\033[92mAll tests passed!")

So you can see that there are two test cases, one that generates output of (5, 4, 2) and one that is (7, 4, 5) to test the full generality of the implementation.

So if your code fails any of those tests, then the question is why. The method you describe of referencing the “shape” attribute of the local parameter X within the local scope of layer_sizes sounds correct to me. So there must be something else going on here …

Please let us know if what I said above is not enough of a clue to figure out the nature of the problem. If need be, we can share solution code, but through the private DM channel, rather than just posting it on a public thread.

Hi @paulinpaloalto and @rmwkwok,
Thanks for your responses guys. I appreciate your patience and help, here. And apologies to you Paul. I should have done a better job clearly presenting my issue; your conclusion was completely understandable, and I didn’t set the conversation up for success. I’ll make sure to include code snippets in the future to reduce ambiguity. Any additional suggestions are welcome, as well.

Thanks for confirming that the test case was functioning appropriately for both of you; that obviously points to a “me” issue. Turns out I fooled myself into thinking I had used the .shape attribute in my function, when I was actually using the variables I had set earlier in the notebook, shape_X and shape_Y. They sounded the same in my internal monologue and slipped right past me. The rest of the notebook went without a hitch, so I reasoned that it was some kind of versioning issue. Glad you two were able to set me straight on the matter! Thanks again for taking a look with me. Cheers
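For anyone hitting the same wall, here is a minimal reconstruction of this kind of bug (the variable and function names are illustrative; the assignment defines shape_X and shape_Y earlier in the notebook, so accidentally referencing them inside the function silently reads those globals instead of the arguments):

```python
import numpy as np

# Variables set earlier in the notebook, from the real dataset:
shape_X = (2, 400)
shape_Y = (1, 400)

def layer_sizes_buggy(X, Y):
    # Bug: reads the notebook-level globals, so the result is
    # always (2, 4, 1) regardless of which X and Y are passed in.
    return (shape_X[0], 4, shape_Y[0])

def layer_sizes_fixed(X, Y):
    # Fix: derive the sizes from the function's own arguments.
    return (X.shape[0], 4, Y.shape[0])

t_X = np.random.randn(5, 3)
t_Y = np.random.randn(2, 3)
print(layer_sizes_buggy(t_X, t_Y))  # (2, 4, 1) -- fails the test
print(layer_sizes_fixed(t_X, t_Y))  # (5, 4, 2) -- passes
```

Because Python happily resolves names in the enclosing scope, no error is raised; the only symptom is the test failure.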

Please note that sharing your code is against the community code of conduct. You can share your full error and it is (in most cases) enough to find out the bug. However, if mentors need to see your code, they will ask for a private message.


Great tip; thanks @saifkhanengr

Just wrapped up the specialization; thanks for the help, guys! I enjoyed the program a lot and appreciated having the space to turn to whenever I ran into issues. Cheers

Congratulations on completing everything! Best of luck on your continuing DL adventures. There are also general topics here for non-course specific discussions, so keep in touch!

Thanks @paulinpaloalto! I will definitely do so!

Hi All, I just came across this issue and jumped to the same conclusion that there must be a mistake in the test case.

I think the root cause of the issue in this specific case is that there is a bit of ambiguity in the exercise description. The layer_sizes function is introduced in the context of the specific Neural Network shape described just above, and there is no explicit guidance that the function should be designed to support different NN shapes.

As such it’s easy to conclude that the function is something of a throwaway where the dimensions can be hardcoded, rather than derived from the matrix shapes. There is a hint along these lines, but I took this to be a general statement about the X and Y in the context of the exercise, rather than the parameters of the function.

Either way thanks for the responses - all good now.
