DLS Course 1 week 3

I started the assignment for week 3, and there the shape of the input layer is 5 and that of the output layer is 2. Shouldn’t they be 2 and 1 respectively, since in the diagram the input layer is shown with 2 neurons and the output layer with only one?


That is just a test case for the layer_sizes function. It is a general piece of code that should work for inputs of any size, right? There’s no rule that it only has to handle the dimensions that happen to agree with the “real” data.


Actually, the expected output showed 5 4 2, whereas I thought it should have been 2 4 1, since I had to use X.shape[0] and Y.shape[0]. But after reloading the notebook, all tests passed.


I have absolutely the same problem.

I wouldn’t agree with @HarshulGaba about Y.shape[0] though. But I do agree that the size of the input layer must be equal to 2.

See slide 35, “What happens if you initialize weights to zero?”, from the week 3 lectures. There it is explicitly written that n^[0] = 2. In other words, the size of the input layer equals the number of features. Following the same logic, we assign 4, the number of hidden neurons, to n_h in Exercise 2 of the programming assignment for week 3.
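Following that logic, a minimal sketch of such a function might look like the one below (the exact signature and the way n_h is fixed in the actual assignment may differ; this just illustrates the shape-based reasoning):

```python
import numpy as np

def layer_sizes(X, Y):
    """Return (n_x, n_h, n_y) for data matrices shaped (features, examples)."""
    n_x = X.shape[0]  # input layer size = number of rows of X (the features)
    n_h = 4           # hidden layer size, hard-coded to 4 as in the assignment
    n_y = Y.shape[0]  # output layer size = number of rows of Y
    return n_x, n_h, n_y

# With the "real" planar data (2 features, 1 output, 400 examples):
X = np.zeros((2, 400))
Y = np.zeros((1, 400))
print(layer_sizes(X, Y))  # -> (2, 4, 1)
```

With the real data this gives (2, 4, 1), but the same function returns whatever the shapes of its arguments dictate, which is exactly why a test case can feed it differently shaped inputs.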

What surprises me most is that the size of the input layer is 5 in the expected-output section. With my own eyes I see only 2 features in the input layer.

I would have insisted that it’s a mistake in the assignment had I not read the comment by @paulinpaloalto.

Dear @paulinpaloalto, your comment seems rather confusing to me. Are you suggesting that the tests for Exercise 2 are written for some random input layer, and that I must be extremely lucky for the randomly generated input layer to equal the input layer we are given in the assignment in order to pass the tests?

Thank you.


P.S. Reloading the notebook doesn’t help me pass the test.

I think you are confused by the assumption that all cases have the same number of features. That is simply not true. It depends on what the inputs are. When Prof Ng is giving a lecture, he may well not be talking about the specific data that happens to be used in the Week 3 Planar Data assignment.

As I commented earlier, the test case for layer_sizes has nothing to do with the “real” data in this assignment. It is a general purpose function. They are just writing a test case that checks your logic. If you fail the test, that means your code is wrong.


I have already figured out the solution.
Thank you and I’m sorry for bothering you.
P.S. I provided my explanation here. It is the same as yours.

Look at the logic of the test case: it supplies its own version of X and Y that have nothing to do with the “real” data. They are the variables t_X and t_Y that end up being passed to layer_sizes to conduct the test. It is just a “unit test” for that function which is self-contained. It is always a mistake if you are referencing global variables in the scope of your functions.
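For instance, a self-contained unit test along these lines would explain the 5 4 2 expected output (the shapes of t_X and t_Y here are chosen to match that output, but the exact contents are illustrative, not the grader’s actual code):

```python
import numpy as np

def layer_sizes(X, Y):
    """Compute layer sizes from the shapes of the arguments, not from globals."""
    n_x = X.shape[0]  # number of features = rows of X
    n_h = 4           # hidden layer size is fixed at 4 in this exercise
    n_y = Y.shape[0]  # number of outputs = rows of Y
    return n_x, n_h, n_y

# The test supplies its own t_X and t_Y -- nothing to do with the planar data.
t_X = np.random.randn(5, 3)  # 5 "features", 3 "examples"
t_Y = np.random.randn(2, 3)  # 2 "outputs", 3 "examples"
assert layer_sizes(t_X, t_Y) == (5, 4, 2)
```

A correct implementation passes because it reads the shapes of its arguments; an implementation that references the global X and Y would return (2, 4, 1) here and fail the test.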


I’m sorry, my bad. I admit the mistake is utterly silly. I’m sorry for bothering people with this.

Your posts are quite helpful, since I had the same questions. The global variables X_shape and Y_shape in exercise #4 are indeed quite confusing.