Hi, in the Week 1 programming assignment on regularization, there is a mistake in the check statement following cell 24. In cell 24, the correct initialization of D1 is "D1 = np.random.rand(20, X.shape[1])", and this gives an error in the succeeding check statement, but no error in the rest of the cells. Please correct that check statement.
Hi Mubsi, please find attached screenshots of the line causing the error, the error itself, and a screenshot showing that the rest of the code works despite this error. Only the check statement for this particular line is causing the problem.
(Solution code removed)
If I replace the 20 in the line "D1 = np.random.rand(20, X.shape[1])" with 2, that piece of code works, but the rest of the code throws an error.
(Solution code removed, as posting it publicly is against the honour code of this community, regardless if it is correct or not. You can share the errors you get)
Please help!
There is no mistake in the test case. The mistake is in your code. You hard-coded the shapes of D1 and D2 to have a fixed number of rows. It is given that:

In lecture, we discussed creating a variable d^{[1]} with the same shape as a^{[1]}.

So, your D1 and D2 should have the same shape as A1 and A2, respectively.
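To make the idea concrete, here is a minimal sketch of what deriving the mask shape from the activations looks like. The shapes and seed here are made up for illustration; the real A1 comes from forward propagation in the assignment:

```python
import numpy as np

# Hypothetical shapes for illustration -- the real A1 comes from forward propagation.
np.random.seed(1)
A1 = np.random.randn(20, 5)      # activations of layer 1: (units, examples)

keep_prob = 0.8

# Build the dropout mask from A1's own shape -- no hard-coded dimensions.
D1 = np.random.rand(A1.shape[0], A1.shape[1])  # same shape as A1
D1 = (D1 < keep_prob).astype(int)              # 1 = keep the unit, 0 = drop it
A1 = A1 * D1                                   # shut down the dropped units
A1 = A1 / keep_prob                            # inverted dropout: rescale

print(A1.shape)   # (20, 5) -- mask always matches, whatever the data size
```

Because D1 takes its shape from A1 rather than from a literal number, the same line keeps working if the test case (or future data) uses a different layer size.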
Thank you! That solved the problem! But there is still an error in the code for the check statement/test case; there is no reason why hard-coding should not work. The shape of A1 in the test case is incorrect. Kindly check if you are interested.
Sorry, but you are just wrong here about hard-coding. There is no reason to hard-code any of the dimensions here. The only reason to hard-code is if you literally have to. Otherwise you are purposely writing code that is not general. Why would you do that unless you were literally forced to?
So what do you mean that the shape of A1 in the test case is incorrect? Because it doesn’t agree with the “real” data being used here? But the point is we are writing code that can be used with any data, not just the specific data we happen to have here. That’s what I mean by “general” code. Or you could call it “reusable” code. Tomorrow you could reuse it with data having different dimensions.
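As a small illustration of what "general" means here, a mask helper that reads shapes from its input works unchanged on data of any size. The function name `make_mask` is hypothetical, not part of the assignment:

```python
import numpy as np

def make_mask(A, keep_prob):
    """Dropout mask shaped from the activations themselves, so the
    same code works for any layer size or number of examples."""
    return (np.random.rand(*A.shape) < keep_prob).astype(int)

# The same function works for differently sized data tomorrow:
np.random.seed(0)
print(make_mask(np.zeros((2, 3)), 0.8).shape)    # (2, 3)
print(make_mask(np.zeros((7, 100)), 0.8).shape)  # (7, 100)
```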
Hi @Karthik_Mishra,
Thank you for sharing the screenshot.
However, as it turns out, you are using hard-coded values. There is nothing wrong with the code or tests we have provided.
Please follow the instructions provided by the mentors to overcome this.
Thanks,
Mubsi
Hi Paul, thanks for your reply. I understand what you are saying about the advantages of not hard-coding, and I fully agree. Removing the hard-coding did fix the test-case failure I was facing. However, to make my point clear: certain values of the neural network we are currently building are already hard-coded. Look at the description of the function I am working on; it clearly specifies the shapes of W1, b1, W2, b2, W3, and b3. So the number of layers and the number of hidden units per layer are fixed, which also fixes the shapes of A1, A2, and A3. That is why I said there is no reason hard-coding should not work. I hope I was able to explain my point better, though I could be wrong. Thanks nonetheless for your time!
Thank you @Mubsi ! I followed the instructions shared by the mentor and it did solve the problem… Thanks!
The only dimension in a NN that must be hard-coded is the number of units in each hidden layer. This is because those are decisions made in the design of the NN model. Everything else is controlled by the size of the input data X and the number of outputs Y.
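A quick sketch of that division of labor (the layer sizes and data shapes below are invented for illustration): the hidden-layer sizes are the only design choices, and every parameter shape then follows from X and Y:

```python
import numpy as np

# Hypothetical setup: only the hidden-layer sizes are design choices.
hidden_units = [20, 3]           # the one thing you legitimately "hard-code"

X = np.random.randn(12, 200)     # input: (n_x features, m examples)
Y = np.random.randn(1, 200)      # output: (n_y, m)

# Everything else is controlled by the data:
layer_dims = [X.shape[0]] + hidden_units + [Y.shape[0]]
print(layer_dims)                # [12, 20, 3, 1]

params = {}
for l in range(1, len(layer_dims)):
    params["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
    params["b" + str(l)] = np.zeros((layer_dims[l], 1))

print(params["W1"].shape)        # (20, 12) -- rows from design, columns from X
```

Change X to have a different number of features and the same loop still produces consistent shapes, which is exactly why only the hidden-unit counts deserve literals.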
The example of hard-coding you reference in the forward_propagation_with_dropout() function (in the docstring) is, frankly, needlessly confusing.
None of the values I've circled should be shown as integers there; they should (as is done in every other assignment in the Specialization) be shown using variables.
That would help eliminate this confusion.