C1 W2 A2 "Logistic Regression with a Neural Network mindset", HELP

Hello all,

So, I’ve been trying to finish this assignment for a while, and I’m stuck on one specific part:

4.3 - Forward and Backward propagation

Exercise 5: propagate.

I keep getting an error and I don’t know what to do with it:

AssertionError

I changed a few things in the code and started getting a shape-mismatch error.

I assumed I needed to transpose an array here or there, but nothing worked.

I’m really confused about the requirements for that part.

Hi @Mustafa_H,

Please know that the Machine Learning Specialisation and the Deep Learning Specialisation are not the same, but two different specialisations.

You have posted your query in the MLS category. I’m going to move it to the DLS category, but in the future, please be aware of where you are posting so that the relevant mentors can better help you.

Best,
Mubsi

Will do!

Thanks for the heads up!

Hi @Mustafa_H,

Firstly, at the top of the assignment, under the Important Note on Submission to the AutoGrader, there’s point (5): You are not changing the assignment code where it is not required, like creating extra variables.

But you have created a new variable in your Ex 5 called z.

As instructed, please don’t do that, as it will cause the grader to fail.

Now, coming to your implementation of the variables A and cost in Ex 5: there are several things wrong with it. Please look at the hints in the description of Exercise 5 and try implementing them again, following the formulas shown there.


Best,
Mubsi

@Mubsi

OK, z is gone!

For A, I made it similar to how the sigmoid function was detailed in Ex 3.

In the cost, y^(i) = Y, right?

My understanding so far is that using np.sum, we won’t need to iterate through Y to multiply individual elements.
Similarly, I’m using a and A interchangeably, so A would be the vector with logistic regression applied to all values of X.

Hi @Mustafa_H,

Your understanding of the variables is correct. Everything in lower case refers to a single example, and everything in upper case is the vector of all of them together.

You implemented A correctly. But you already implemented the sigmoid function in Ex 3, so you don’t have to implement it again here; instead, call that function.

For cost, the instructions clearly mention using the np.dot function.
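For reference, here is a minimal sketch of the vectorized forward pass (the sigmoid helper stands in for the one you wrote in Ex 3; variable names are illustrative, not the notebook’s exact code):

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation, as implemented in Ex 3
    return 1 / (1 + np.exp(-z))

def propagate_cost(w, b, X, Y):
    """Forward pass: compute activations A and the logistic cost."""
    m = X.shape[1]                    # number of examples
    A = sigmoid(np.dot(w.T, X) + b)   # shape (1, m): reuse the Ex 3 function
    # Vectorized cross-entropy cost using np.dot instead of an explicit loop
    cost = -(1 / m) * (np.dot(Y, np.log(A).T) + np.dot(1 - Y, np.log(1 - A).T))
    return A, float(np.squeeze(cost))
```

Because Y has shape (1, m) and np.log(A).T has shape (m, 1), each np.dot collapses the sum over all m examples in one step.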

You can see the changes I’ve made in your Ex 5 of the notebook for better understanding.

Best,
Mubsi

Thank you very much.

I’m still a little unclear about the cost implementation, though. Why did (1/m) become (1.0/m)?

And why did we remove the negative sign from (1/m)?

In Python, 1 is not the same thing as 1. or 1.0: the former is an integer and the latter are floating point. In Python 2.x, if m was an integer type and you wrote 1/m, you would end up with 0, because when the numerator and denominator are both integers, the result is coerced to integer. So the correct way to write that in Python 2.x was 1./m or 1.0/m, which coerces the result to floating point. But this is not necessary any more, because they fixed that “bug” in Python 3.x and it no longer matters. Either way is correct.
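To make the difference concrete (running under Python 3; the old Python 2 behavior is noted in the comments):

```python
m = 4

# Python 3: `/` is true division, so integer operands still give a float
print(1 / m)    # 0.25

# `//` is floor division — this is what `/` used to do on ints in Python 2
print(1 // m)   # 0

# Writing the numerator as a float works the same in both versions
print(1.0 / m)  # 0.25
```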

But the minus sign is still required, since the log values are all negative, right?
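Right — since every activation lies in (0, 1), each log term is negative, and the leading minus sign is what makes the cost come out non-negative. A quick sanity check with made-up values:

```python
import numpy as np

# Illustrative predicted probabilities and labels (not the notebook's data)
A = np.array([[0.9, 0.2, 0.7]])
Y = np.array([[1, 0, 1]])
m = Y.shape[1]

# Each entry is log of a number in (0, 1), hence negative
log_terms = Y * np.log(A) + (1 - Y) * np.log(1 - A)

# The minus sign flips the (negative) average into a positive cost
cost = -(1 / m) * np.sum(log_terms)

print(log_terms)  # all entries negative
print(cost)       # positive
```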
