Please note that the Machine Learning Specialisation and the Deep Learning Specialisation are two different specialisations.
You have posted your query in the MLS category, so I'm going to move it to the DLS category. In the future, please check where you are posting so that the relevant mentors can help you better.
Firstly, at the top of the assignment, under the Important Note on Submission to the AutoGrader, there's a point (5): you should not change the assignment code where it is not required, for example by creating extra variables.
But you have created a new variable called z in your Ex 5.
As instructed, please don't do that, since doing so will cause the grader to fail your submission.
Now, coming to the implementation of the variables A and cost in your Ex 5: there are several things wrong with both. Please look at the Hints in the description of Exercise 5 and try implementing them again, following the formulas shown in the hints.
For A, I made it similar to how it was detailed in Ex 3 for implementing the sigmoid function.
In the cost, y^(i) is Y, right?
My understanding so far is that by using np.sum, we won't need to iterate through Y to multiply the individual elements.
Similarly, I'm using a and A interchangeably, so A would be the vector with logistic regression (the sigmoid) applied to all the X values.
Your understanding of the variables is correct. Everything in lower case is a single element, and everything in upper case is the vector of all of them together.
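To make that concrete, here is a minimal sketch of the convention; the toy X and Y values and shapes are made up for illustration, not taken from the assignment:

```python
import numpy as np

# Assumed toy shapes: 2 features, 3 training examples (not the assignment's data).
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # upper case X: all examples stacked column-wise, shape (2, 3)
Y = np.array([[0, 1, 1]])         # upper case Y: all labels, shape (1, 3)

i = 1
x_i = X[:, i]                     # lower case x^(i): just the i-th example, shape (2,)
y_i = Y[0, i]                     # lower case y^(i): just the i-th label, a scalar

print(x_i, y_i)                   # [2. 5.] 1
```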
You implemented A correctly. But you already implemented the sigmoid function in Ex 3, so you don't have to implement it again here; instead, call that function.
For cost, the instructions clearly mention using the np.dot function.
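For reference, here is a generic vectorised logistic-regression sketch, not the assignment's solution code; the toy w, b, X, Y values are assumptions for illustration. It shows a sigmoid helper being called for A, and the cost written with np.dot, which gives the same result as the np.sum form:

```python
import numpy as np

def sigmoid(z):
    """Element-wise sigmoid, the kind of helper Ex 3 already defines."""
    return 1 / (1 + np.exp(-z))

# Toy data, purely for illustration (2 features, 3 examples).
w = np.array([[0.1], [0.2]])
b = 0.5
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
Y = np.array([[0, 1, 1]])
m = X.shape[1]

# A: call the existing sigmoid helper instead of re-deriving 1/(1 + e^-z) inline.
A = sigmoid(np.dot(w.T, X) + b)    # shape (1, 3)

# Cross-entropy cost, written two equivalent ways:
cost_sum = -(1 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))
cost_dot = -(1 / m) * (np.dot(Y, np.log(A).T) + np.dot(1 - Y, np.log(1 - A).T)).item()

print(np.isclose(cost_sum, cost_dot))  # True
```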
You can see the changes I've made in Ex 5 of your notebook for better understanding.
In Python, 1 is not the same thing as 1. or 1.0: the former is an integer and the latter are floating point numbers. In Python 2.x, if m was an integer type and you wrote 1/m, you would end up with 0, because when the numerator and denominator are both integers the result is coerced to an integer. So the correct way to write that in Python 2.x would have been 1./m or 1.0/m, which forces the result to floating point. But this is not necessary any more, because they fixed that "bug" in Python 3.x and it no longer matters: either way is correct.
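A quick illustration of that difference under Python 3 (m = 4 is just an example value):

```python
# Python 3 behaviour: / is true (float) division, so 1/m works even when m is an int.
m = 4
print(1 / m)    # 0.25 (float result, even though both operands are ints)
print(1. / m)   # 0.25 (the explicit float numerator gives the same thing)
print(1 // m)   # 0    (// is the separate floor-division operator in Python 3)
```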
But the minus sign is still required, since the log values are all negative, right?
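As a quick numerical check of that point (the A and Y values below are made up for illustration):

```python
import numpy as np

# Made-up predicted probabilities and labels, purely for illustration.
A = np.array([[0.9, 0.8, 0.2]])
Y = np.array([[1, 1, 0]])
m = Y.shape[1]

per_example = Y * np.log(A) + (1 - Y) * np.log(1 - A)
print(per_example)                      # every entry is negative, since log of a value in (0, 1) < 0
print(-(1 / m) * np.sum(per_example))   # ~0.18 -- the leading minus sign makes the cost positive
```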