Problem 2 with Supervised Machine Learning Week 3 lab C1_W3_Logistic_Regression

Problem 2: In the Supervised Machine Learning Week 3 practice lab on Logistic Regression, in cells 27-29, we are told to just run the code without altering it, and even that gives incorrect results. The expected output does not converge, and the boundary line in cell 29 is very far from where it should be.
Expected result: (screenshot)

Actual result: decision boundary from the code (screenshot)
Remember, this is the code supplied by Coursera, with no changes by the user.
I have attached my Jupyter notebook.
{moderator edit: code removed}

Please don’t share your code on the forum; that’s not allowed by the community standards.

As I replied on your other thread, your values for w and b are not correct. That’s why the boundary is not in the correct location.

To get the correct graph, you have to implement the code for compute_gradient() correctly.

The two cells after compute_gradient() have tests for the dj_dw and dj_db values. Are you getting the correct results?

As I replied in the other thread, I implemented the code EXACTLY as directed in the hints, and my code PASSED the tests. So the problem is not in my code; there is something wrong in their implementation.

Please post a screen capture image that shows your results for the two compute_gradient() tests.

OK, thanks
Here is the compute_gradient test with w & b initialized to zero:

The compute_gradient test with w & b initialized to non-zero values:

The compute_gradient_reg test with non-zero w & b:

Two points to start with:

  1. Make sure you do not rename your notebook. That will make the grader unhappy.
  2. Python is extremely picky about how you indent your code.

So, your compute_gradient() function does not return the correct values for dj_db and dj_dw in either test. See your results from Cell [14], for example; those are not the correct values.

None of the rest of the assignment is going to work correctly until you fix that.

Here are some things to check:

  • does your sigmoid() function pass its tests? (a reference sketch of the standard sigmoid follows this list)
  • did you use the sigmoid() function when you computed f_wb?
  • did you use the hints for the compute_gradient() function?
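For comparison only, here is a minimal sketch of the usual numpy sigmoid. This is not the lab's graded code, just the standard form so you can check your own implementation against it:

```python
import numpy as np

def sigmoid(z):
    # Standard logistic sigmoid; z can be a scalar or a numpy array.
    return 1.0 / (1.0 + np.exp(-z))
```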

In that image, I’ve drawn arrows at the three lines you need to complete. There are hints for all three.

Note that if you’re good at matrix algebra, there is a much simpler way to compute the gradients, one that doesn’t require complicated for-loops. You only need your sigmoid() function, along with the np.dot() function a couple of times and np.sum() once. It’s only three lines of code.

  • The first one computes f_wb.
  • The next two lines come straight from the matrix implementations of these equations:

dj_db is a summation of the difference of two vectors.
dj_dw is a summation of the product of two vectors; that’s a dot product.
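To make that concrete, here is a rough sketch of the vectorized idea for generic logistic-regression gradients. It is not the lab's graded code; the names X, y, w, b, and compute_gradient_vectorized are just illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient_vectorized(X, y, w, b):
    m = X.shape[0]                    # number of training examples
    f_wb = sigmoid(np.dot(X, w) + b)  # predictions for every example at once
    err = f_wb - y                    # error vector of length m
    dj_db = np.sum(err) / m           # scalar: mean of the errors
    dj_dw = np.dot(X.T, err) / m      # vector: one gradient entry per feature
    return dj_db, dj_dw
```

The core really is three lines: compute f_wb, then one np.sum() for dj_db and one np.dot() for dj_dw.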

One further hint about indentation:
Here is an area that students often overlook. Check the indentation in this area very carefully. The division by m should happen only once: it is outside the for-loop over ‘i’, just before the return statement.
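For illustration only (again, not the lab's exact code, and the names are assumptions), a rough sketch of the loop-based structure with the division by m in the right place:

```python
import numpy as np

def compute_gradient_loops(X, y, w, b):
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        # prediction and error for the i-th example
        f_wb_i = 1.0 / (1.0 + np.exp(-(np.dot(X[i], w) + b)))
        err_i = f_wb_i - y[i]
        dj_db += err_i
        for j in range(n):
            # gradient contribution of the i-th example for the j-th attribute
            dj_dw[j] += err_i * X[i, j]
    # Divide by m ONCE, outside the for-i loop, just before returning.
    dj_dw = dj_dw / m
    dj_db = dj_db / m
    return dj_db, dj_dw
```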

OK, thanks. Yes, it was the fussy indentation in Python. The final calculations of dj_dw and dj_db were within the for i in range(m) loop. Once I moved them out, everything worked fine. Thank you.

Can you tell me what is different in my code from the explanation by Tmosh or from your own code? Kindly help me rectify this.
{code removed by mentor}

Hi @Syed_Asghar_Abbas_Na

The hint asked you to “calculate the gradient from the i-th example for j-th attribute”. Apparently, you didn’t completely follow the hint.

Raymond

PS: I am removing your code since we can’t share it here.