In Exercise 3, when implementing the gradient for logistic regression, my code runs without error, but the results don't match the expected output when I check the implementation. I've reviewed the hints for logic issues and checked for indentation errors, but I can't figure out where I'm going wrong.
There seems to be a problem with how the gradients dj_db and dj_dw are computed in your implementation. Make sure the contributions from all m training examples are summed correctly: the difference in magnitude between your output and the expected output suggests the sum over examples is off. If you used loops, check that the accumulation happens on every iteration and that the final division by m happens only once. If you used vectorized operations, confirm that broadcasting is correct and the operations apply across all examples.
Thanks for the response. I had made an indentation error that caused the gradients to be divided by m on every pass through the loop instead of once at the end. Thank you for the assist. The code is working fine now.
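For anyone hitting the same symptom, here is a minimal sketch of the looped gradient computation with the division placed correctly. This is an illustration, not the assignment's exact code: the names compute_gradient and sigmoid are assumed for the example.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def compute_gradient(X, y, w, b):
    """Gradient of the logistic-regression cost over m examples.

    X: (m, n) feature matrix, y: (m,) labels, w: (n,) weights, b: scalar bias.
    """
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        err = sigmoid(np.dot(X[i], w) + b) - y[i]  # prediction error for example i
        dj_dw += err * X[i]                        # accumulate over all m examples
        dj_db += err
    # Divide by m ONCE, after the loop. Indenting these two lines into the
    # loop body (the bug in this thread) divides the running sums on every
    # iteration and shrinks the gradients far below the expected values.
    dj_dw /= m
    dj_db /= m
    return dj_dw, dj_db
```

The equivalent vectorized form is `err = sigmoid(X @ w + b) - y`, then `dj_dw = X.T @ err / m` and `dj_db = err.mean()`, which makes the single division by m explicit.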