This is regarding:
/notebooks/C1_W3_Logistic_Regression.ipynb
I passed all the unit tests, but when I submit the programming assignment I get the following error for cell 18 (which is hardcoded by DeepLearning.AI): "Can't compile the student's code. Error: ValueError('x and y must have same first dimension, but have shapes (2,) and (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2)')"
I'm not sure what to do, since cell 18 is not editable by students.
Can you share a screenshot of the error you are getting for cell 18? Please share an image that includes cell 18 (which, from your description, seems to be a test cell) and the error encountered.
Deepti- Thanks. See the attached screenshot of cell 18 (a test cell with code supplied by Coursera); on the right side of the screenshot you will see that the grader outputs an error associated with that cell. The screenshot doesn't capture the entire output message, so I've copied it as text here: "Can't compile the student's code. Error: ValueError('x and y must have same first dimension, but have shapes (2,) and (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2)')"
Looking at the screenshot you shared for cell #18: the cell just before it is a non-graded cell that checks the optimal parameters of a logistic regression model found by gradient descent, so for that exercise you only had to run the cell. But note the line right after it: "Assuming you have implemented the gradient and computed the cost correctly, your value of J(w,b) should never increase, and should converge to a steady value by the end of the algorithm."
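Schematically, that non-graded cell runs a loop like the one below. This is a paraphrase of the idea, not the actual notebook cell:

```python
def gradient_descent(X, y, w, b, alpha, num_iters, compute_cost, compute_gradient):
    # plain batch gradient descent: J(w, b) should never increase across
    # iterations and should settle at a steady value
    J_history = []
    for _ in range(num_iters):
        dj_dw, dj_db = compute_gradient(X, y, w, b)   # your graded function
        w = w - alpha * dj_dw                         # simultaneous parameter update
        b = b - alpha * dj_db
        J_history.append(compute_cost(X, y, w, b))    # your graded function
    return w, b, J_history
```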
Check if you didn’t not hard code the dj_dw as mentioned by other mentor also. You can cross check (by making sure you didn’t add any extra code path for recalling dj_dw other than already given by the autogravder, also you can refer hints for better solution)
If point 1 was done correctly, then check if you didn’t hard code total cost in the def compute cost grader cell especially while implement loss and loss_sum.
I am guessing the issue is more with point 1.
Once you debug, let me know if issue is resolved or further assistance is required.
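Here is that sketch of the standard logistic-regression cost and gradient. It is my own illustration, not the notebook's starter code: the names loss, loss_sum, total_cost, and dj_dw come from the assignment, but the surrounding structure is assumed:

```python
import numpy as np

def sigmoid(z):
    # elementwise logistic function
    return 1 / (1 + np.exp(-z))

def compute_cost(X, y, w, b):
    # J(w,b): mean cross-entropy loss over the m training examples
    m = X.shape[0]
    loss_sum = 0.0
    for i in range(m):
        f_wb_i = sigmoid(np.dot(X[i], w) + b)   # model prediction for example i
        loss = -y[i] * np.log(f_wb_i) - (1 - y[i]) * np.log(1 - f_wb_i)
        loss_sum += loss
    total_cost = loss_sum / m                   # computed from the data, never hard-coded
    return total_cost

def compute_gradient(X, y, w, b):
    # partial derivatives of J with respect to w (dj_dw) and b (dj_db)
    m, n = X.shape
    dj_dw = np.zeros(n)
    dj_db = 0.0
    for i in range(m):
        err = sigmoid(np.dot(X[i], w) + b) - y[i]   # prediction error for example i
        dj_dw += err * X[i]
        dj_db += err
    return dj_dw / m, dj_db / m
```

The point of both checks is that total_cost and dj_dw must be recomputed from X, y, w, and b on every call; returning fixed values can look fine in one cell and still fail the grader.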
Deepti- Thanks, but I'm still having this issue. I didn't hard-code the output for J(w,b) or the gradient functions I had to implement as part of this exercise, and as mentioned, both passed the gradient descent unit test, with the cost decreasing each iteration until it converged to the specified target minimum. I do realize, though, that I had not run one of the test cells (#22), which plots the output decision boundary against a scatter plot of the training data. It is this cell (rather than cell 18, as reported by the autograder) that actually generates the x,y dimension error I shared in my previous message. For whatever reason, test cell 22 (which I can't modify) fails to plot the decision boundary line using the w,b parameters supplied by the gradient descent procedure (also a test cell that I cannot modify). I've attached the screenshots with the error details.
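To illustrate the failure mode, here is a minimal standalone example that reproduces the same matplotlib error. The array values are hypothetical stand-ins, not values from the notebook:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two x-coordinates for the decision-boundary line segment
plot_x = np.array([0.0, 3.0])

# The matching y-values should also have shape (2,), but extra nesting
# gives them leading singleton dimensions: shape (1, 1, 2) instead of (2,)
plot_y = np.array([[[-0.5, 0.5]]])

# Raises: ValueError: x and y must have same first dimension,
# but have shapes (2,) and (1, 1, 2)
plt.plot(plot_x, plot_y)
```

So the w,b coming out of gradient descent (or something derived from them) appear to have picked up extra singleton dimensions somewhere.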
Regards
Can you share a screenshot of your gradient and compute_cost code? Share a screenshot rather than copy-pasting. Please send it via personal DM, as it is against community guidelines to share code on a public thread. Click on my name and then select Message.
Deepti/TMosh- Thanks for your help. I figured out the issue: I had inadvertently wrapped the z array in an extra numpy ndarray in my sigmoid function implementation. This must have had downstream effects that caused the boundary-plot test cell to throw the error.
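In case it helps anyone else hitting the same error: the fix was to make sigmoid preserve the shape of its input. The sketch below is a reconstruction of the mistake, not my exact notebook code:

```python
import numpy as np

# Buggy version (reconstruction): wrapping z in np.array([...]) adds a
# leading singleton dimension on every call
def sigmoid_buggy(z):
    z = np.array([z])            # a (2,) input becomes (1, 2)
    return 1 / (1 + np.exp(-z))

# Fixed version: accept scalars, lists, or ndarrays without reshaping them
def sigmoid(z):
    z = np.asarray(z)
    return 1 / (1 + np.exp(-z))
```

Because gradient descent feeds each iteration's output back into the next, every pass through the buggy sigmoid could tack another singleton dimension onto the downstream arrays, which would explain the long (1, 1, ..., 1, 2) shape that the decision-boundary plot cell choked on.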