I’ve passed all the exercises but my assignment is not being submitted due to this one particular non-graded cell. Does anyone know how to fix this?
Hi @lightzzz29 ,
Although you didn’t write this part of the code, there is a problem with the parameters X_train and y_train that are passed to the plot_decision_boundary() function. The error reported here says the first dimensions of these two arrays are not the same. So you need to find out what happened to X_train and y_train. If you put a print statement before the call plot_decision_boundary(w, b, X_train, y_train) to print the shapes of X_train and y_train, it will give you some idea of what is happening. After that, trace back through the previous code cells to find out whether they have been altered, and if so, how.
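A minimal sketch of that shape check, using placeholder arrays in place of the notebook’s real data (in the lab, X_train and y_train come from the data-loading cell):

```python
import numpy as np

# Hypothetical stand-ins for the notebook's training data, only to
# illustrate the check; in the lab these are loaded for you.
X_train = np.ones((100, 2))
y_train = np.ones((100,))

# Put these two prints immediately before the plot_decision_boundary() call:
print("X_train shape:", X_train.shape)  # expect (100, 2)
print("y_train shape:", y_train.shape)  # expect (100,)

# The plotting code needs the first dimensions to agree:
assert X_train.shape[0] == y_train.shape[0]
```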
I didn’t alter anything with X_train and y_train, and the first dimensions of both are the same.
Hi @lightzzz29 ,
Try restarting the kernel and rerunning the code cells from the beginning.
Hi, I tried that but hit the same error again.
Hi @lightzzz29 ,
I am not sure what your X_train and y_train are. Looking at the screenshot you posted, this is the first call of plot_decision_boundary(), in section 2.7, plotting the exam-scores data with parameters (w, b, X_train, y_train); the shape of X_train is (100, 2) and the shape of y_train is (100,).
The second call of plot_decision_boundary() is in section 3.7, for the microchip tests. The parameters used are (w, b, X_mapped, y_train); the shape of X_mapped is (118, 27) and the shape of y_train is (118,).
Which of these plot_decision_boundary() calls is causing the problem: the first call in section 2.7 for the exams, or the second call in section 3.7 for the microchip tests?
Sorry for the manual error in the previous screenshot (I ran print(X_mapped, y_train) by mistake). Here is the correct screenshot, which also shows the X_train and y_train values.
Only section 2.7 is causing this error; section 3.7 is working fine.
C1_W3_Logistic_Regression_.ipynb (111.3 KB)
Here is the downloaded ipynb version of the notebook.
Hi @lightzzz29 ,
I couldn’t figure out why X_train got changed. If you look at the printout, you can see X_train has a trailing set of values that is exactly the same as y_train. How could that be possible? And how could the first dimension of X_train become 2, as reported in the ValueError message?
I ran my lab on the Coursera platform and had no problem. If you are running your lab outside of the Coursera platform, I have no way to replicate the environment and am not able to help.
I’m running the code on the Coursera platform itself. And I’ve no idea how the dimensions are being altered, but it should be due to something inside the non-graded cell’s function, right?
I suggest you take a fresh copy of the assignment and rewrite the code…
I have the same utility functions, and I have no problem.
I am away from my desk. If you don’t know how to get a fresh copy of the assignment, just do a search on the forum.
Hi, I tried it on a fresh copy of the assignment too, but the error didn’t change.
I have gone through this notebook and was surprised to see how complex you are making everything for yourself. Exercise 1, sigmoid, is a one-line function and you did it in 14 lines (and incorrectly). Even though you passed the unit test, your code is wrong. Similarly, your compute_cost is also incorrect. Maybe your compute_gradient is wrong too, but I didn’t check every line because your code is so verbose.
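For reference, here is what a one-line vectorized sigmoid can look like; this is a sketch, not the course’s reference solution:

```python
import numpy as np

def sigmoid(z):
    # np.exp works elementwise, so this handles scalars and arrays alike.
    return 1 / (1 + np.exp(-z))

print(sigmoid(0))               # 0.5
print(sigmoid(np.array([-100, 0, 100])))  # approaches [0, 0.5, 1]
```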
I wonder why users make mistakes in MLS assignments when hints and more hints are given that show the exact solution.
I understand my mistake in Exercise 1, but can you tell me why compute_cost is incorrect? It’s almost the same as the hints, except that I used a vectorized implementation while the hints used another for loop. I’m a beginner; thanks for correcting me.
I remember these two incorrect lines:
z = np.sum(z) + b
total_cost = total_cost[0]/m
Maybe there are some more, but I don’t remember.
- Just use np.dot on w and x and add b; there is no need to do np.sum and then add b.
- Ask yourself: why are you using total_cost[0]? What is it?
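A sketch of a vectorized compute_cost along the lines suggested above; this is not the graded reference solution, just an illustration that np.dot already sums over the features, so no extra np.sum or [0] indexing is needed:

```python
import numpy as np

def compute_cost(X, y, w, b):
    # X: (m, n) examples, y: (m,) labels, w: (n,) weights, b: scalar bias.
    m = X.shape[0]
    z = np.dot(X, w) + b                 # np.dot sums over features; just add b
    f = 1 / (1 + np.exp(-z))             # sigmoid, elementwise
    # Mean logistic loss; np.mean returns a scalar, so no [0] indexing.
    total_cost = -np.mean(y * np.log(f) + (1 - y) * np.log(1 - f))
    return total_cost
```

For example, with all-zero weights and bias every prediction is 0.5, and the cost comes out to log(2) regardless of the labels.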
Hi @lightzzz29 ,
Your code is more complicated than necessary, as Saifhangerngr has said. To track down what the problem might be, we need to trace where X_train is used. I can see that X_train is passed to gradient_descent(), which then calls compute_cost() and compute_gradient(). If you print out the last 10 values of X_train at the start and end of compute_cost() and compute_gradient(), you will be able to see whether X_train has been altered.
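One way to do that tracing without editing both function bodies is a small debugging wrapper; check_unchanged here is a hypothetical helper, assuming the array to watch is the first argument (as it is for compute_cost(X, y, w, b)):

```python
import numpy as np

def check_unchanged(fn):
    # Debugging wrapper: snapshot the first array argument before the
    # call, print its tail at entry and exit, and warn if it was mutated.
    def wrapped(X, *args, **kwargs):
        before = X.copy()
        print(fn.__name__, "entry, X tail:\n", X[-10:])
        out = fn(X, *args, **kwargs)
        print(fn.__name__, "exit,  X tail:\n", X[-10:])
        if not np.array_equal(before, X):
            print("WARNING:", fn.__name__, "modified X!")
        return out
    return wrapped

# Usage sketch: wrap the suspects, then rerun gradient descent.
# compute_cost = check_unchanged(compute_cost)
# compute_gradient = check_unchanged(compute_gradient)
```

If the warning fires, the mutation happens inside that function; if it never fires, the corruption happens somewhere between the cells instead.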