# Logistic Regression Python code not working correctly

I tried using the dataset provided in the Logistic Regression assignment of this course (https://www.coursera.org/learn/machine-learning) to create a model, but it is not working correctly. I have changed the value of the learning rate several times, but it still doesn't work. I think there is some problem with the logic, and I am not able to fix it. Please help.

Logistic regression.ipynb (89.2 KB)

Zip file containing data

It’s great that you are trying to apply what we learned in Week 2 here to some other problems. You always learn a lot when you try something beyond the course materials. I haven’t downloaded your notebook or data yet, though, so let’s start with some general discussion and see if that helps.

I also took Prof Ng’s Stanford Machine Learning course a few years back, and it’s really great, as you would expect! But the thing to realize is that that course was published in 2011, while the current courses were first published in 2017. The field changed a lot in that period, and Prof Ng made some pretty significant changes to his notation and approach, although you see it more when you compare the Neural Net material in Weeks 3 and 4 of Stanford ML to what he does here. Even in the Logistic Regression case, though, there are some changes. The biggest one is the orientation of the data: in the Stanford course, the X input matrices have the number of samples as the row dimension, but here it is the column dimension. In other words, the data you got from Machine Learning has X with dimensions m x n_x, where m is the number of samples and n_x is the number of features, while our code here is set up to expect X to be n_x x m. Are you sure you took that into account in your code?
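If you haven't done that conversion yet, it's just a transpose (plus a reshape of the labels). Here's a minimal sketch; the variable names and the randomly generated data are made up for illustration, not taken from your notebook:

```python
import numpy as np

# Hypothetical data in the old course's layout:
# rows are samples (m), columns are features (n_x).
rng = np.random.default_rng(0)
m, n_x = 100, 2
X_old = rng.normal(size=(m, n_x))    # shape (m, n_x)
y_old = rng.integers(0, 2, size=m)   # labels as a 1-D array of length m

# The DLS Course 1 code expects X of shape (n_x, m) and Y of shape (1, m).
X = X_old.T               # now (n_x, m)
Y = y_old.reshape(1, m)   # now (1, m)

print(X.shape, Y.shape)   # (2, 100) (1, 100)
```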

Yes, I have changed some things according to the data. I also looked at the plot given in the PDF to compare it with the plot made using the data, and I have noticed some changes in notation between the recent course and the old one. The cost given in the PDF with all parameters set to zero is also similar to what I get. My code is somewhat poorly written, so it could be a programming error, but I am currently suspecting a wrong approach/logic, if not the learning rate.

I took a quick look at your code and it’s different from what I was expecting. You’ve basically taken the approach from Stanford Machine Learning and translated it into Python. Sorry, but I don’t think that’s the interesting way to approach this. What I would suggest is that you take the Python code we developed here in Week 2 of DLS Course 1 and apply it to the data from the other course.

The other thing I remembered after looking at the data is that Prof Ng also changed how the bias is handled between that course and this one: the bias units are separate now. In the old course, the bias was folded into θ by prepending a feature that is always 1, which is why there’s that first row of all ones in the input data from the old course; here we keep a separate scalar b instead.
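So before reusing the Week 2 code, you'd strip the ones out of the old-course data and initialize b separately. A minimal sketch, with made-up variable names and random placeholder data:

```python
import numpy as np

# Hypothetical old-course data: a column of ones prepended to X,
# so the bias used to live in theta[0].
rng = np.random.default_rng(1)
m, n = 100, 2
X_with_ones = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])  # (m, n+1)

# Drop the ones column, then transpose to the (n_x, m) layout used here.
X = X_with_ones[:, 1:].T   # shape (n, m)
w = np.zeros((n, 1))       # weights, no longer including the bias
b = 0.0                    # scalar bias, kept separate from w

print(X.shape, w.shape, b)
```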

I don’t think you need numpy “vectorize” in order to get a vectorized implementation of sigmoid. The way you wrote it with np.exp is already polymorphic and vectorized.
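For example, the usual NumPy definition already handles scalars, vectors, and matrices, because `np.exp` broadcasts elementwise:

```python
import numpy as np

def sigmoid(z):
    # np.exp operates elementwise on arrays, so no np.vectorize is needed:
    # this works identically for scalars and for arrays of any shape.
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))                          # 0.5
print(sigmoid(np.array([-1.0, 0.0, 1.0])))   # elementwise, shape preserved
```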

In your `computeCost` function, why call the hypothesis function twice? It would be more efficient to call it once, store the value in a variable, and use it twice.
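Something along these lines; the function and variable names here are assumptions for illustration, not your exact code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def computeCost(w, b, X, Y):
    # X has shape (n_x, m), Y has shape (1, m), following the DLS convention.
    m = X.shape[1]
    # Compute the hypothesis ONCE and reuse it in both log terms below.
    A = sigmoid(np.dot(w.T, X) + b)
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m
    return cost
```

With all parameters zero, A is 0.5 everywhere, so the cost comes out to log(2) ≈ 0.693, which is a handy sanity check against the value in the PDF.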

Your function is defined as `logisticRegression`, but you are calling `linearRegression`. I don’t find a definition of `linearRegression` in your notebook.

Yes, I implemented logistic regression following the approach from that course, but in Python.

Thank you, I didn’t know it was already vectorized.

I didn’t think about that. I will store it in a variable.

I think the notebook I shared has this mistake. It is logistic regression, but I wrote `linearRegression` by mistake. Sorry about that.

I am also trying out what we learned in Week 2 on the same data. I hope that will work for me.

Thank you for checking it.