Accuracy formula

Could you please explain where the accuracy formula for logistic regression comes from?? There is no mention of it in the lectures… it looks the same as the loss formula except there is no log operator in it…
I do appreciate your reply !!!

Would you please give us a reference to the point in the lectures or in the assignment that you are asking about?

In general, “accuracy” means something completely different than cost. You are measuring how many correct predictions the algorithm (LR or anything else) makes on a particular set of labeled data. To do that, you first compute \hat{y} for all samples, which gives you the sigmoid outputs: values between 0 and 1. To convert each of those to a prediction, we treat the answer as “yes” if the \hat{y} value is > 0.5. One way to do that would be:

p_i = round(\hat{y_i})

Then if we take:

errs = \displaystyle \sum_{i = 1}^{m}|p_i - y_i|

that should give us the number of cases in which the prediction p_i does not match the label y_i. Then you compute the accuracy as one minus the error rate (the number of errors divided by the number of samples):

acc = 1 - \displaystyle \frac {errs}{m}

Or if you prefer, you can multiply it by 100 to express it as a percentage.
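Here is a minimal numpy sketch of that computation (the variable names and data here are just illustrative, not from the assignment):

```python
import numpy as np

# Illustrative data: sigmoid outputs and the true labels
y_hat = np.array([0.9, 0.3, 0.6, 0.2, 0.8])
y = np.array([1, 0, 0, 0, 1])

p = np.round(y_hat)           # threshold at 0.5 -> predictions in {0, 1}
errs = np.sum(np.abs(p - y))  # number of mismatched predictions
acc = 1 - errs / y.size       # fraction of correct predictions
print(acc * 100, "% accuracy")
```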

If I’m answering a different question than you intended, please let me know and we can discuss more. :nerd_face:

Hi! Thank you so much for the feedback!
But I mean this piece of code from the W3 assignment (part 3, Simple Logistic Regression):
print('Accuracy of logistic regression: %d ' % float((np.dot(Y, LR_predictions) + np.dot(1 - Y, 1 - LR_predictions)) / float(Y.size) * 100) +
      '% ' + "(percentage of correctly labelled datapoints)")

That is just another way to express the same computation that I showed above. In that formulation, Y is a vector with the labels and LR_predictions is the equivalent of what I called p in my formulas.

So let’s write it this way with Y and P as vectors:

Y = (y_i) for i = 1 to m
P = (p_i) for i = 1 to m

Each of those values is either 1 or 0, so what happens if I take this dot product:

A_{pos} = Y \cdot P

What that dot product actually means is this:

A_{pos} = \displaystyle \sum_{i = 1}^{m} y_i * p_i

So those individual y_i * p_i values will be 1 only if both y_i and p_i are 1, right? So that sum is the number of correct predictions for the case that y_i = 1.
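For instance (a small hand-made example, not data from the assignment):

```python
import numpy as np

Y = np.array([1, 1, 0, 0, 1])  # labels
P = np.array([1, 0, 0, 1, 1])  # predictions
A_pos = np.dot(Y, P)           # counts indices where both y_i and p_i are 1
print(A_pos)                   # correct predictions on the positive cases
```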

Now go through the same reasoning about:

A_{neg} = \displaystyle \sum_{i = 1}^{m} (1 - y_i) * (1 - p_i)

and it should be clear that this gives you the number of cases in which the correct answer (the label y_i) is 0 and the prediction is also 0.

So A_{pos} + A_{neg} is the total number of correct predictions. The rest is just converting it to an average expressed as a percentage.
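Putting it all together, the assignment's one-liner is equivalent to this sketch (variable names follow the assignment, but the data here is made up):

```python
import numpy as np

Y = np.array([1, 1, 0, 0, 1])               # labels
LR_predictions = np.array([1, 0, 0, 1, 1])  # predictions from the LR model

A_pos = np.dot(Y, LR_predictions)           # correct predictions where the label is 1
A_neg = np.dot(1 - Y, 1 - LR_predictions)   # correct predictions where the label is 0
accuracy = float(A_pos + A_neg) / Y.size * 100
print('Accuracy of logistic regression: %d %%' % accuracy)
```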
