W3_A1_Ex-5.2_Understanding math behind accuracy calculation

In Week 3, Exercise 5.2, the model uses the following line to compute accuracy:

print('Accuracy: %d' % float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100) + '%')

Can a mentor explain the reasoning behind this equation?

I know accuracy is calculated as: (number of correct predictions) / (total number of examples).

Is that definition basically accomplishing the same thing as the equation above?



Since Y and predictions contain only 0s and 1s, np.dot(Y, predictions.T) counts the examples where both the label and the prediction are 1, and np.dot(1 - Y, 1 - predictions.T) counts the examples where both are 0. For any single example, at most one of the two terms is non-zero, so the sum of the two dot products is the total number of correct predictions. Divide by the number of examples (Y.size) and multiply by 100, and you get the percentage of examples that were predicted correctly.
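A small sketch showing this equivalence (the toy arrays here are made up for illustration, not the exercise's data; the shapes match the 1 x m row vectors used in the assignment):

```python
import numpy as np

# Toy labels and predictions as 1 x m row vectors of 0/1 values
Y = np.array([[1, 0, 1, 1, 0]])
predictions = np.array([[1, 0, 0, 1, 1]])

# np.dot(Y, predictions.T) counts examples where label and prediction are both 1;
# np.dot(1 - Y, 1 - predictions.T) counts examples where both are 0.
correct = (np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)).item()
accuracy = correct / Y.size * 100

# A direct count of matching entries gives the same number
accuracy_direct = np.mean(Y == predictions) * 100

print('Accuracy: %d' % accuracy + '%')  # prints "Accuracy: 60%"
```

Here 3 of the 5 predictions match the labels, so both formulas give 60%.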

Thank you, I appreciate the explanation. That makes sense.