Finding accuracy of test (Naive Bayes) Ex 4

Hi, I got the “All tests passed” message on this exercise, but I would still like to understand: in order to find the accuracy, why were we asked to first compute “error”, the average of the absolute values of the differences between y_hats and test_y (I believe we have to loop over each label-prediction pair for that), and then take 1 - error?

Wouldn’t it be shorter to convert y_hats and test_y to np.arrays, then compute sum(y_hats == test_y) and divide the result by len(y_hats)?
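
For instance, something like this (just a quick sketch with made-up labels, assuming y_hats and test_y are 0/1 sequences of equal length):

```python
import numpy as np

# Hypothetical example labels, not the actual course data
y_hats = np.array([1, 0, 1, 1, 0])  # predictions
test_y = np.array([1, 0, 0, 1, 1])  # true labels

# Element-wise comparison gives a boolean array; summing it counts
# the correct predictions, and dividing by the length gives accuracy.
accuracy = np.sum(y_hats == test_y) / len(y_hats)
print(accuracy)  # 0.6
```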

We learned that neat method of comparing two 1-dimensional arrays in the week one notebook. Is there any special benefit to doing it the way the notebook asks here?


Hi @Doron_Modan

Could you share a screenshot of the test you have doubts about?

Regards
DP

This one:

" # error is the average of the absolute values of the differences between y_hats and test_y"


As far as I checked, we do not use a loop here.

The comment you quoted describes it perfectly:

error is the average of the absolute values of the differences between y_hats and test_y
Here we actually convert y_hats to a NumPy array, subtract test_y from it, apply np.abs to the differences, and then take np.mean over the result.
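
In code, that step might look like this (a minimal sketch with made-up labels, not the actual assignment code):

```python
import numpy as np

# Hypothetical example labels, not the actual course data
y_hats = [1, 0, 1, 1, 0]            # predictions, e.g. a plain list
test_y = np.array([1, 0, 0, 1, 1])  # true labels

# Convert y_hats to a NumPy array, subtract test_y, take the absolute
# values, then average. For 0/1 labels, each |difference| is 1 exactly
# when a prediction is wrong, so the mean is the error rate.
error = np.mean(np.abs(np.array(y_hats) - test_y))
accuracy = 1 - error
print(error, accuracy)  # 0.4 0.6
```

For binary labels this gives exactly the same accuracy as the element-wise comparison approach, since each absolute difference is 1 precisely on the misclassified examples.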