In the video " Carrying Out Error Analysis", Andrew explains that a manual error analysis of the 100 mislabeled examples will only lead to a 0.5% reduction in the error, and is therefore not worth the effort.
I don’t understand how that 0.5% figure arises. If we manually examine the 100 mislabeled examples, we don’t just fix the false positives (i.e., actual dogs identified as cats) but presumably also the false negatives (actual cats the classifier missed). Therefore the manual relabeling of those 100 examples should lead to a reduction of both the dog-related error (0.5%) and the remaining error (9.5%).
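To make my reading of the numbers concrete, here is a minimal sketch of the arithmetic, assuming the setup from the video (10% overall dev-set error, 5 dog images among the 100 examined mistakes); the variable names are mine, not from the course:

```python
dev_error = 0.10        # overall dev-set error, as in the video
dog_fraction = 5 / 100  # share of the 100 examined mistakes that are dogs

# Ceiling on the improvement from fixing the dog problem:
dog_error = dev_error * dog_fraction            # 0.005 -> 0.5%
# Error attributable to all the other mistakes:
other_error = dev_error * (1 - dog_fraction)    # 0.095 -> 9.5%

print(f"dog-related error: {dog_error:.1%}")    # 0.5%
print(f"all other error:   {other_error:.1%}")  # 9.5%
```

If relabeling fixed both components, the total reduction would be the full 10%, not just the 0.5%, which is exactly what puzzles me.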
What am I missing here?