Carrying Out Error Analysis and the "0.5% ceiling"

In the video "Carrying Out Error Analysis", Andrew explains that a manual error analysis of the 100 mislabeled examples will only lead to a 0.5% reduction in the error, and is therefore not worth the effort.

I don't understand how that 0.5% figure arises. If we manually examine the 100 mislabeled examples, we don't just fix the false negatives (i.e., actual dogs identified as cats); presumably we also fix the false positives. The manual relabeling of those 100 examples should therefore reduce both the false negatives (0.5%) and the false positives (9.5%).

What am I missing here?

I could definitely be missing something, but I think not all 100 mislabeled examples are dogs. In other words, of the 100 mislabeled examples only 5 were dogs; the others might be 'big cats' or 'plush toys' or false negatives. So fixing the dog problem is at most a 0.5% improvement: the 5 dogs are 5% of the errors examined, and 5% of the overall 10% error is 0.5%.
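
As a quick sanity check, here is a minimal sketch of that arithmetic, assuming the numbers from the video (10% overall dev-set error, 5 dogs among the 100 mislabeled examples examined); the function name is just for illustration:

```python
def error_ceiling(overall_error, n_examined, n_in_category):
    """Best-case error after completely fixing one error category.

    The examined sample estimates what fraction of all errors fall
    into the category; removing that fraction bounds the improvement.
    """
    fraction = n_in_category / n_examined   # e.g. 5 / 100 = 0.05
    improvement = overall_error * fraction  # e.g. 0.10 * 0.05 = 0.005
    return overall_error - improvement     # e.g. 0.095

# Numbers from the video: 10% dev error, 5 dogs among 100 mislabeled examples
print(error_ceiling(0.10, 100, 5))  # 0.095, i.e. at best a 0.5% absolute reduction
```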

Also, "dogs identified as cats" is a false positive for the cat classifier (not a false negative).
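For reference, a minimal sketch of that terminology for a binary "cat" classifier (the labels and predictions below are made-up examples):

```python
# For a "cat" classifier:
#   false positive: predicted "cat", but the example is not a cat (e.g., a dog)
#   false negative: predicted "not cat", but the example is actually a cat
y_true = ["dog", "cat", "cat", "dog"]  # hypothetical ground-truth labels
y_pred = ["cat", "cat", "dog", "dog"]  # hypothetical model predictions

false_positives = sum(t != "cat" and p == "cat" for t, p in zip(y_true, y_pred))
false_negatives = sum(t == "cat" and p != "cat" for t, p in zip(y_true, y_pred))
print(false_positives, false_negatives)  # -> 1 1
```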