While watching the avoidable bias video, I wondered: if we reduce the bias, wouldn't we then have to reduce the variance afterward?
Example:
Human error: 1%
Training error: 8%
Dev error: 10%
If the training set error is dropped to, say, 1.5%, and the dev error stays at 10%, we now have a large variance problem (10% − 1.5% = 8.5% gap), so we will need to sort that out. Right?
Sure, but you don’t know what the new dev error will be after you finish removing the “avoidable” bias. If it doesn’t change, then, yes, you definitely will have a significant overfitting problem that you’ll need to deal with. But you’ll need to make some serious changes in order to get the training error from 8% down to (say) 1.5%, and there is no reason to expect that everything else will be invariant under that set of changes.
So let’s take one step at a time …
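To make the arithmetic concrete, here is a minimal sketch in Python of the two gaps being discussed, using the hypothetical percentages from the example above (the `diagnose` helper is just for illustration, not anything from the course code):

```python
# Minimal sketch of the error analysis discussed above, using the
# hypothetical percentages from the example (not real results).

def diagnose(human_error, train_error, dev_error):
    """Return (avoidable bias, variance) as defined in the course:
    avoidable bias = training error - human-level error (proxy for Bayes error),
    variance       = dev error - training error."""
    return train_error - human_error, dev_error - train_error

# Original situation: bias-dominated.
print(diagnose(human_error=1.0, train_error=8.0, dev_error=10.0))
# -> (7.0, 2.0): 7% avoidable bias, 2% variance

# After reducing training error to 1.5%, *if* the dev error stayed at 10%:
print(diagnose(human_error=1.0, train_error=1.5, dev_error=10.0))
# -> (0.5, 8.5): the variance gap would now dominate
```

Note that the second call only describes the scenario the question assumes; as the reply points out, the dev error may well move too once you make the changes needed to bring the training error down.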