# Use relative error (percentage) or absolute error (percentage points) to analyze avoidable bias and variance?

In the videos, Professor Ng says that, between avoidable bias and variance, we should focus on the one with the larger percentage. However, I believe he means percentage points.

For example, suppose the Bayes Error is 1%, the Training Error is 5%, and the Dev Error is 15%. Then, according to Professor Ng, Avoidable Bias is 4% and the variance is 10%.

However, using percentages to compare other percentages is ambiguous: do we mean the relative or the absolute difference? Less ambiguously, we would say that there is a 4 percentage point (%p) difference between the Bayes and Training Errors, and a 10%p difference between the Training and Dev Errors.

From a relative perspective, however, the Training Error is 5x the Bayes Error, a 400% increase, while the Dev Error is 3x the Training Error, a 200% increase.
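To make the two viewpoints concrete, here is a minimal sketch (variable names are my own, not from the course) computing both the absolute and relative comparisons for the example numbers above:

```python
# Hypothetical error rates from the example, as fractions.
bayes_error = 0.01
training_error = 0.05
dev_error = 0.15

# Absolute differences, in percentage points (Professor Ng's definitions).
avoidable_bias_pp = (training_error - bayes_error) * 100   # ≈ 4 %p
variance_pp = (dev_error - training_error) * 100           # ≈ 10 %p

# Relative increases, as a percentage of the smaller quantity.
bias_relative = (training_error - bayes_error) / bayes_error * 100       # ≈ 400 %
variance_relative = (dev_error - training_error) / training_error * 100  # ≈ 200 %

# The two measures rank the problems differently:
print(avoidable_bias_pp < variance_pp)    # absolute view: variance is larger
print(bias_relative > variance_relative)  # relative view: bias is larger
```

Note how the two measures disagree about which gap is "larger", which is exactly the ambiguity in question.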

My question is: are variance and avoidable bias measured in percentage points (absolute error) or percentages (relative error)? Can you justify why we use one rather than the other? This example is extreme, but under the former you would prioritize reducing variance, while under the latter you would prioritize reducing bias.

Hello @eoin12345abc ,