Thank you for the answers.
Dear pastorsoto, your answer arrived while I was writing the text below, so I will send it anyway, just to show you my previous conclusions:
In the lecture “C2W3 Diagnosing bias and variance”, two possible extreme situations are mentioned: high bias (underfit) and high variance (overfit), and in the middle, of course, the “just right” case:

In the training lab "C2W3_Lab_02_Diagnosing_Bias_and_Variance", chapter “Fixing High Bias”, subchapter “Try adding polynomial features”, there is the sentence:
> As you can see, the more polynomial features you add, the better the model fits to the training data. In this example, it even performed better than the baseline. At this point, you can say that the models with degree greater than 4 are low-bias because they perform close to or better than the baseline.
Below is the screenshot:
…
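To make my reading concrete, here is a minimal sketch of the degree sweep (not the lab's actual code: the data is synthetic and the baseline is a made-up number, but the scikit-learn calls mirror what the lab does):

```python
# Minimal sketch of the polynomial-degree sweep, on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 200).reshape(-1, 1)
y = (x ** 2 - 3 * x + rng.normal(0, 2, x.shape)).ravel()  # noisy quadratic
x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.4, random_state=0)

baseline_mse = 4.0  # hypothetical baseline, roughly the noise variance
for degree in range(1, 11):
    poly = PolynomialFeatures(degree, include_bias=False)
    scaler = StandardScaler()
    X_train = scaler.fit_transform(poly.fit_transform(x_train))
    X_cv = scaler.transform(poly.transform(x_cv))
    model = LinearRegression().fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    cv_mse = mean_squared_error(y_cv, model.predict(X_cv))
    print(f"degree={degree:2d}  train MSE={train_mse:6.2f}  CV MSE={cv_mse:6.2f}")
```

On this toy data the effect is milder than in the lab, but the pattern is the same as in the screenshot: the training MSE keeps falling toward (or below) the baseline as the degree grows, while the CV MSE stops improving.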
In the same lab, chapter “Fixing High Variance”, subchapter “Try increasing the regularization parameter”, there is the sentence:
> In contrast to the last exercise above, setting a very small value of the regularization parameter will keep the model low bias but might not do much to improve the variance. As shown below, you can improve your cross validation error by increasing the value of 𝜆.
Below is the screenshot:
…
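Similarly, a sketch of the 𝜆 sweep, using scikit-learn’s `Ridge`, whose `alpha` plays the role of 𝜆 (its regularization scaling differs from the course’s 𝜆/(2m) convention, so the values are only illustrative):

```python
# Minimal sketch of the lambda sweep: fix a high degree so the model
# overfits, then increase the regularization strength.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 200).reshape(-1, 1)
y = (x ** 2 - 3 * x + rng.normal(0, 2, x.shape)).ravel()
x_train, x_cv, y_train, y_cv = train_test_split(x, y, test_size=0.4, random_state=0)

poly = PolynomialFeatures(10, include_bias=False)  # deliberately high degree
scaler = StandardScaler()
X_train = scaler.fit_transform(poly.fit_transform(x_train))
X_cv = scaler.transform(poly.transform(x_cv))

for lam in [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]:
    model = Ridge(alpha=lam).fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    cv_mse = mean_squared_error(y_cv, model.predict(X_cv))
    print(f"lambda={lam:8.3f}  train MSE={train_mse:6.2f}  CV MSE={cv_mse:6.2f}")
```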
Looking at the first screenshot, I see the model performing well on the training set as the degree increases, but the CV MSE keeps increasing. That is overfitting (high variance), right? Yet the text speaks of low bias.
The second screenshot is about increasing the value of 𝜆. Increasing lambda tends to cure overfitting (high variance), right? The text says “As shown below, you can improve your cross validation error by increasing the value of 𝜆.”
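If I operationalize my understanding, the two diagnoses look like independent checks, which would explain how one model can be low-bias and high-variance at the same time (the MSE values and the 1.5 thresholds below are made up):

```python
# Hypothetical numbers illustrating the two separate comparisons.
baseline_mse, train_mse, cv_mse = 4.0, 3.8, 9.5

high_bias = train_mse > 1.5 * baseline_mse   # training error far above baseline?
high_variance = cv_mse > 1.5 * train_mse     # CV error far above training error?

print(f"low bias: {not high_bias}, low variance: {not high_variance}")
# -> low bias: True, low variance: False  (fits the training set, overfits on CV)
```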
So, what exactly does “low bias” mean?
And is there a “low variance” too, and what would it mean?
Thanks