> Machine learning is more prone to overfit and human learning is more prone to underfit, comment plz

Here I completely agree, @Juan_Olano: adding features for the same number of labels, i.e. increasing the model complexity (e.g. by adding new features / dimensions), will clearly help to reduce underfitting. However, I would argue this is rather due to the higher complexity of the model.
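As a minimal sketch (assuming scikit-learn and a hypothetical quadratic ground truth), adding a single squared feature, i.e. raising the model complexity while keeping the same 100 labels, is enough to remove the underfit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3.0, 3.0, size=(100, 1))
y = x.ravel() ** 2 + rng.normal(scale=0.1, size=100)  # quadratic ground truth

# One feature: the linear model underfits the quadratic relationship
print(f"1 feature : R^2 = {LinearRegression().fit(x, y).score(x, y):.3f}")

# Two features (x, x^2): same labels, higher model complexity, no underfit
x2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)
print(f"2 features: R^2 = {LinearRegression().fit(x2, y).score(x2, y):.3f}")
```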

That said, I understand @mehmet_baki_deniz’s point quite well:

If the model complexity is the limiting factor, adding data will not help much to reduce underfitting, at least not in this example. Let’s assume:

  • a linear regression model
  • ground-truth labels that follow a sin(t) behaviour over two full periods
  • 100 labels available for this

Increasing the 100 labels to 1,000 labels would not help: the limited capacity of the 1D linear model (only a bias and a weight as parameters) is the bottleneck. Choose a nonlinear domain model, or an AI/data-driven model such as a Gaussian process with the right kernel, and it will tackle underfitting far better, even with far fewer than 100 data points. A minimal sketch of both models follows below.
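Here is that comparison as a minimal sketch, assuming scikit-learn; the periodic ExpSineSquared kernel is my stand-in for “the right kernel”, and the data are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared

rng = np.random.default_rng(0)

# 100 noisy labels of sin(t) over two full periods
t = rng.uniform(0.0, 4.0 * np.pi, size=(100, 1))
y = np.sin(t).ravel() + rng.normal(scale=0.05, size=100)

# 1D linear model: only a weight and a bias, so it is bound to underfit
lin = LinearRegression().fit(t, y)
print(f"linear regression R^2: {lin.score(t, y):.3f}")  # close to 0

# Gaussian process with a periodic kernel encoding the domain knowledge
kernel = ExpSineSquared(length_scale=1.0, periodicity=2.0 * np.pi)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.05**2).fit(t, y)
print(f"Gaussian process R^2:  {gp.score(t, y):.3f}")   # close to 1
```

Adding 900 more labels would barely move the linear model’s score, while the GP already fits well with the original 100 points (and would with far fewer).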

Just some input! :slightly_smiling_face:

Best regards
Christian
