When I watched the lecture on error analysis in the Advanced Learning Algorithms course (week 3), a question occurred to me: how can we apply error analysis to regression problems?
In a classification problem, we can identify the misclassified data points fairly easily, namely the 0/1 misclassifications.
However, in a regression problem, how can we quantify the error of each data point? For regression problems, we look for algorithms that minimize the MSE, which means that essentially every data point in the validation or test set carries some error. How can we identify the samples with large errors? And if we can, how do we relate them to some common traits?
The main goal is to make predictions, of course. But what if the predictions are not so good?
For example, for some data points in the validation set the errors are very large. Can we examine these data points (e.g. outliers) and identify some common traits among them? If so, what criterion should we use to select them?
I know we can use feature importance to select important features, but from my understanding that is a different topic, right?
What I usually do for error analysis in regression problems is to look at the difference between the predictions and the actual results and identify which observations are far from the prediction. There is no standard way to do it, since it depends on your problem, but you can plot the predictions against the actual values and iterate several times; the deviations will change as you improve your model's results.
For instance, if you are predicting housing prices and the mean error is $5000, and one of your houses has an error of $10000, you might suspect something is going on. In the next iteration, as you lower the error by improving your model, you might end up with a mean error of $2500, and then a house that deviates by $5000 might suggest something is happening with that prediction.
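In case it helps, here is a minimal sketch of that loop. The names `model`, `X_val`, `y_val` and the "2x the mean error" threshold are just placeholders for your own setup:

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumes a fitted regressor `model` and a validation split (X_val, y_val)
# from your own pipeline; adapt the names as needed.
y_true = np.asarray(y_val).ravel()
y_pred = np.asarray(model.predict(X_val)).ravel()
abs_err = np.abs(y_true - y_pred)

# Flag observations whose error is far from the typical error, e.g. more
# than twice the mean absolute error (the 2x threshold is a judgment call).
mae = abs_err.mean()
flagged = np.where(abs_err > 2 * mae)[0]
print(f"mean absolute error: {mae:.2f}, {len(flagged)} samples exceed 2x that")

# Predicted vs. actual: points far from the diagonal are the ones to inspect.
plt.scatter(y_true, y_pred, s=10, alpha=0.5)
plt.scatter(y_true[flagged], y_pred[flagged], s=25, color="red", label="large error")
lims = [min(y_true.min(), y_pred.min()), max(y_true.max(), y_pred.max())]
plt.plot(lims, lims, "k--", linewidth=1)
plt.xlabel("actual")
plt.ylabel("predicted")
plt.legend()
plt.show()
```

As you retrain and the mean error shrinks, rerun this and the set of flagged points will change, which is exactly the iteration described above.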
Couple of thoughts. First, make sure you're familiar with the standard regression metrics available out of the box from Keras. Sounds like you are, but it's worth reviewing the list in the Keras docs.
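For example, a toy compile call that attaches the usual built-ins, so they get reported for both training and validation data every epoch (the two-layer model is only a placeholder):

```python
import tensorflow as tf

# Placeholder regression model; swap in your own architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Attaching the standard regression metrics at compile time means Keras
# reports them for the training and validation sets each epoch.
model.compile(
    optimizer="adam",
    loss="mse",
    metrics=[
        tf.keras.metrics.RootMeanSquaredError(),
        tf.keras.metrics.MeanAbsoluteError(),
        tf.keras.metrics.MeanAbsolutePercentageError(),
        tf.keras.metrics.MeanSquaredLogarithmicError(),
    ],
)
```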
Second, there are additional metrics available directly from TensorFlow that can provide further insight into your model's performance.
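As one example (my own pick rather than an exhaustive list), R^2 and the median absolute error are quick to compute on held-out predictions; `model`, `X_val`, and `y_val` are assumed from your own pipeline:

```python
import numpy as np
from sklearn.metrics import r2_score, median_absolute_error

# Assumes a fitted `model` and a validation split (X_val, y_val).
y_true = np.asarray(y_val).ravel()
y_pred = np.asarray(model.predict(X_val)).ravel()

# R^2 summarizes how much variance the model explains; the median absolute
# error is less sensitive to a handful of extreme residuals than the mean.
print("R^2:", r2_score(y_true, y_pred))
print("median absolute error:", median_absolute_error(y_true, y_pred))
```

Newer TensorFlow/Keras releases also include a built-in R^2-style metric you can pass to compile(), but check what your version supports.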
Finally, though I don't think it is available out of the box, by writing a custom loss function and/or callbacks you could collect the $Y - \hat{Y}$ values each iteration and then study the least accurate training examples.
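A rough sketch of the callback version (the names `X_train`/`y_train` and the "worst 10" cutoff are assumptions to adapt):

```python
import numpy as np
import tensorflow as tf

# After every epoch, compute Y - Y_hat on the training set and remember the
# indices of the least accurate examples for later inspection.
class ResidualTracker(tf.keras.callbacks.Callback):
    def __init__(self, x, y, top_k=10):
        super().__init__()
        self.x = x
        self.y = np.asarray(y).ravel()
        self.top_k = top_k
        self.worst_per_epoch = []  # list of (epoch, indices, residuals)

    def on_epoch_end(self, epoch, logs=None):
        y_hat = self.model.predict(self.x, verbose=0).ravel()
        residuals = self.y - y_hat
        worst = np.argsort(-np.abs(residuals))[: self.top_k]
        self.worst_per_epoch.append((epoch, worst, residuals[worst]))

# Usage sketch:
#   tracker = ResidualTracker(X_train, y_train)
#   model.fit(X_train, y_train, epochs=20, callbacks=[tracker])
```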
Thanks for your answer, ai_curious!
Yes, the MSE and R^2 can evaluate the overall quality of the model.
And writing a custom loss function for the least accurate training examples would be a good idea.
I hope there are some tools or standard regression metrics for evaluating specific examples. Maybe outliers among the training examples can give an indication of high losses in the first place.
Examining obvious outliers in the training values is a good practice. However, I am just as intrigued by the training inputs that don't appear to be outliers, yet for some reason their features cause inaccurate predictions. The custom loss function could help you identify which individual training samples do worse than, or disproportionately drive, the metrics, which generally deal with averages, right?
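One way to chase those "common traits" once you have per-sample errors (again just a sketch; `model`, a feature DataFrame `X_val`, and `y_val` are assumed, and the worst-5% cutoff is arbitrary):

```python
import numpy as np
import pandas as pd

# Assumes a fitted `model`, a feature DataFrame `X_val`, and targets `y_val`.
y_true = np.asarray(y_val).ravel()
y_pred = np.asarray(model.predict(X_val)).ravel()
abs_err = np.abs(y_true - y_pred)

# Split the validation set into the worst 5% of predictions and the rest.
cutoff = np.quantile(abs_err, 0.95)
features = pd.DataFrame(X_val).reset_index(drop=True)
worst = features[abs_err >= cutoff]
rest = features[abs_err < cutoff]

# Features whose means differ most between the two groups are candidates
# for a shared trait among the hard-to-predict examples.
comparison = pd.DataFrame({
    "worst_mean": worst.mean(numeric_only=True),
    "rest_mean": rest.mean(numeric_only=True),
})
comparison["abs_diff"] = (comparison["worst_mean"] - comparison["rest_mean"]).abs()
print(comparison.sort_values("abs_diff", ascending=False).head(10))
```

Comparing distributions rather than just means (histograms, value counts for categorical features) usually surfaces the traits more reliably, but the idea is the same.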