Issue regarding grading

The screenshot below shows the MSE and MAE results my notebook (C4 W4) printed, using an optimal learning rate and optimizer that I discovered:

As you can see, the passing criteria are an MSE of 6 or less and an MAE of 2 or less. Unfortunately, the auto-grader reports different values: the graded MAE is 5.25, and the MSE is 39.76.

Is there a bug in the grading system?

I’ve been mentoring in the various DeepLearning specializations for over 5 years, and the number of times the problem has turned out to be a bug in the grading system is very, very small. Much more likely is a procedural mistake, such as submitting an old model (one that didn’t pass the local tests, was then revised, but was never saved and resubmitted). Or a coding bug, such as hard-coding parameters so that the code works when called from your local notebook but fails to meet the thresholds when the grader runs it with different parameters or data. Remember, passing the local unit tests means only that: you passed the local unit tests. It doesn’t prove the code is ‘correct’. Treat the fact that it runs one way in one environment but differently elsewhere as your first clue :man_detective:
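To make the hard-coding pitfall concrete, here is a minimal, hypothetical sketch (the function and parameter names are illustrative, not from the actual assignment): the function ignores the argument it is given and bakes in the value that happened to work locally, so local tests pass while the grader's call does not.

```python
def create_model(window_size):
    # BUG: ignores the window_size argument and hard-codes the value
    # that happened to work in the local notebook.
    input_size = 64  # should be: input_size = window_size
    return {"input_size": input_size}

# Locally, window_size happens to be 64, so the bug is invisible:
local = create_model(64)
assert local["input_size"] == 64  # local unit test passes

# The grader calls the same function with a different parameter:
graded = create_model(30)
print(graded["input_size"])  # still 64 — wrong for the grader's setup
```

The fix is simply to use the passed-in argument everywhere instead of a literal that matches your local configuration.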

You should also try searching…

and look especially at threads showing a :white_check_mark:

Notice that this one reports exactly the same failed grader outputs that you do.

The grader tests your code using a totally different set of data.
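That difference in data is enough to explain diverging metrics on its own. As a toy illustration (the numbers here are made up, not the assignment's data), the same prediction rule can score well on the series it was tuned against and badly on another:

```python
def mse(y_true, y_pred):
    # Mean squared error over paired observations
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Mean absolute error over paired observations
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# A "model" tuned to look good on the local series: always predict 10.0
predict = lambda series: [10.0] * len(series)

local_series = [9.0, 10.0, 11.0]   # the data you tuned against
grader_series = [3.0, 5.0, 4.0]    # different data, as the grader uses

print(round(mse(local_series, predict(local_series)), 2))    # 0.67 — passes
print(round(mse(grader_series, predict(grader_series)), 2))  # 36.67 — fails
```

So identical code can legitimately produce passing metrics in the notebook and failing metrics in the grader; the metrics depend on the data the code is evaluated against.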