Description (include relevant info but please do not post solution code or your entire notebook):
I have submitted my programming assignment multiple times and the score was always 0. The 'show grader output' section shows the following:
"There was a problem compiling the code from your notebook, please check that you saved before submitting. Details: name 'series_valid' is not defined"

However, all the functions pass all the tests within the notebook. Please help.

Did this grader output message appear in general, or for a particular graded cell?

Before submitting, always make sure you run all the cells from top to bottom.

If you are confident your code is correct, first disconnect the kernel, then reconnect it and run each cell from beginning to end, one by one, making sure you pass all the tests before submitting.

If the kernel disconnects and reconnects itself while you are running any cell, you should rerun the cells from the beginning, one by one.

Then try submitting again.
Let me know if the issue persists.

There are code errors in your assignment. You have tried to use a NumPy recall function everywhere, which you were not supposed to.

In compute_metrics, you did not need to assign the true and predicted labels to the values you used.
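As a rough illustration of the point above, here is a minimal sketch of a compute_metrics-style function. The use of the Keras metric helpers and the variable names are assumptions for illustration, not the assignment's reference solution; the key idea is that the series passed in can be used directly as y_true and y_pred without reassigning them first.

```python
import numpy as np
import tensorflow as tf

def compute_metrics(true_series, forecast):
    # Use the inputs directly as y_true / y_pred; no intermediate
    # reassignment to other variables is needed.
    mse = tf.keras.metrics.mean_squared_error(true_series, forecast).numpy()
    mae = tf.keras.metrics.mean_absolute_error(true_series, forecast).numpy()
    return mse, mae

true_series = np.array([1.0, 2.0, 3.0, 4.0])
forecast = np.array([1.0, 2.0, 3.0, 5.0])
mse, mae = compute_metrics(true_series, forecast)
print(mse, mae)  # 0.25 0.25
```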

You didn't share your naive forecast code, so I cannot confirm it.

In your moving average forecast, where you append to forecast, you used np.mean, which you did not need. Use mean( ) in the forecast.append line instead.

Also, the slice time-window_size:time is incorrect; it should be time:time+window_size.
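Putting the two points above together, a moving average forecast could be sketched roughly as follows. The function name and signature are assumptions based on this thread, not the exact grader-expected code:

```python
import numpy as np

def moving_average_forecast(series, window_size):
    """Forecast each step as the mean of the window_size values starting at it."""
    forecast = []
    for time in range(len(series) - window_size):
        # Correct slice: the window spans series[time:time + window_size],
        # not series[time - window_size:time], and the NumPy array's own
        # .mean() is enough -- no separate np.mean call required.
        forecast.append(series[time:time + window_size].mean())
    return np.array(forecast)

series = np.arange(10, dtype=float)
fc = moving_average_forecast(series, 3)
print(fc)  # [1. 2. 3. 4. 5. 6. 7.]
```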

You didn't share your diff_series code either.

For diff_moving_avg, where you perform the slicing, that is again incorrect: taking the len of series_valid will not get you the diff moving average.
It should be [SPLIT_TIME-365-WINDOW_SIZE:]
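To make the slice above concrete, here is a hedged sketch. The series length (1461, roughly four years of daily data) and the SPLIT_TIME / WINDOW_SIZE values are illustrative assumptions; only the 365-step differencing and the SPLIT_TIME-365-WINDOW_SIZE slice come from this thread.

```python
import numpy as np

SPLIT_TIME = 1100   # illustrative values, not the assignment's
WINDOW_SIZE = 30
series = np.random.rand(1461)

# Difference the series against its value 365 steps earlier
# to remove yearly seasonality.
diff_series = series[365:] - series[:-365]

def moving_average_forecast(s, window_size):
    # as defined earlier in the assignment (sketch)
    return np.array([s[t:t + window_size].mean()
                     for t in range(len(s) - window_size)])

# Slice relative to the shorter, differenced series -- not len(series_valid):
diff_moving_avg = moving_average_forecast(diff_series, WINDOW_SIZE)
diff_moving_avg = diff_moving_avg[SPLIT_TIME - 365 - WINDOW_SIZE:]

# The result lines up with the validation period (length 1461 - 1100 = 361).
print(len(diff_moving_avg))  # 361
```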

You applied the same logic when calculating the past series, where you again used the len of series_valid, which is incorrect; you are supposed to use SERIES.
If your naive_forecast code is correct (I didn't see it), apply the same logic as in the naive forecast; the only difference is that the offset here is 365, whereas in the naive forecast it is 1.
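The naive-forecast analogy above could be sketched like this, under the same illustrative assumptions as before (series length and SPLIT_TIME are made up; the offsets 1 and 365 are from the thread):

```python
import numpy as np

SPLIT_TIME = 1100  # illustrative
SERIES = np.random.rand(1461)

# Naive forecast: predict each value with the one 1 step earlier.
naive_forecast = SERIES[SPLIT_TIME - 1:-1]

# Same slicing logic with an offset of 365: take the past values from the
# full SERIES (not from series_valid), one seasonal period before the split.
past_series = SERIES[SPLIT_TIME - 365:-365]

# Both slices match the validation length.
print(len(naive_forecast), len(past_series))  # 361 361
```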

Also, next time, please create a topic first; if a mentor wants to see your code, they will ask for it. Don't send direct DMs for different topics; rather, create a new topic. You can always tag a particular mentor with @ if you want them to look at your code.

Whenever I came across a grader failure on my own assignments, I would go back to the videos and ungraded labs to see how I could change my code to get a successful submission. This is just a suggestion on how I used to debug my code.

Both the naive forecast code and the diff_series code are shared in the screenshot in our private message; please take a look.

Also, even though the compute_metrics function isn't right, the grader output for the train_test_split function also returns a score of 0. Is that normal?

Thank you very much for reviewing!
I have a question about the compute_metrics code: why can't I call the function with its arguments? I checked the documentation for tf.keras.losses.MSE (TensorFlow v2.16.1), and the function has two arguments, y_true and y_pred.
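For reference, tf.keras.losses.MSE can indeed be called as a plain function with those two documented arguments. A tiny example (the data values are made up):

```python
import numpy as np
import tensorflow as tf

y_true = np.array([0.0, 1.0, 2.0])
y_pred = np.array([0.0, 1.0, 4.0])

# MSE is a plain function: pass the two arrays positionally, or by the
# documented keyword names y_true and y_pred.
mse = tf.keras.losses.MSE(y_true, y_pred)
print(float(mse))  # (0 + 0 + 4) / 3 = 1.3333...
```

Grading problems usually come from passing the wrong local variables into these slots, not from the function signature itself.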

Your code was good; it just got mixed up with the arguments, confused global and local variables, and used an incorrect recall function, which can happen to anyone.