Whenever I run my CNN code repeatedly, I get a different accuracy on each run.
I'm wondering if it's okay to run the code repeatedly, since it goes through the whole training process again each time.
Which week of the course and which assignment are you asking about?
It's not about an assignment.
It's a general doubt.
Since the NN cost function is not convex and the initial weight values are randomized, you can expect slightly different results each time you train a NN.
I asked which assignment you’re working on because in some assignments there are additional random factors applied within the notebook.
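For illustration (this is not course code; the model and dataset are just a toy example using scikit-learn's MLPClassifier), here is a minimal sketch of the effect: with no seed fixed, each run starts from different random weights and follows a different stochastic optimization path, so the reported accuracy varies from run to run.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy dataset with a fixed split, so only the training itself varies.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for run in range(3):
    # random_state is deliberately left unset: weights are re-randomized
    # on every run, so each fit converges to a different solution.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
    clf.fit(X_train, y_train)
    print(f"run {run}: test accuracy = {clf.score(X_test, y_test):.4f}")
```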
So, is it OK to run a model many times?
It's not related to the course.
I'm working on something outside of this course, and the initial weights are randomized there as well.
Thank you so much
Hi there,
yes, it is OK. In fact, it can make sense to run the training multiple times to understand the sensitivity of your optimization, since it is in general stochastic.
Still, if you want to achieve reproducibility, you can fix the random seed, e.g. via the set_random_seed function.
As @TMosh mentioned, you can find corresponding examples in the course material.
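As a minimal sketch, assuming TensorFlow 2.x (where the older tf.set_random_seed was renamed to tf.random.set_seed), you can fix all the relevant seeds before building and training the model:

```python
import random
import numpy as np
import tensorflow as tf

SEED = 42
random.seed(SEED)          # Python's built-in RNG
np.random.seed(SEED)       # NumPy, e.g. data shuffling
tf.random.set_seed(SEED)   # TF weight initialization, dropout, etc.
```

Note that even with fixed seeds, some GPU operations are nondeterministic, so results may only be approximately reproducible across hardware.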
Best regards
Christian
When iteratively checking performance or tuning the solution, the validation set can be used. Since it is reused (e.g. for hyperparameter tuning, or as the basis for deciding when training is good enough, in combination with performance on the training data, of course), information leaks into the training process at least implicitly.
A final test set that was never seen by the model needs to be reserved for a final, unbiased check, serving as a "litmus test" for the model.
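As a minimal sketch (the names and split ratios are just illustrative, not from the course), a three-way split can be done by calling scikit-learn's train_test_split twice:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# First split off the final test set; it is never touched during tuning.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)

# Then split the remainder into training and validation data.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15, random_state=0)
```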
Best regards
Christian
This thread might be relevant for you as well:
Best regards
Christian