Concerning happyModel, I am wondering why the loss and accuracy change greatly every time I run the same training for the network?

Thank you.

It turns out that they do not set the TF global seed in this notebook, so every time you instantiate the *happyModel*, you get a different random initialization of the weights. The shape of the solution surface for Neural Networks is in general very complex: if you start at a different initial point, your convergence path will be different and your solution will be different.

You can make things more consistent and deterministic by setting the TF global seed. You can do it at the beginning of the notebook (after the imports) or just insert a cell right before the cell that creates (instantiates) the *happyModel* with a fixed value like this:

```python
tf.random.set_seed(42)
```

Then try running the notebook multiple times, doing "Kernel → Restart and Clear Output" in between, and you should get consistent results.
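To see why setting the seed before instantiating the model makes the results reproducible, here is a minimal NumPy sketch (using `np.random.seed` as a stand-in for `tf.random.set_seed`, and a hypothetical `init_weights` function standing in for a layer's weight initializer): re-seeding before each "instantiation" yields identical initial weights, so training starts from the same point on the solution surface.

```python
import numpy as np

def init_weights(shape):
    # Hypothetical stand-in for a layer's random weight initialization.
    return np.random.randn(*shape)

np.random.seed(42)                 # analogous to tf.random.set_seed(42)
w_first = init_weights((3, 4))     # "instantiate" the model once

np.random.seed(42)                 # re-seed before re-instantiating
w_second = init_weights((3, 4))    # "instantiate" it again

# Identical seeds give identical initializations.
print(np.array_equal(w_first, w_second))  # True
```

Without the re-seed between the two calls, the second set of weights would be drawn from a different point in the random stream, which is exactly what happens in the notebook each time you re-create the *happyModel*.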

Here’s the TF documentation on how random seeds work.

Thank you for your response!