Setting the same seed in TensorFlow helps ensure that the code produces consistent results across different runs. However, the random number generator is not the only source of randomness in TensorFlow: other factors, such as the order in which data is processed or the way variables are initialized, can also affect the results. So it is possible for different versions of TensorFlow to produce different results even when the same seed is used.
To make code fully deterministic, it may be necessary to take additional steps beyond setting the random number generator seed, such as using TensorFlow functions or features that are designed to be deterministic, or carefully controlling other sources of randomness in the code.
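As a minimal sketch of the seeding idea (using NumPy's generator as a stand-in; the TensorFlow analogue is `tf.random.set_seed`, and newer versions also offer `tf.config.experimental.enable_op_determinism()`):

```python
import numpy as np

def random_draw(seed=None):
    # Seeding pins down the pseudo-random sequence.
    # TensorFlow equivalent: tf.random.set_seed(seed)
    rng = np.random.default_rng(seed)
    return rng.normal(size=3)

a = random_draw(seed=42)
b = random_draw(seed=42)

# Same seed -> identical draws on the same library version.
print(np.allclose(a, b))  # True

# Caveat: a *different library version* may change the generator or op
# implementations, so results can still diverge across versions even
# with identical seeds.
```

This is why seeding alone is necessary but not sufficient for reproducibility across TensorFlow versions.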
You can check these links, which may help you understand why this is done instead of trying to control for all of those factors.
Thank you @pastorsoto and @Alexandros_Tzoumas for asking and answering this question.
I think I understand the need to have deterministic runs, and that setting predefined W and b will keep the “notebook robust to changes in Tensorflow over time” — but isn’t this discarding the learned W and b that the model obtained?
I’m very confused about this step. Why does it ignore the calculated weights? What are the “saved weights from a previous training run”? How were they loaded? Why does using these weights (instead of the calculated ones) enable “this notebook to remain robust to changes in Tensorflow over time”?
The idea is that, technically speaking, training the model with the same code over different versions of Tensorflow may result in different trained weights.
The lab does not want any difference because its subsequent discussion depends on one particular set of trained weights.
To guarantee the same set of weights, one way would be for you, me, and every learner to run the notebook in the same coding environment and in the same manner. However, the lab designers cannot control how we run the notebook, and they cannot control which version of Tensorflow Coursera is giving us.
Therefore, the last resort is to let us train the model once to see how things happen, then ignore those potentially different trained weights and overwrite them with the set of weights that the lab designer intends to discuss in the remaining parts of the notebook.
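A hedged sketch of that “train, then overwrite” pattern, using a toy NumPy linear fit in place of `model.fit` (the saved values here are hypothetical; in a Keras notebook the overwrite is typically done with `layer.set_weights([W, b])`):

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(100, 1))
y = 2.0 * x + 1.0 + rng.normal(scale=0.01, size=(100, 1))

# "Train": closed-form least squares stands in for model.fit.
# The exact trained values could differ across library versions.
X = np.hstack([x, np.ones_like(x)])
w_trained, *_ = np.linalg.lstsq(X, y, rcond=None)

# Overwrite with the weights the lab discussion assumes
# (hypothetical saved values for illustration).
w_saved = np.array([[2.0], [1.0]])
w_used = w_saved  # the freshly trained values are discarded

print(w_used.ravel())  # [2. 1.]
```

The point is that everything after this cell depends on `w_used`, so every learner sees the same numbers regardless of how their training run turned out.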
Thank you for the answer @rmwkwok. I tried to run the following code using different weights. The only result that would change is the values of predictions. Why does it matter if this changes?
You have focused on the place that does not matter.
@WeiWei_Wang, I think it is an excellent thing for you to try and challenge things when doing so does not hurt anything, because it is one of the good ways to learn quickly.
However, you had not gone far enough. If I were you, I would:

1. Back up the original weight values. (This is essential, because you may want to change them back and forth to see different differences. Also, knowing how to go back is necessary to call it “not hurting”.)
2. For the sake of seeing the difference, change the weight values more radically — e.g. to -9999999999. My suggestion is arbitrary.
3. Go through the whole notebook — which is the point your last screenshot apparently missed. Did you share that screenshot because you had confirmed that all subsequent discussion saw no significant deviation?
Good luck trying, but remember to restore the notebook to its defaults, or put a note somewhere to remind your future self of what happened.
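The backup/modify/restore steps above can be sketched like this (NumPy stand-in with made-up weight values; in the lab you would use the layer’s `get_weights()` / `set_weights()` instead):

```python
import numpy as np

# Pretend these are the lab's fixed weights (hypothetical values).
W = np.array([[1.0, 2.0], [3.0, 4.0]])

# 1. Back up the original values before touching anything.
W_backup = W.copy()

# 2. Change the weights radically so any downstream effect is obvious.
W[:] = -9999999999.0

# 3. ... rerun the rest of the notebook and look for deviations ...

# 4. Restore the defaults when done.
W[:] = W_backup
print(np.array_equal(W, W_backup))  # True
```

Note that `W.copy()` matters: assigning `W_backup = W` alone would only alias the same array, and the backup would be clobbered in step 2.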