I was implementing the assignment (Implementing Callbacks in TensorFlow using the MNIST Dataset) on Jupyter Lab. During training, the accuracy after the 1st epoch was 0.9402, the callback was triggered after the 4th epoch, and the program ended. But when I ran the same code on Google Colab, the accuracy after the 1st epoch was only about 0.7, and the callback was still not reached even at the 100th epoch.
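For context, this is roughly the shape of the callback I am using (just a minimal sketch; the exact threshold and print message in my notebook may differ):

```python
import tensorflow as tf

class MyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        # Stop training once the training accuracy crosses the target threshold
        if logs is not None and logs.get('accuracy', 0.0) > 0.99:
            print('\nReached target accuracy, so cancelling training!')
            self.model.stop_training = True
```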
Could you set tf.random.set_seed(1) just after the imports and then run your notebook on both platforms? One reason could be that Coursera is running TensorFlow 2.7 while Colab is running 2.8. That said, 100 epochs seems like a lot. Please click my name and message me your notebook as an attachment if the results still differ after setting the seed.
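For example, the top of the notebook would look something like this (just a sketch, assuming the usual imports; the rest of your code stays unchanged):

```python
import tensorflow as tf

tf.random.set_seed(1)  # fix the global seed right after the imports

# ... load MNIST, define the model and callback, and call model.fit() as before
```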
Oh, thanks. Now it's working.
Glad it's working, sir! Setting the seed to a fixed value is an important trick, as it determines the initial starting weights on which the rest of the training is based. Also, a change in TensorFlow version, and sometimes even running with or without GPUs/TPUs (hardware accelerators), can change the accuracy you get!
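For example, here is a small sketch (not from the assignment) showing how the same global seed gives the same initial weights for a freshly built layer, at least on a fixed TF version and hardware:

```python
import tensorflow as tf

def initial_kernel():
    # Reset the global seed, then build a fresh Dense layer and return its
    # randomly initialised kernel (the "initial starting weights")
    tf.random.set_seed(1)
    layer = tf.keras.layers.Dense(4)
    layer.build(input_shape=(None, 3))
    return layer.kernel.numpy()

# With the same seed (and the same TF version/hardware), two independent
# builds start from identical weights
print((initial_kernel() == initial_kernel()).all())  # True
```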