Hey, in C2W1 I used rescale=1./255 and the model passed with good accuracy.
So why do I need to use rescale=1./255? I mean, how does writing 1./255 instead of the integer form 1/255 make any difference in training the model? I can't see any difference in the assignment cell for week 2 either.
It doesn’t make a difference, because of the type coercion rules in Python: if any operand in an expression is a floating-point value, then all the values are coerced to float.
The place where you might get in trouble is if you say this:
m = 5
x = 1 / m
In Python 2.x, the computation is done in integer arithmetic and the answer is 0 as an integer.
But they changed the rules in Python 3.x, so that you get 0.2.
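To make that concrete, here is a quick sketch (not from the assignment notebook; the generator names are just for illustration) showing that in Python 3 both spellings of the factor give the same float, so the generator rescales pixels from [0, 255] to [0, 1] either way:

```python
# Python 3 treats / as true division, so both forms of the factor are floats.
m = 5
x = 1 / m
print(x)         # 0.2 in Python 3 (would be 0 in Python 2)

print(1 / 255)   # 0.00392156862745098
print(1. / 255)  # 0.00392156862745098 -- same value, the dot just makes it explicit

# So in a Keras generator both spellings rescale pixels identically:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
gen_a = ImageDataGenerator(rescale=1. / 255)
gen_b = ImageDataGenerator(rescale=1 / 255)  # identical behaviour in Python 3
```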
Hmmm, so the accuracy issue in my model must not be related to the rescale=1./255 then.
OK, I will check where I need to work on it. Thank you, Paul. I understood the integer example you gave, but I remember that when I was doing a DeepLearning.AI course, the integer versus float form did make a difference in a test code cell, so I wanted to confirm this part. Thanks again.
There used to be cases in TensorFlow where the type coercion rules were less permissive than they are in general Python. In the Art Generation exercise in DLS C4, there used to be some landmines you could step on, but those have since been fixed, analogously to the Python 2.x to 3.x change I showed above.
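If it helps, here is a minimal sketch (not the course code, just an illustration) of the defensive pattern that sidesteps that class of problem: cast explicitly before mixing dtypes.

```python
import tensorflow as tf

pixels = tf.constant([[0, 128, 255]], dtype=tf.uint8)  # raw 8-bit pixel values

# Casting explicitly before dividing avoids TensorFlow's stricter dtype rules --
# unlike plain Python arithmetic, TF ops generally expect matching dtypes.
scaled = tf.cast(pixels, tf.float32) / 255.0
print(scaled.numpy())  # [[0.  0.5019608  1. ]]
```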
Yes, I did understand this part from the video. Hey, your answer got me to ask one more doubt: the rescaling will depend on the image bit representation, right? What if there is a variation in bit depth between the training images and the validation images, does that still have an effect on the rescale value?
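To illustrate what I mean, a hypothetical helper (not from the course materials) that scales an image to [0, 1] based on its own bit depth, so an 8-bit and a 16-bit image would end up on the same scale:

```python
import numpy as np

def to_unit_range(img):
    """Hypothetical helper: scale an integer image to [0, 1] by its own bit depth.

    A uint8 image is divided by 255, a uint16 image by 65535, so train and
    validation images end up on the same scale even if their bit depths differ.
    """
    if np.issubdtype(img.dtype, np.integer):
        return img.astype(np.float32) / np.iinfo(img.dtype).max
    return img.astype(np.float32)  # already float; assume it is in [0, 1]

img8 = np.array([[0, 128, 255]], dtype=np.uint8)
img16 = np.array([[0, 32768, 65535]], dtype=np.uint16)
print(to_unit_range(img8))   # [[0.        0.5019608 1.       ]]
print(to_unit_range(img16))  # [[0.         0.50000763 1.        ]]
```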