Got "inf"/"nan" when use Tensorflow to optimize self defined loss

I defined a complex loss function myself and am using TensorFlow's GradientTape to optimize it.
But during the iterations I got “inf” and “nan” values. I suspect this is caused by the “log” in the loss; it may be evaluating log(0).
Searching online, some solutions suggest changing the log call to “tf.log(tf.clip_by_value(y, 1e-8, 1.0))”, but I don't know whether this is correct.
From my perspective, it limits the value to [1e-8, 1.0], which may not be correct…
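
For context, here is a minimal sketch of the kind of setup I mean (the model, data, and my_loss are just toy placeholders, not my actual code):

```python
import tensorflow as tf

# Toy stand-ins for the real model and data (hypothetical).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, activation="sigmoid")])
x = tf.random.normal([4, 3])
y_true = tf.constant([[1.0], [0.0], [1.0], [0.0]])

def my_loss(y_true, y_pred):
    # log(y_pred) becomes -inf as soon as y_pred reaches exactly 0,
    # which then turns the loss and gradients into inf/nan.
    return -tf.reduce_mean(y_true * tf.math.log(y_pred))

with tf.GradientTape() as tape:
    y_pred = model(x)
    loss = my_loss(y_true, y_pred)
grads = tape.gradient(loss, model.trainable_variables)
```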

It’s always recommended to normalize the data before you pass it to a loss function. Once you normalize the data (for example with Z-score/standard normalization), you are much less likely to run into the case where the x in log(x) becomes 0.
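
As a minimal sketch of Z-score normalization in TensorFlow (the data here is a random placeholder, and the small eps in the denominator is my own addition to avoid dividing by zero):

```python
import tensorflow as tf

x = tf.random.normal([4, 3]) * 10.0 + 5.0   # placeholder raw data

# Z-score / standard normalization: zero mean, unit variance per feature.
mean = tf.reduce_mean(x, axis=0)
std = tf.math.reduce_std(x, axis=0)
x_norm = (x - mean) / (std + 1e-8)          # eps guards against std == 0
```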

But the method you mentioned is not wrong; you can do it that way too.
The specific values 1e-8 and 1.0 used in clip_by_value are arbitrary and can be adjusted as you wish. In this case it clips values smaller than 10^-8 up to 10^-8 and values greater than 1 down to 1. But it “might” affect the results of the optimization step, so it is better to use normalization.
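
In TF 2.x the op is tf.math.log (tf.log is the older TF 1.x name). Here is a minimal sketch of the clipped-log pattern, with safe_log as a hypothetical helper name:

```python
import tensorflow as tf

def safe_log(y, eps=1e-8):
    # Clip y into [eps, 1.0] before the log, so log never sees 0.
    # eps is an arbitrary floor, as discussed above.
    return tf.math.log(tf.clip_by_value(y, eps, 1.0))

y = tf.constant([0.0, 0.5, 2.0])
print(safe_log(y))   # -> [log(1e-8), log(0.5), log(1.0)], no inf/nan
```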

Hope you are enjoying the learning process! Have fun and take care.
Regards,
Nithin


Thank you Nithin. I ran into the log(x) problem you mentioned and also solved it by using the clip_by_value function.
I'm not sure how much impact it has on model performance, but at least the model can keep training…