Weighted loss function returning NaN

Hello Team,
I am stuck on Week 1, Assignment 3.
For the weighted loss function I am trying the code below:

    for i in range(len(pos_weights)):
        # for each class, add average weighted loss for that class
        y_pred += epsilon
        p_loss = pos_weights * (y_true) * tf.keras.backend.log(y_pred)
        n_loss = neg_weights * (1 - y_true) * tf.keras.backend.log(1 - y_pred)
        loss += -1 * tf.keras.backend.mean((p_loss + n_loss))   # complete this line
    return loss

However, when I run the code I am getting:

L(y_pred_1) =  nan
L(y_pred_2) =  nan
Difference: L(y_pred_1) - L(y_pred_2) =  nan

Not sure what's wrong. I could understand if the values were wrong, but I'm not sure why I am getting nan (not a number).
Any pointers to resolve the issue are appreciated.
Thank you!
Aditya

I’m not an AI4M mentor, but note that NaN is what you get if you take the log of 0 or a negative number. You might want to check your y_pred values to make sure none of them are <= 0 or >= 1.

If you added that += epsilon logic to try to avoid that case, note that it won't help in the case that y_pred starts out as 1, rather than 0. p_loss will be ok in that case, but n_loss will be NaN, right?
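
Here is a quick illustration with toy tensors (not the assignment code, and the variable values are made up) of why the epsilon placement matters. Putting the epsilon inside both log calls keeps every term finite, even for predictions of exactly 0 or exactly 1:

    import tensorflow as tf
    import tensorflow.keras.backend as K

    epsilon = 1e-7
    y_true = tf.constant([1.0, 0.0])
    y_pred = tf.constant([1.0, 0.0])   # extreme predictions: exactly 1 and exactly 0

    # Unsafe: log(1 - 1) = log(0) = -inf, and 0 * -inf = nan
    unsafe = y_true * K.log(y_pred) + (1 - y_true) * K.log(1 - y_pred)
    print(unsafe.numpy())              # contains nan

    # Safe: epsilon inside *both* logs keeps every term finite
    safe = y_true * K.log(y_pred + epsilon) + (1 - y_true) * K.log(1 - y_pred + epsilon)
    print(safe.numpy())                # small finite numbers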

Hi @Aditya_Deshpande,

As you can see, you are in a for loop, iterating with “i” through range(len(pos_weights)); however, you are not using that “i” in any of the objects within the “for” loop. For instance, you may want to use pos_weights[i] instead of just pos_weights.

So check all your objects and see when/where to access them using the “i”.
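
To make the indexing pattern concrete, here is a rough sketch with toy tensors. This is only an illustration, not the graded solution: the values are made up, and the column slicing assumes y_true / y_pred have shape (batch, num_classes).

    import tensorflow as tf
    import tensorflow.keras.backend as K

    # Toy data just to show the indexing pattern; names mirror the notebook,
    # but the values here are invented.
    y_true = tf.constant([[1.0, 0.0], [0.0, 1.0]])   # shape (batch, num_classes)
    y_pred = tf.constant([[0.9, 0.2], [0.3, 0.8]])
    pos_weights = [0.5, 0.5]
    neg_weights = [0.5, 0.5]
    epsilon = 1e-7

    loss = 0.0
    for i in range(len(pos_weights)):
        # use "i" everywhere: the i-th weights and the i-th column of y_true / y_pred
        p_loss = pos_weights[i] * y_true[:, i] * K.log(y_pred[:, i] + epsilon)
        n_loss = neg_weights[i] * (1 - y_true[:, i]) * K.log(1 - y_pred[:, i] + epsilon)
        loss += -1 * K.mean(p_loss + n_loss)
    print(loss.numpy())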

Does it make sense?

Juan