Weighted loss error

I get this error:

If you weighted them correctly, you’d expect the two losses to be the same.
With epsilon = 1, your losses should be, L(y_pred_1) = -0.4956203 and L(y_pred_2) = -0.4956203

Your outputs:

L(y_pred_1) = 0.97540486
L(y_pred_2) = 0.97540486
Difference: L(y_pred_1) - L(y_pred_2) = 0.0

Apart from any error I might have made, what I can't understand is how L(y_pred_x) can be negative. The log term is negative, it is multiplied by -1, and all the other terms are positive, so how can the result be a negative number?

Without exposing actual code: for the positive weights,
-1 * pos_weights * y_true * K.log(y_pred) must be positive, right?

Or is my understanding not correct? I've been stuck on this for a few weeks now :frowning: Do help.


Hi @getjaidev ,

Taking what you say

“-1 * pos_weights * y_true * K.log(y_pred) must be positive, right?”

Let's break this down:

pos_weights: positive
y_true: positive
log(…): positive

so pos_weights * y_true * log(…) > 0

… but at the beginning you have -1. That will turn this into a negative number.

Now let's look at the entire exercise:

The formula that we want to implement is:

loss = −(1/N) Σ [ w_p · y · log(f(x)) + w_n · (1 − y) · log(1 − f(x)) ]

This formula includes an array of positive weights (w_p) and an array of negative weights (w_n).

So we accumulate in a variable called ‘loss’ the negative of (w_p · y · log(f(x)) + w_n · (1 − y) · log(1 − f(x))).
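That accumulation can be sketched in plain numpy (my own stand-in, not the assignment's code: the function name, the loop shape, and the small default epsilon are illustrative assumptions, with K.* replaced by np.*):

```python
import numpy as np

def weighted_loss(y_true, y_pred, pos_weights, neg_weights, epsilon=1e-7):
    """Schematic weighted cross-entropy: sum over classes of the
    per-example mean of -(w_p*y*log(f) + w_n*(1-y)*log(1-f))."""
    loss = 0.0
    for i in range(len(pos_weights)):  # one term per class column
        loss_pos = -np.mean(pos_weights[i] * y_true[:, i]
                            * np.log(y_pred[:, i] + epsilon))
        loss_neg = -np.mean(neg_weights[i] * (1 - y_true[:, i])
                            * np.log(1 - y_pred[:, i] + epsilon))
        loss += loss_pos + loss_neg
    return loss

# Toy usage: with a tiny epsilon the logs are negative, so the loss is positive.
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = 0.7 * np.ones((2, 2))
print(weighted_loss(y_true, y_pred, [0.5, 0.5], [0.5, 0.5]))  # positive
```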

As you very well say, the logs will produce a positive result, and we know that ‘wp’ is positive, but we also have ‘wn’, which is an array of negative weights (as per the definition of the args in the class).

So, how can L(y_pred_x) be negative? Well, there is one possible cause:

if (w_p · y · log(f(x)) + w_n · (1 − y) · log(1 − f(x))) > 0, then multiplying this by −1 gives a negative value. And even if ‘wn’ were an array of positive values, this would still hold true.

What do you think?



I have used both (again not showing the indices since I do not want to show the code):

        loss_reg_p=-1*(pos_weights * y_true * K.log(y_pred))
        loss_reg_n=-1*(neg_weights * (1-y_true) * K.log(1-y_pred))
        loss_reg = loss_reg_p+loss_reg_n

Now if pos_weights and neg_weights are positive, y_true is positive, and log(y_pred) and log(1-y_pred) are negative, then loss_reg_p and loss_reg_n must both be positive because of the -1, right?
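That sign reasoning is easy to check numerically. A quick sketch with toy values (numpy in place of K, and no epsilon inside the log, so y_pred stays strictly between 0 and 1):

```python
import numpy as np

# With 0 < y_pred < 1 and no epsilon, both log terms are negative,
# so the leading -1 makes every element of each part non-negative.
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.7, 0.2, 0.9])
pos_weights, neg_weights = 0.25, 0.75

loss_reg_p = -1 * (pos_weights * y_true * np.log(y_pred))
loss_reg_n = -1 * (neg_weights * (1 - y_true) * np.log(1 - y_pred))

print((loss_reg_p >= 0).all(), (loss_reg_n >= 0).all())  # True True
```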

Hi @getjaidev ,

I also think your reasoning is right.

However, you may have missed the condition under which the code is executed: the test sets epsilon = 1, so the term becomes K.log(y_pred + epsilon) in the assignment.

When epsilon = 1, y_pred + epsilon > 1, so K.log(y_pred + epsilon) must be positive.
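A numeric sketch of why epsilon = 1 flips the sign: wp and wn below match the printout earlier in this thread, while y_true and the constant predictions 0.7 / 0.3 are my assumption about the test fixture, so treat the setup as illustrative.

```python
import numpy as np

epsilon = 1.0  # the value the test forces
y_true = np.array([[1, 1, 1],
                   [1, 1, 0],
                   [0, 1, 0],
                   [1, 0, 1]], dtype=float)
wp = np.array([0.25, 0.25, 0.5])   # positive-class weights (from the printout)
wn = np.array([0.75, 0.75, 0.5])   # negative-class weights (from the printout)

def loss(y_pred):
    # sum over classes of the per-example mean weighted term
    total = 0.0
    for i in range(3):
        total += np.mean(-(wp[i] * y_true[:, i] * np.log(y_pred[:, i] + epsilon)
                           + wn[i] * (1 - y_true[:, i]) * np.log(1 - y_pred[:, i] + epsilon)))
    return total

# Both logs are now positive (their arguments exceed 1), so the
# bracketed sum is positive and the leading minus makes the loss negative.
print(loss(0.7 * np.ones((4, 3))))  # ≈ -0.4956203
print(loss(0.3 * np.ones((4, 3))))  # ≈ -0.4956203
```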

I hope you will resolve this by rechecking the code of Exercise 3.

Best regards,

Hi Juan…

How is “wn” negative? My output shows this:

[0.25 0.25 0.5 ]

[0.75 0.75 0.5 ]

Nakamura… Thanks. That explains a lot.

I have really been struggling a lot with this. I don’t think the epsilon is being used in the tests since I get only positive returns.

Also I see this as the last two lines of output…

Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
4 Tests passed
2 Tests failed

Hi Nakamura…

Solved it. Your post was very helpful. It was indeed the epsilon issue.

Thank you.


Juan… Please ignore this question. Thanks for your help. It was an epsilon issue.