C1_W1_Assignment - weighted_loss

I’m trying to test weighted_loss(y_true, y_pred), and the following was the output:

Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
4 Tests passed
2 Tests failed

    for i in range(len(pos_weights)):
        # for each class, add average weighted loss for that class
        loss_pos = -1 * K.mean(pos_weights[i] * y_true[:, i] * K.log(y_pred[:, i] + epsilon))
        loss_neg = -1 * K.mean(neg_weights[i] * (1 - y_true[:, i]) * K.log(1 - (y_pred[:, i] + epsilon)))
        loss += loss_pos + loss_neg
    return loss

It results in the following error:
AssertionError: Not all tests were passed for weighted_loss. Check your equations and avoid using global variables inside the function.

Any guidance would be appreciated. Thanks.

Hi @slnarayan,

I think I found the mistake in your code:
K.log(1 - (y_pred[:,i]+epsilon))
should be:
K.log(1 - y_pred[:,i] + epsilon)
=> your code subtracts both y_pred and epsilon from 1 (i.e., 1 - y_pred - epsilon), while it should be 1 - y_pred + epsilon
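
A quick way to see the difference numerically (a minimal sketch with made-up numbers, not the grader's values):

import numpy as np

y_pred = 0.9999999
epsilon = 1e-7

wrong = np.log(1 - (y_pred + epsilon))  # log of (approximately) 0 -> -inf (or nan if it goes negative)
right = np.log(1 - y_pred + epsilon)    # log(2e-7), a finite value of about -15.4

print(wrong, right)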

Hope that helps. Let me know if it works.

Samuel

Hi @slnarayan ,

In addition to the point mentioned by the other mentor, I am also concerned about the following.

As described for the overall average cross-entropy loss in section 3.1, the loss has the following form:
L = -(1/N) * ( sum over positive examples of log(f(x)) + sum over negative examples of log(1 - f(x)) )

However, in your code, you separate the loss as follows:
loss += loss_pos + loss_neg,
where loss_pos = -1 * K.mean(A) and loss_neg = -1 * K.mean(B).

You should be careful of the fact that the following identity is NOT always guaranteed:
K.mean(A + B) == K.mean(A) + K.mean(B)

It might help to reconsider when to take the average with K.mean(), according to the definition of the loss function.

Hope that helps,
Nakamura

Thanks Samuel, you are correct.
Epsilon should be added to the predicted value, which in this case is (1 - y_pred). Appreciate the help.

Thanks Nakamura, I appreciate your insight. I’m a bit confused: my understanding is that “the mean of a sum is the sum of the means”, and I would like to understand when such a condition would not be guaranteed. It is a curious question; I hope it is alright to ask.

Hi @slnarayan ,

How about the following example?
Consider whether mean(C) = mean(A) + mean(B) holds, where A = (1, 2, 3, 4), B = (6, 8), and C is the concatenation of A and B: (1, 2, 3, 4, 6, 8).

The mean of A is 2.5, and the mean of B is 7.0.
However, the mean of C is 4.0, so mean(C) is NOT equal to mean(A) + mean(B).

I made up the above example to illustrate the point.

You can try the code below.

import numpy as np

A = np.array([1, 2, 3, 4])
B = np.array([6, 8])
C = np.concatenate([A, B], axis=0)

print(f"Mean of A: {np.mean(A)}")  # 2.5
print(f"Mean of B: {np.mean(B)}")  # 7.0
print(f"Mean of C: {np.mean(C)}")  # 4.0

Hope that helps,
Nakamura

Thanks, I appreciate the demonstration through an example. I get it. I’ve changed the code appropriately and it works. Good mentoring!


Respected sir/madam,
I am getting a SyntaxError at "return loss" in the weighted_loss function. Kindly correct me.
Thank you so much.

Hello @slnarayan , I was just wondering why in the definition of loss_neg you put a (-1) on the outside. On the assignment it says:

loss_neg = w_neg * (1 - y_true) * log(1 - y_pred)

But you have done:
loss_neg = -1 * w_neg * (1 - y_true) * log(1 - y_pred)

Thanks, would love to hear why this is the case (as making this correction to my code made it work). Would also appreciate your input @nakamura .

Hi @Jainil_Shah1

It may be a little confusing, but in the loss expression the two terms are grouped inside one set of parentheses, and the whole parenthesized sum is multiplied by -1.

L = -( w_pos * y * log(f(x)) + w_neg * (1 - y) * log(1 - f(x)) )
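
Splitting the parentheses, the -1 distributes over both terms:

L = -( w_pos * y * log(f(x)) ) + -( w_neg * (1 - y) * log(1 - f(x)) )
  = loss_pos + loss_neg

so each of loss_pos and loss_neg carries its own -1, which is why the -1 appears in both lines of the code.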

Hope it may help you
Nakamura

If I have these arrays:
labels [0, 0, 1, 1, 0] with predicted probabilities [0.2, 0.4, 0.3, 0.1, 0.8],
is the calculation:
Wp = 3/5 and
Wn = 2/5,
Loss P total = 3/5 * -ln(0.3) + 3/5 * -ln(0.1), and
Loss N total = 2/5 * -ln(1 - 0.2) + 2/5 * -ln(1 - 0.4) + 2/5 * -ln(1 - 0.8)
???
Please help me, I don't understand.

Hello @Jair_Alexander_Lozan

Sorry for the delayed response. I hope your issue is resolved by now; if not, let me know.
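
In case it is still useful, here is a small numpy sketch (my own illustration, not the official solution) that checks the numbers in your example. Whether the final loss is the sum of these terms or their average over the 5 examples depends on the exact definition given in the assignment, so please check the formula there.

import numpy as np

y_true = np.array([0, 0, 1, 1, 0])
y_pred = np.array([0.2, 0.4, 0.3, 0.1, 0.8])

# weights as in your post: Wp = fraction of negatives, Wn = fraction of positives
w_p = np.mean(y_true == 0)  # 3/5
w_n = np.mean(y_true == 1)  # 2/5

# per-example weighted terms (epsilon omitted for clarity)
pos_terms = w_p * y_true * -np.log(y_pred)
neg_terms = w_n * (1 - y_true) * -np.log(1 - y_pred)

print("sum of positive terms:", pos_terms.sum())                    # about 2.10
print("sum of negative terms:", neg_terms.sum())                    # about 0.94
print("mean over all 5 examples:", (pos_terms + neg_terms).mean())  # about 0.61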

Regards
DP