Problem with weighted loss

{moderator edit - solution code removed}

Running this function produces the following test output and error:

y_true:
[[1. 1. 1.]
 [1. 1. 0.]
 [0. 1. 0.]
 [1. 0. 1.]]

w_p:
[0.25 0.25 0.5 ]

w_n:
[0.75 0.75 0.5 ]

y_pred_1:
[[0.7 0.7 0.7]
 [0.7 0.7 0.7]
 [0.7 0.7 0.7]
 [0.7 0.7 0.7]]

y_pred_2:
[[0.3 0.3 0.3]
 [0.3 0.3 0.3]
 [0.3 0.3 0.3]
 [0.3 0.3 0.3]]

If you weighted them correctly, you'd expect the two losses to be the same.
With epsilon = 1, your losses should be, L(y_pred_1) = -0.4956203 and L(y_pred_2) = -0.4956203

Your outputs:

L(y_pred_1) =  0.4942571
L(y_pred_2) =  -0.49425706
Difference: L(y_pred_1) - L(y_pred_2) =  0.9885142 

Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
 4  Tests passed
 2  Tests failed
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-63-9fde66515dc4> in <module>()
      5 ### do not edit anything below
      6 sess = K.get_session()
----> 7 get_weighted_loss_test(get_weighted_loss, epsilon, sess)

~/work/W1A1/public_tests.py in get_weighted_loss_test(target, epsilon, sess)
    180     ]
    181 
--> 182     multiple_test_weight_loss(test_cases, L, sess)

~/work/W1A1/test_utils.py in multiple_test_weight_loss(test_cases, target, sess)
    141         print('\033[92m', success," Tests passed")
    142         print('\033[91m', len(test_cases) - success, " Tests failed")
--> 143         raise AssertionError("Not all tests were passed for {}. Check your equations and avoid using global variables inside the function.".format(target.__name__))
    144 
    145 

AssertionError: Not all tests were passed for weighted_loss. Check your equations and avoid using global variables inside the function.

Hi @Abhilash_Dhal,

You need to revisit the equation above the exercise and account for epsilon as indicated:

“Note that we also want to add a small value, 𝜖, to the predicted values before taking their logs. This is simply to avoid a numerical error that would otherwise occur if the predicted value happens to be zero.”

You also need to calculate the mean using K.mean (the Keras backend):

“Recall that for the multi-class loss, we add up the average loss for each individual class”
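To make the two hints concrete, here is a minimal NumPy sketch of the structure they describe (illustrative only, not the graded solution, which should use the Keras backend ops such as K.mean and K.log): for each class, weight the positive and negative log terms, add 𝜖 inside each log, average over the examples, and sum the per-class averages. The function and variable names below are my own, not from the assignment.

```python
import numpy as np

def weighted_loss_sketch(y_true, y_pred, w_p, w_n, epsilon=1.0):
    # Loop over classes: weight the positive and negative log terms,
    # add epsilon inside the logs to avoid log(0), average over the
    # examples (the K.mean step), then sum the per-class averages.
    loss = 0.0
    for c in range(y_true.shape[1]):
        pos = w_p[c] * y_true[:, c] * np.log(y_pred[:, c] + epsilon)
        neg = w_n[c] * (1 - y_true[:, c]) * np.log(1 - y_pred[:, c] + epsilon)
        loss += -np.mean(pos + neg)
    return loss

# Values taken from the test log above
y_true = np.array([[1, 1, 1], [1, 1, 0], [0, 1, 0], [1, 0, 1]], dtype=float)
w_p = np.array([0.25, 0.25, 0.5])
w_n = 1.0 - w_p
L1 = weighted_loss_sketch(y_true, np.full((4, 3), 0.7), w_p, w_n, epsilon=1.0)
L2 = weighted_loss_sketch(y_true, np.full((4, 3), 0.3), w_p, w_n, epsilon=1.0)
print(L1, L2)  # both approximately -0.4956203, as the test expects
```

With epsilon = 1 and correct weighting, L(y_pred_1) and L(y_pred_2) come out equal, which is exactly the symmetry the failing test is checking.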

Regards,
Samuel