Hi,
I am facing two issues, and both seem related — likely something to do with mixing tensors and NumPy arrays in the same expression.
Issue 1: All my tests pass, but during grading it complains about the data type returned by the `weighted_loss` function when calculating the final loss value.
Issue 2: It also seems like `K.mean` calculates the wrong value on its own, so I have to divide manually by the total number of samples.
I've explained each issue below.
Issue 1:
Attempt 1:
loss = -K.mean(loss/len(y_true))
print(loss.dtype)
*Your outputs:*
L(y_pred_1) = -0.49562032355455976
L(y_pred_2) = -0.49562032355455976
Difference: L(y_pred_1) - L(y_pred_2) = 0.0
<dtype: 'float64'>
Error: Data-type mismatch. Make sure it is a np.float32 value.
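To illustrate what I think is happening (my assumption: the NumPy inputs default to float64, and the result inherits that dtype all the way through), here is a small NumPy-only sketch with made-up values standing in for the tensor computation:

```python
import numpy as np

# NumPy array literals default to float64
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.8, 0.2], [0.3, 0.7]])

# a cross-entropy-style term; every intermediate stays float64
loss = -(y_true * np.log(y_pred + 1e-7)).mean()
print(loss.dtype)  # float64 -- the same promotion shows up in the tensor version
```

If that assumption is right, the `<dtype: 'float64'>` printout above comes from the inputs, not from `K.mean` itself.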
Attempt 2:
loss = tf.cast(-K.mean(loss/len(y_true)), tf.float32)
print(loss.dtype)
*Your outputs:*
L(y_pred_1) = -0.4956203
L(y_pred_2) = -0.4956203
Difference: L(y_pred_1) - L(y_pred_2) = 0.0
<dtype: 'float32'>
All tests passed.
HOWEVER, this is what I get when I submit attempt 2 for grading:
"Unexpected error occurred during function check. We expected function `get_weighted_loss` to return a function, 'weighted_loss', and the 'weighted_loss' function should return a type <class 'float'>. Please check that this function is defined properly. "
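The grader message asks for `<class 'float'>`, whereas `tf.cast` still returns a float32 *tensor*. One possible fix (an assumption on my part, since I can't see the grader) is to convert the final scalar with the built-in `float(...)` before returning it. A NumPy scalar stands in for the tensor here:

```python
import numpy as np

loss = np.float32(-0.4956203)   # stand-in for the final scalar loss value
print(type(loss))               # <class 'numpy.float32'>, not <class 'float'>

loss_py = float(loss)           # plain built-in float, matching the grader message
print(type(loss_py))            # <class 'float'>
```

In eager mode, `float(...)` works the same way on a scalar tensor, so the cast in Attempt 2 may not even be needed.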
Issue 2:
`loss = -K.mean(loss)` calculates the wrong value for the loss:
*Your outputs:*
L(y_pred_1) = -1.982481294218239
L(y_pred_2) = -1.982481294218239
Difference: L(y_pred_1) - L(y_pred_2) = 0.0
Error: Wrong output. One possible mistake, your epsilon is not equal to 1.
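Just to show why I suspect a normalization mismatch rather than an epsilon problem (the shapes and values here are made up): `K.mean` averages over *every* element of the tensor, so any formula that normalizes by a different count will be off by a constant factor, which matches the two results above differing by exactly the extra `len(y_true)` division.

```python
import numpy as np

loss = np.arange(8, dtype=np.float32).reshape(4, 2)  # 4 samples x 2 classes, toy values

# K.mean-style reduction: averages over all 4*2 = 8 elements
mean_all = loss.mean()

# alternative normalization: sum over classes, then average over samples
mean_per_sample = loss.sum(axis=1).mean()

print(mean_all, mean_per_sample)  # 3.5 vs 7.0 -- off by the class count
```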
Thanks for any help!