Hello there!
I am working on the `triplet_loss()` function in C4 W4 P1 UNQ_C1, and I have a conceptual question about the following test case:
```python
y_true = (None, None, None)
y_pred_perfect = ([1., 1.],  # Anchor
                  [1., 1.],  # Positive
                  [0., 0.])  # Negative
loss = triplet_loss(y_true, y_pred_perfect, 3)  # alpha = 3
assert loss == 1., "Wrong value. Check that pos_dist = 0 and neg_dist = 2 in this example"
```
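For reference, this is how I read the per-triplet loss from the assignment (the final `loss` then reduces this over the m triplets in the batch):

$$
\mathcal{L}(A, P, N) = \max\left(\lVert f(A) - f(P)\rVert_2^2 - \lVert f(A) - f(N)\rVert_2^2 + \alpha,\; 0\right)
$$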
- We have a perfect match between `y_pred_perfect[0]` and `y_pred_perfect[1]`.
- We have a perfect non-match between `y_pred_perfect[0]` and `y_pred_perfect[2]`.
- We have two samples of each (`m = 2`).

Thus, when indeed `pos_dist = 0` and `neg_dist = 2`, the result of `loss` is 0 instead of 1:
Question

- `basic_loss` comes out as -1.0 = 0.0 - 4.0 + 3 (the difference of the squared distances plus alpha, as I compute it).
- `loss` (the reduction over the m per-triplet losses) should be 1., but it is 0.: the grader wants 1, yet max(-1., 0.) = 0.
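To spell out that arithmetic, here is a plain-Python sketch using my values (`pos_dist`, `neg_dist`, and `alpha` below are the numbers from my own run, not necessarily what the grader computes):

```python
# Values from my run of the test case above (illustrative only)
pos_dist = 0.0   # anchor and positive embeddings are identical
neg_dist = 4.0   # the anchor-negative distance as I compute it
alpha = 3.0

basic_loss = pos_dist - neg_dist + alpha   # 0.0 - 4.0 + 3.0 = -1.0
loss = max(basic_loss, 0.0)                # hinge at zero -> 0.0

print(basic_loss, loss)   # -1.0 0.0, but the assert expects loss == 1.0
```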
Why does the grader expect a loss of +1 (`assert loss == 1.`) when, as far as I can tell, the max of the loss in this test case is 0?
Thanks for any feedback!