In UNQ_C3 (triplet loss function) I am getting results, but 1 test case out of 6 failed. I am not able to figure out my mistake… Can anyone help me?
This is my labid: cmirohde
I have (maybe) the same issue: 5 tests passed, 1 failed.
In the wrong one (which is also the example), the result is 1.0 instead of the expected ~0.7.
Is it the same for you?
Yes, the same issue.
So maybe we need help from @arvyzukai
I cannot check your solutions by labid. You could private message me your notebook and I can take a look at it.
Hi @arvyzukai , Can you please help us on this issue…?
Private message me your Assignment notebook and I will help you to get back on track
When you try to find the mask_exclude_positives value, make sure that you put your first statement in brackets (...), so that you have:
mask_exclude_positives = (...) | (...)
and then you will have your loss ~0.7.
Cheers
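For anyone following along, here is a minimal plain-NumPy sketch of that hint. It is an illustration only: the notebook uses trax's fastnp, the names merely mirror the notebook variables, and the 2x2 scores matrix is the one from the example further down in this thread.

```python
import numpy as np

# Minimal sketch (assumed values, not the graded code): build the mask as the OR
# of two boolean statements, each wrapped in its own brackets.
scores = np.array([[ 1.0      ,  0.9535077],
                   [-0.9535077, -1.0      ]])
batch_size = scores.shape[0]
positive = np.diagonal(scores)                                    # diagonal = "positive" scores
negative_zero_on_duplicate = scores * (1.0 - np.eye(batch_size))  # zero out the diagonal

# First statement in brackets, OR'd with the second one:
mask_exclude_positives = (np.eye(batch_size) == 1) | \
                         (negative_zero_on_duplicate > positive.reshape(batch_size, 1))
print(mask_exclude_positives)
# [[ True False]
#  [ True  True]]
```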
Thanks @arvyzukai , This helped me solve the issue.
Thanks to those instructions I also managed to solve the issue.
Two comments:
note that the instructions within the code cell and the instructions within the markdown cell just above are different; maybe you should make them consistent.
Also, the example is confusing: the two batches are just unexpected. While v1_1 and v2_1 are ‘similar’ (actually identical), the elements v1_2 and v2_2 are definitely ‘not similar’ (actually opposites). This is very confusing while developing, since the element (2,2) on the score diagonal is NOT a positive, so all the variable definitions are shaky.
Thank you @Vincenzo_Lavorini for the feedback.
Could you please be more specific which instructions and code you mean in particular?
Also, the example is confusing: the two batches are just unexpected. While v1_1 and v2_1 are ‘similar’ (actually identical), the elements v1_2 and v2_2 are definitely ‘not similar’ (actually opposites). This is very confusing while developing, since the element (2,2) on the score diagonal is NOT a positive, so all the variable definitions are shaky.
Again, I have reread your comment a couple of times and could not understand the issue. Could you take a screenshot and highlight the parts that cause confusion?
For the different instructions: instead of using mask_exclude_positives = (...) | (...) to build the negative_without_positive, as you specified above, in the markdown cell above the code cell for the TripletLossFn we read:
“Next, we will create the closest_negative. […] To implement this, […] Multiply fastnp.eye(batch_size) with 2.0 and subtract it out of scores. The result is negative_without_positive […]”
For the batch:
v1 = np.array([[ 0.26726124, 0.53452248, 0.80178373],[-0.5178918 , -0.57543534, -0.63297887]])
v2 = np.array([[0.26726124, 0.53452248, 0.80178373],[0.5178918 , 0.57543534, 0.63297887]])
With such batch, we get:
scores:
[[ 1. 0.9535077]
[-0.9535077 -1. ]]
positives:
[ 1. -1.]
negative_zero_on_duplicate
[[ 0. 0.9535077]
[-0.9535077 -0. ]]
mean_negative [ 0.9535077 -0.9535077]
mask_exclude_positives
[[ True False]
[ True True]]
negative_without_positive
[[-2. 0.9535077]
[-2.9535077 -2. ]]
closest negative
[ 0.9535077 -2. ]
triplet_loss1
[0.20350772 0.29649228]
triplet_loss2
[0.20350772 0. ]
Triplet Loss: 0.7035077
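For reference, the numbers above can be reproduced with the self-contained NumPy sketch below. It is an illustration only, not the graded TripletLossFn; it assumes alpha = 0.25 and that the two per-example losses are summed over the batch, which is what yields the 0.7035077 shown.

```python
import numpy as np

# Illustrative sketch only (assumed names and alpha), reproducing the walk-through above.
v1 = np.array([[ 0.26726124,  0.53452248,  0.80178373],
               [-0.5178918 , -0.57543534, -0.63297887]])
v2 = np.array([[ 0.26726124,  0.53452248,  0.80178373],
               [ 0.5178918 ,  0.57543534,  0.63297887]])
alpha = 0.25

scores = np.dot(v1, v2.T)                                   # cosine similarities (rows are unit vectors)
batch_size = scores.shape[0]
positive = np.diagonal(scores)                              # the (supposed) positive pairs
negative_zero_on_duplicate = scores * (1.0 - np.eye(batch_size))
mean_negative = np.sum(negative_zero_on_duplicate, axis=1) / (batch_size - 1)

mask_exclude_positives = (np.eye(batch_size) == 1) | \
                         (negative_zero_on_duplicate > positive.reshape(batch_size, 1))
negative_without_positive = negative_zero_on_duplicate - 2.0 * mask_exclude_positives
closest_negative = negative_without_positive.max(axis=1)

triplet_loss1 = np.maximum(0.0, mean_negative - positive + alpha)
triplet_loss2 = np.maximum(0.0, closest_negative - positive + alpha)
triplet_loss = np.sum(triplet_loss1 + triplet_loss2)

print(triplet_loss)   # ~0.7035077
```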
Note that:
- positive: we get two values, the two on the diagonal, but only one is actually a positive (v1_1 with v2_1), not the second one
- negative_zero_on_duplicate: we get 0 also where there are no duplicates, i.e. in position (1, 1)
- negative_without_positive: the value in (0, 1) changes!
- closest negative: the second value (-2) should not exist, and anyway, closest to what?

Do you mean that the markdown cell does not mention the second part of the mask (that we exclude values higher than the positive), or that the mask (as a helper variable) is not mentioned at all?
In regard to the second point: yes, I totally agree that the values and the dimensions (batch size is now 2 instead of 4) are poorly chosen (especially if you try to go step by step). My thoughts are that they were chosen just to quickly test the function, but more thought could have been put into it. A bigger batch size and more representative values would have made much more sense.
It describes just another technique; I bet that markdown cell was created for a previous version of the function.
Glad I am not the only one who thinks the example should be updated.
Anyway, thank you for everything, for the suggestions and for listening.
Cheers!
I have the same issue, but my Triplet Loss is 0.7964923 and I still cannot pass.
If anyone gets the same result, you probably need to reshape the last variable in the second brackets:
mask_exclude_positives = (...) | (... .reshape(batch_size, 1))
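A small illustration of why that reshape matters, using the example values from earlier in the thread (assumed here for demonstration): without the reshape, positive broadcasts across columns instead of rows, which produces the transposed mask that several people in this thread are seeing.

```python
import numpy as np

# Assumed example values from the walk-through above.
negative_zero_on_duplicate = np.array([[ 0.0      ,  0.9535077],
                                       [-0.9535077,  0.0      ]])
positive = np.array([1.0, -1.0])
batch_size = positive.shape[0]

print(negative_zero_on_duplicate > positive)
# [[False  True]
#  [False  True]]   <- compares column j against positive[j]: the transposed (wrong) mask

print(negative_zero_on_duplicate > positive.reshape(batch_size, 1))
# [[False False]
#  [ True  True]]   <- compares row i against positive[i]: the intended second part of the mask
```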
I’ve been able to get the matrices shown above that do not depend on mask_exclude_positives, but the latter has eluded me for the past three days. I feel like I’m on the right track because I am getting many of the matrices discussed above correct. But the information in More Detailed Instructions makes no reference to the mask at all, and indeed describes how to find a value for negative_without_positive without using a mask, as well as showing that closest_negative can also be derived without the mask.
Can I get some explanation of how to calculate the mask (in particular, the second one) so that I may continue this assignment? My id is agoixzfg.
You are correct, More Detailed Instructions conflict with the comment in the code, namely:
“Multiply fastnp.eye(batch_size) with 2.0 and subtract it out of scores. The result is negative_without_positive.”
vs:
# multiply `mask_exclude_positives` with 2.0 and subtract it out of negative_zero_on_duplicate
Both ways are possible/correct to find negative_without_positive. I would suggest the latter.
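To make the difference concrete, here is an illustrative comparison of the two variants on the 2x2 example from this thread (plain NumPy, assumed values, not the notebook's graded code):

```python
import numpy as np

# Assumed example values from the walk-through above.
scores = np.array([[ 1.0      ,  0.9535077],
                   [-0.9535077, -1.0      ]])
batch_size = scores.shape[0]
positive = np.diagonal(scores)
negative_zero_on_duplicate = scores * (1.0 - np.eye(batch_size))
mask_exclude_positives = (np.eye(batch_size) == 1) | \
                         (negative_zero_on_duplicate > positive.reshape(batch_size, 1))

# "More Detailed Instructions" variant: subtract 2 * eye from scores.
print(scores - 2.0 * np.eye(batch_size))
# [[-1.         0.9535077]
#  [-0.9535077 -3.       ]]

# Code-comment variant: subtract 2 * mask from negative_zero_on_duplicate.
print(negative_zero_on_duplicate - 2.0 * mask_exclude_positives)
# [[-2.         0.9535077]
#  [-2.9535077 -2.       ]]
```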
Regarding finding the closest_negative: neither the instructions nor the code comments mention the mask, and it is not needed directly (indirectly it has already been used to find negative_without_positive).
Try to find negative_without_positive first: multiply mask_exclude_positives with 2.0 and subtract it out of negative_zero_on_duplicate. The closest_negative is easier; the code comment gives you the exact code: Hint: negative_without_positive.max(axis = [?]), and More Detailed Instructions give you the hint for the axis.
If you still won’t be able to implement the code please feel free to private message me your Assignment notebook and I will help you with your code.
Cheers
Hi, @arvyzukai,
I had really hoped not to have to bother you again, but I am still getting an error (on the first test after defining TripletLossFn(v1, v2)). I get 5 correct and 1 fail, and I’m not even confident that that is the correct count.
I followed your initial directions with respect to getting the result for negative_without_positive without using the mask†.
How do I private message you?
Thanks!
† I also tried the masks, but I am getting the transpose of the second mask. I’m using (negative_zero_on_duplicate >= positive.reshape(batch_size)), but that is obviously wrong. I could go into more detail, if you want.
I’m still looking over my results. The triplet loss should be ~0.7035, but I’m getting 0.5, which is consistent with my loss1 = 0.9535077 - 0.9535077 and loss2 = 0.20350772 + 0.29649228, with the full loss at 0.5. The loss of 0.7035 would result if loss1 summed to 0.5 and loss2 were just 0.20350772. So I can see what is going on, but I don’t know how to fix it.
Is there no one else that is running into these discrepancies?