Triplet Loss *CORRECTION* necessity


I don’t get the reason for this correction. What Prof. Andrew said in the course seems correct to me: when the encoding is independent of the input image, the output is always zero.

"Another way for the neural network to give a trivial output is if the encoding for every image was identical to the encoding to every other image, in which case, you again get zero minus zero ",

He means that you get " N minus N" , which is zero.
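To make the "N minus N" point concrete, here is a minimal sketch (my own illustration, not from the course materials) of the standard triplet loss, max(‖f(A)−f(P)‖² − ‖f(A)−f(N)‖² + α, 0). If the network collapses and produces the same encoding N for every image, both squared distances are zero, the two terms cancel, and the loss is stuck at the margin α:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # Standard triplet loss: max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0)
    d_pos = np.sum((anchor - positive) ** 2)  # distance to positive
    d_neg = np.sum((anchor - negative) ** 2)  # distance to negative
    return max(d_pos - d_neg + alpha, 0.0)

# Degenerate case: the network maps every image to the same encoding N.
enc = np.ones(128)  # identical (nonzero) encoding for anchor, positive, negative
loss = triplet_loss(enc, enc, enc)
print(loss)  # "N minus N" cancels, leaving the margin alpha = 0.2
```

So whether the identical encoding is all zeros or some other fixed vector, the difference of distances is zero either way, and the margin α is what prevents this trivial solution from satisfying the loss.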


Hi @saiman,

Yeah, I agree with you, the correction muddies things. I think "zero minus zero" makes sense in the context in which it was said.

I’m not sure how to request a correction on there but you should flag it as “Content Improvement”. I’ve done so too.

Thanks for helping make the course better for everybody.


Thanks @neurogeek for your answer.