Hello,

I do not understand what's wrong with my code. Why is the L2 distance so large? Please help!

The following is my implementation. Thanks!

You are just creating a 2-tuple of the two encodings and then taking the norm of that. That is not what the instructions tell you to do, right? You want to compute the distance between the two encodings, which is defined as the L2 norm of their difference.

Hi Paul, I didn't get what you mean?

The material in the notebook explains all this. In the section on the verify function (exercise 2), it explains that you are supposed to compute the "distance" between the encoding of the image of the person requesting entry and the corresponding encoding in the database for that "identity". In the earlier triplet loss section, they use the square of the L2 norm of the difference between two encodings as the definition of "distance". In this section, they define the "distance" between two encodings as the L2 norm of the difference between them. The encodings are 128-entry vectors, which are the output of forward propagation using the pretrained model that they provide here.
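To make that concrete, here is a minimal sketch of the intended computation. The encodings below are made-up placeholder vectors, not real model output; in the notebook they would come from forward propagation of the pretrained model.

```python
import numpy as np

# Hypothetical 128-entry encodings standing in for the model's output.
encoding = np.full(128, 0.1)           # encoding of the image at the door
database_encoding = np.full(128, 0.3)  # stored encoding for that identity

# "Distance" as defined in the verify section:
# the L2 norm of the difference between the two encodings.
dist = np.linalg.norm(encoding - database_encoding)
print(dist)  # sqrt(128 * 0.2**2) ≈ 2.2627
```

Note the subtraction inside the norm: that is the key step.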

Everything I’m saying here is just repeating the material in the notebook. If what I said above does not make sense, then it would be a good idea to read the notebook again from the beginning. It’s all explained there.

Thank you Paul. Why can't I pass with dist = np.linalg.norm((database[identity], encoding))?

Because that does not take the norm of the difference of the two vectors. Where in that expression is the subtraction? What that code does is stack the two vectors together and take the norm of all 256 entries at once, as if they were one 256-entry vector. That is *not* the same thing.