Hello everyone,
In the lab Face_Recognition.ipynb,
they mention “By the end of this assignment, you’ll be able to apply the triplet loss function to learn a network’s parameters in the context of face recognition”.
However, the course does not explain in any detail how to proceed.
Any hint?
Regards,
Francis
The point is that now that you know the cost function, you can run the training to learn parameters. But training a real solution for something like this is quite expensive and they don’t really show us any more details. Nor do they really discuss the architecture of the FaceNet model, although we import and use it in the assignment. You could look at the model: print the “summary()” output for it after it is loaded. They point out that it is based on the Inception architecture and give you a link to that paper.
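If you want to poke at the architecture yourself, the inspection calls are the same on any loaded Keras model. FaceNet itself is not in keras.applications, so this sketch just builds an InceptionV3 graph (weights=None, no download, arbitrary input size) purely to illustrate the calls; in the notebook you would run the same methods on the loaded face model:

```python
import tensorflow as tf

# Stand-in model: InceptionV3 graph with random weights, only to show the
# inspection API. In the assignment, call these methods on the loaded model.
model = tf.keras.applications.InceptionV3(weights=None, include_top=False,
                                          input_shape=(160, 160, 3))

model.summary()                              # layer-by-layer architecture
print("Input shape :", model.input_shape)    # expected image shape
print("Output shape:", model.output_shape)   # size of the final feature map
print("Parameters  :", model.count_params()) # total trainable + non-trainable
```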
In terms of how to train such a model yourself, we’ll have to do more research. There is a reference section at the end of the assignment that lists a number of sources. My suggestion would be to start by reading the FaceNet paper and see what they say about training. If you get lucky, perhaps they have a github repo.
Thanks Paul.
I re-read the paper and the course material on Siamese networks, both very interesting.
As I understand it, the principle is to run the same network on the anchor, positive, and negative inputs (three branches that share one set of weights), apply the triplet loss to the three embeddings, and back-propagate so that the shared weights are updated from all three branches. And then there is the critical choice of triplets (mining hard or semi-hard examples). A small sketch of the idea is below.
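Here is a minimal, self-contained sketch of that training loop. The tiny embedding network, the 96x96x3 input shape, and the random batch are my own placeholders (not the assignment's FRmodel); the point is only to show the triplet loss and the single set of shared weights applied to all three inputs:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss as in the FaceNet paper: pull anchor-positive together,
    push anchor-negative apart by at least the margin alpha."""
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_sum(tf.maximum(pos_dist - neg_dist + alpha, 0.0))

# Toy embedding network standing in for FaceNet (much smaller, untrained).
embedder = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128),
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1)),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(a_img, p_img, n_img):
    with tf.GradientTape() as tape:
        # The SAME model (shared weights) is applied to all three inputs,
        # so gradients from every branch flow into one set of parameters.
        a, p, n = embedder(a_img), embedder(p_img), embedder(n_img)
        loss = triplet_loss(a, p, n)
    grads = tape.gradient(loss, embedder.trainable_variables)
    optimizer.apply_gradients(zip(grads, embedder.trainable_variables))
    return loss

# Dummy batch just to show the call; real training needs carefully mined triplets.
a = tf.random.uniform((8, 96, 96, 3))
p = tf.random.uniform((8, 96, 96, 3))
n = tf.random.uniform((8, 96, 96, 3))
print(train_step(a, p, n).numpy())
```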
I also found this repo, which implements the same idea in PyTorch on the MNIST dataset.
Bye
Thank you for sharing what you learned!