Poor results on real data with the Face Verification model provided in the programming assignment

Hi there!

I’ve tried running the provided model on various photos of my own face. The model does not appear robust enough to maximize or minimize the distance between simple webcam faces. Could you tell me why it works so well on Kian and the other provided faces, while I can’t get even slightly good performance with my own photos?

Also:

  • I suspect that the distribution of the training dataset and that of my photos are not the same. Should the photos be normalized?
  • It is not mentioned which dataset the model was trained on (is it a known one?)
  • Is it possible to create a decent face verification app by oneself (I mean, without a huge amount of data, maybe using Google Colab)?
  • Does the model need to be this complicated to do face verification?
  • Is the size of the embedding a hyperparameter? (Is 128 better than 256?)

Thanks a lot for your help!


The model was trained on images that are normalized in a particular way, so it won’t perform well at all if you feed it “raw” images. This was described in the assignment.
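To make the point concrete, here is a minimal sketch of the kind of preprocessing a FaceNet-style model typically expects, plus L2-normalization of the resulting embedding so that distances are comparable. The exact recipe (resize target, standardization formula) is an assumption here; check the assignment’s own helper (e.g. its `img_to_encoding` function) for the real one:

```python
import numpy as np

def preprocess_face(img):
    """Per-image standardization, as used by many FaceNet ports.

    `img` is an HxWx3 uint8 face crop. NOTE: this recipe is illustrative;
    the assignment's own helper defines the exact normalization to use.
    """
    x = img.astype(np.float32)
    # Zero mean, unit variance across the whole image.
    x = (x - x.mean()) / max(float(x.std()), 1e-6)
    return x

def l2_normalize(embedding):
    """Scale an embedding to unit length so Euclidean distances line up."""
    return embedding / np.linalg.norm(embedding)

# Synthetic 160x160 "face" just to show the shapes and value ranges involved.
face = np.random.randint(0, 256, (160, 160, 3), dtype=np.uint8)
std_face = preprocess_face(face)          # mean ~ 0, std ~ 1
emb = l2_normalize(np.random.randn(128))  # unit-length 128-d vector
```

Feeding raw `[0, 255]` pixel values into a network trained on standardized inputs is exactly the kind of train/test distribution mismatch you suspected in your first bullet point.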

I have not personally looked into how the FaceNet model was trained, but on the other thread where you asked this question I gave a link to a tutorial on this subject by Jason Brownlee. If you want a deeper understanding of all this, reading that article would be a good start.

Yes, the dimension of the “embedding space” is a hyperparameter. But of course changing it means you are no longer using the pretrained FaceNet model: you would have to redo literally everything, including all the training. That is a highly non-trivial effort. Google trained the model, and they have more compute resources than you do. :nerd_face: :laughing:
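Whatever embedding size the trained model produces, the verification step itself is just a distance threshold on the embeddings. A hedged NumPy sketch (the function name, the 0.7 threshold, and the random "embeddings" are all illustrative, not the assignment's actual values):

```python
import numpy as np

EMBEDDING_DIM = 128  # a hyperparameter, fixed by the network's final layer at training time

def verify(emb_query, emb_reference, threshold=0.7):
    """Accept the identity claim if the two unit-length embeddings are close.

    The threshold is illustrative; in practice it is tuned on a
    validation set for the specific trained model.
    """
    dist = float(np.linalg.norm(emb_query - emb_reference))
    return dist < threshold, dist

rng = np.random.default_rng(0)

# Stand-in "embeddings": a reference identity, a slightly perturbed copy
# (same person, new photo), and an unrelated random vector (different person).
ref = rng.normal(size=EMBEDDING_DIM)
ref /= np.linalg.norm(ref)

same = ref + rng.normal(scale=0.05, size=EMBEDDING_DIM)
same /= np.linalg.norm(same)

other = rng.normal(size=EMBEDDING_DIM)
other /= np.linalg.norm(other)

ok_same, d_same = verify(same, ref)      # small distance -> accepted
ok_other, d_other = verify(other, ref)   # large distance -> rejected
```

In high dimensions, two unrelated unit vectors are nearly orthogonal (distance near sqrt(2)), while embeddings of the same well-preprocessed face should land much closer, which is why a single scalar threshold can work at all.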