Hello, I understood the lectures on face recognition using Siamese networks,
but I have a question: in real life, why is there no built-in Android app for face recognition on smartphones? I mean, every modern phone has a front camera, so why not train a model that works on all phones, or even laptops if it were made cross-platform?
IMHO, I think it is because here we learn a sort of 'junior' version of it. A company would need enterprise-grade recognition, protection against law-enforcement searches, etc. And then, especially in today's environment (i.e. internet scraping), there is the question of where the dataset of all those faces came from.
I mean, yes, there is 'Windows Hello', but I'm pretty sure that works with a different algorithm (i.e. not a neural net).
But Prof. Ng is showing us a safe way this can be done with a NN in a constrained environment (i.e. one where you already have pictures of all your employees).
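To make that concrete, here is a toy sketch of that constrained verification setup. The `embed()` function below is a made-up stand-in for the trained Siamese/FaceNet encoder (a real one would run the photo through the network and return a 128-d embedding); the names, fake "photos", and the 0.7 threshold are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(photo: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained Siamese encoder:
    flattens the input and returns a unit-length 128-d vector."""
    v = photo.flatten()[:128]
    return v / np.linalg.norm(v)

def verify(photo, claimed_name, database, threshold=0.7):
    """Accept if the photo's embedding is close to the one stored
    for the claimed identity (distance below the threshold)."""
    dist = np.linalg.norm(embed(photo) - database[claimed_name])
    return bool(dist < threshold)

# Fake arrays standing in for employee photos.
alice_photo = rng.standard_normal((12, 12))
mallory_photo = rng.standard_normal((12, 12))

database = {"alice": embed(alice_photo)}

print(verify(alice_photo, "alice", database))    # True: same photo, distance 0
print(verify(mallory_photo, "alice", database))  # False: a different face lands far away
```

The key point of the constrained setting is that the database only has to cover people you already know (your employees), so verification is a single distance comparison rather than an open-world search.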
I am an iPhone user and have never tried an Android phone, so I can only comment on the iPhone side of the world. On iPhones since sometime before the iPhone 11, they do use FR to unlock the phone, so they've obviously got a trained embedding model for faces that they use for that. There are also various apps on the phone with FR technology in them, like the Photos app, which can detect lots of things in pictures, including the faces of individual people.
So it’s just a question of how they make that functionality available to you. As Anthony says, they do have to take care that whatever apps they provide are safe and don’t present potential legal or privacy issues. What would be the point of making a generic app that you could point at anyone walking down the street and figure out who that is? There are some countries in the world where I’ve heard that the government actually does that, but it would seem a bit scary/inappropriate to deploy something like that on everyone’s personal smart phone.
I've never had an iPhone because I have always disliked Apple, but they also have a built-in IR camera, like the Microsoft Kinect/Intel RealSense did. I would presume they are using that.
*Should amend, Moz, the story was great; it was just 'not for me'.
I do not mean using it to detect people. I mean: ship a trained model, and when someone sets up the phone, take about ten images of that person, retrain on them, and use the result later to log in with their face. That way we would only need the front camera to log in, not RFID.
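For what it's worth, with the Siamese approach from the lectures you wouldn't even need to retrain per user: enrollment can just run the ~10 photos through the frozen encoder and store (e.g. average) their embeddings. A toy sketch, where `embed()` is a made-up stand-in for the real network and the 0.7 threshold is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(photo: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the frozen Siamese encoder:
    returns a unit-length 128-d embedding."""
    v = photo.flatten()[:128]
    return v / np.linalg.norm(v)

def enroll(photos):
    """Average the embeddings of the ~10 enrollment photos into one
    template; the network itself is never retrained."""
    template = np.mean([embed(p) for p in photos], axis=0)
    return template / np.linalg.norm(template)

def login(photo, template, threshold=0.7):
    """Unlock if the new photo's embedding is close to the template."""
    return bool(np.linalg.norm(embed(photo) - template) < threshold)

# Fake data: ten slightly varying "photos" of one user, plus a stranger.
face = rng.standard_normal(144)
user_photos = [face + 0.1 * rng.standard_normal(144) for _ in range(10)]
template = enroll(user_photos)

print(login(face + 0.1 * rng.standard_normal(144), template))  # True: same user
print(login(rng.standard_normal(144), template))               # False: a stranger
```

Averaging several enrollment embeddings makes the template less sensitive to any single bad photo, which is presumably why real systems ask you to move your face around during setup.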
Such a system could easily be tricked by someone just holding up printed images in front of the camera.
Thank you, got it!
That is very close to what Apple does, which I referred to in my post: when you first set up your iPhone, you give the model various images of your face interactively, and then it uses the derived embeddings to let you log in and unlock the phone with the camera.
Interesting point. I have not tried to trick my iPhone in that way, but a quick Google search finds some articles explaining that they also use IR for depth sensing to defeat that strategy. So it could potentially be a risk, but they apparently thought of that.