How are word embeddings and face encodings different?

I could not understand exactly how the two differ. In both cases we are trying to create an encoding of a specific input, so how exactly are they different?

I'm sorry, could you please clarify your question? It seems like you are asking about the difference between a word embedding (which is generated from a text corpus) and a face encoding (which is generated by a model such as FaceNet trained on images of human faces). These don't exhibit much similarity, since the inputs to the models generating these embeddings are entirely different.

The idea of an embedding is the same in both cases: find a fixed-length vector representation of the input feature(s) in its respective domain, such that similar inputs map to nearby vectors.
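To make that concrete, here is a minimal sketch showing that once you have embeddings, the downstream comparison looks identical regardless of whether the input was a word or a face. The vectors below are made up for illustration; in practice they would come from a model such as word2vec (for words) or FaceNet (for faces):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical word embeddings (values invented for illustration,
# e.g. as produced by word2vec or GloVe).
word_vec_king  = np.array([0.7, 0.2, 0.1])
word_vec_queen = np.array([0.6, 0.3, 0.1])

# Hypothetical face encodings of two photos of the same person
# (values invented, e.g. as produced by FaceNet).
face_vec_photo1 = np.array([0.1, 0.9, 0.4])
face_vec_photo2 = np.array([0.2, 0.8, 0.5])

# In both domains, similarity of inputs is measured the same way:
# by distance between their embedding vectors.
print("word similarity:", cosine_similarity(word_vec_king, word_vec_queen))
print("face similarity:", cosine_similarity(face_vec_photo1, face_vec_photo2))
```

The only real difference is upstream: the architecture and training data that produce the vectors (text corpora vs. labeled face images), not what the resulting embedding is used for.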