How do Embeddings in Tensorflow work?

Laurence said in this video that how the Embedding layer in TF works is out of scope for the video, but it would be really interesting to know. Which course covers this topic?

Do I understand correctly that this layer just looks at the similarity of the input digits (token IDs) and takes their order into account? So, for example, “cat” and “tac” would lie on the same line, but point in opposite directions?

Hey @someone555777,

I recommend exploring the NLP Specialization provided by Deeplearning.AI to delve deeper into how Embeddings work.

Let me provide a brief overview of Embeddings:

Embeddings serve as a method for representing categorical variables, such as words or tokens, in a continuous vector space. The fundamental concept involves mapping each unique category to a specific point in this space. In this vector space, the distances and directions between points encapsulate the relationships and similarities between the categories.

In the realm of words, embeddings are adept at capturing semantic relationships. Words with similar meanings or those commonly found in analogous contexts are typically represented as vectors situated close to each other in the embedding space. This enables a model to discern and understand the contextual relationships between different words.
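
As a minimal sketch of the idea (vocabulary size and dimensions are made up here), a `tf.keras.layers.Embedding` layer is essentially a trainable lookup table from integer token IDs to dense vectors:

```python
import tensorflow as tf

# Hypothetical vocabulary of 1,000 tokens, each mapped to an 8-dimensional vector.
embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=8)

# Integer token IDs for one "sentence" of length 4.
token_ids = tf.constant([[12, 7, 345, 2]])

vectors = embedding(token_ids)
print(vectors.shape)  # (1, 4, 8): one vector per token, read out of the weight matrix
```

Each row of the layer’s `(1000, 8)` weight matrix is the vector for one token ID, and those rows are trained together with the rest of the model.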

To gain a more in-depth understanding, check out the NLP Specialization by Deeplearning.AI.

Best,
Jamal


I passed that NLP course and didn’t see an answer to how pure TensorFlow Embeddings work, because we used Trax there. Also, you are talking mostly about word embeddings, while I’m asking about Embeddings in general.

Could you maybe tell me whether my thoughts are right?

Hey @someone555777,

Embedding layers in TensorFlow are primarily used for capturing semantic similarity based on the context in which words appear in a given corpus.

For example, in a well-trained embedding space, words that often appear together, or in similar contexts, will have similar vector representations. This allows the model to understand and capture semantic relationships between words.

However, if you want to capture the similarity based on the sequential order of characters (as in your “cat” and “tac” example), a regular embedding layer may not be sufficient. In such cases, you might need to explore character-level embeddings or other models that explicitly consider the order of characters, such as recurrent neural networks (RNNs) or transformers.
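
To make this concrete, here is a small sketch (with a made-up two-word vocabulary): at the word level, “cat” and “tac” are simply two different token IDs, and their vectors are independent rows of the weight matrix, so the layer by itself encodes no relationship between them.

```python
import tensorflow as tf

# Hypothetical word-level vocabulary: "cat" and "tac" are just two different token IDs.
vocab = {"cat": 0, "tac": 1}

embedding = tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=4)

cat_vec = embedding(tf.constant([vocab["cat"]]))
tac_vec = embedding(tf.constant([vocab["tac"]]))

# Two independent rows of the weight matrix; nothing relates them to each other
# unless training happens to pull them together (or push them apart).
print(cat_vec.numpy())
print(tac_vec.numpy())
```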

Regards,
Jamal

Oh, yes, I meant char-based Embeddings. For a standard embedding at the sentence level, an example could be
“dogs like cats” and “cats like dogs”. Each word will have exactly the same embedding in both sentences; it will just appear at a different position in the output of the Embedding layer, the same position it had in the input.
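
Something like this rough sketch is what I mean (the word-to-ID mapping is just made up for the example):

```python
import tensorflow as tf

# Toy word-to-ID mapping, only for illustration.
vocab = {"dogs": 0, "like": 1, "cats": 2}

embedding = tf.keras.layers.Embedding(input_dim=len(vocab), output_dim=4)

s1 = tf.constant([[vocab[w] for w in "dogs like cats".split()]])  # [[0, 1, 2]]
s2 = tf.constant([[vocab[w] for w in "cats like dogs".split()]])  # [[2, 1, 0]]

e1, e2 = embedding(s1), embedding(s2)

# "dogs" gets exactly the same vector in both sentences; only its position differs.
print(tf.reduce_all(e1[0, 0] == e2[0, 2]).numpy())  # True
```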

So the embedding values of each word will always be the same unless we re-train the Embedding layer? How does that training happen, by the way? It would be really interesting to know how the Embedding layer learns where to place each digit (token ID) in its feature space.

I was talking about the embedding space here. But I recall that the Embedding layer doesn’t have a fixed behaviour and is initialized randomly if we use it without training, for example.
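
If I understand it correctly (this is just my rough sketch with made-up sizes), the weight matrix starts from the default random initializer, and its rows are then updated by backpropagation like any other trainable weights:

```python
import tensorflow as tf

embedding = tf.keras.layers.Embedding(input_dim=10, output_dim=4)

# Calling the layer once creates (and randomly initializes) the weight matrix.
_ = embedding(tf.constant([0]))

# Before training: rows are just random numbers from the default 'uniform' initializer.
print(embedding.get_weights()[0][1:3])

# During training the matrix is an ordinary trainable variable, updated by gradients.
with tf.GradientTape() as tape:
    out = embedding(tf.constant([1, 2]))
    loss = tf.reduce_sum(out ** 2)  # dummy loss, just to show gradients flow

grads = tape.gradient(loss, embedding.trainable_variables)
tf.keras.optimizers.SGD(learning_rate=0.1).apply_gradients(
    zip(grads, embedding.trainable_variables))

# Rows 1 and 2 (the looked-up tokens) have moved; the other rows are unchanged.
print(embedding.get_weights()[0][1:3])
```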