Hi,

This line is clear: it gets the first layer in the model, so we use index 0, i.e. `layers[0]`.

```
# Get the embedding layer from the model (i.e. the first layer)
embedding_layer = model.layers[0]
```

Why do we have this strange design for the weights, `get_weights()[0]`? Why a list index and not just `get_weights()`?

In which cases can there be more elements?

```
# Get the weights of the embedding layer
embedding_weights = embedding_layer.get_weights()[0]
```

Hello @Taras_Buha

Welcome to our Community! Thanks for reaching out. We are here to help you.

First, it’s essential to know why we did that process. The main reason is to visualize the word embeddings; you can extract a lot of potentially useful information by looking at the weights of a layer of the model.

About the line:

`embedding_weights = embedding_layer.get_weights()[0]`

This function returns a list of NumPy arrays. For layers that have both weights and biases (a `Dense` layer, for example), the first array gives the layer’s weights, and the second array gives the biases. An `Embedding` layer has no bias, so its list contains a single array: the embedding matrix.

In this case, you are taking the first (and only) array, which gives you the layer’s weights. For a layer that does have a bias, you could get it with

```
embedding_bias = embedding_layer.get_weights()[1]
```
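
To see this in practice, here is a minimal sketch (the layer sizes are made up purely for illustration) comparing what `get_weights()` returns for an `Embedding` layer and for a `Dense` layer:

```
import tensorflow as tf

# Toy layers, just to illustrate what get_weights() returns.
embedding = tf.keras.layers.Embedding(input_dim=1000, output_dim=16)
dense = tf.keras.layers.Dense(units=4)

# Build the layers so their variables are created.
embedding.build(input_shape=(None,))
dense.build(input_shape=(None, 8))

# An Embedding layer has a single weight array: the embedding matrix.
print(len(embedding.get_weights()))      # 1
print(embedding.get_weights()[0].shape)  # (1000, 16)

# A Dense layer has two arrays: the kernel and the bias.
print(len(dense.get_weights()))          # 2
print(dense.get_weights()[0].shape)      # (8, 4) -> weights
print(dense.get_weights()[1].shape)      # (4,)   -> bias
```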

Hopefully, this helps.

With regards,

Hi @adonaivera

Thanks very much for the answer.

That’s clear about NumPy arrays.

My question was more about the TensorFlow syntax. Why not implement something more intuitive, like:

For the weights:

```
embedding_weights = embedding_layer.weights
```

For the bias:

```
embedding_bias = embedding_layer.bias
```

Best regards, Taras

Totally agree with you @Taras_Buha

We can suggest that for the TensorFlow community; thanks!
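
As a side note, Keras layers already expose something close to this: a `weights` property that returns the layer’s variables, plus per-layer attributes such as `Dense.kernel`, `Dense.bias`, and `Embedding.embeddings`. A minimal sketch (the layer here is made up for illustration):

```
import tensorflow as tf

# A toy Dense layer, just to illustrate the attribute-style access.
dense = tf.keras.layers.Dense(units=4)
dense.build(input_shape=(None, 8))

# `weights` returns a list of tf.Variable objects (not NumPy arrays).
print(dense.weights)  # [kernel variable, bias variable]

# Dense layers also expose `kernel` and `bias` directly;
# call .numpy() to convert a variable to a NumPy array.
kernel = dense.kernel.numpy()
bias = dense.bias.numpy()
```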

Are the word embeddings the output of the `Embedding` layer? The input of the `Embedding` layer is the integer matrix produced by preprocessing and tokenization. But in the code we just treat the weights as the `embedding_weights`, rather than the output of the Embedding layer, which would be `(weights * input + bias)`. Am I wrong in my understanding? I am so confused about visualizing the word embeddings here. Thank you very much!

Hi Ying,

I’m not entirely clear on your question. If I understood you correctly, would you like to know how to visualize the word embeddings in more detail?

With the function `embedding_layer.get_weights()`, you can get only the weights (and the bias, for layers that have one); you don’t get the input, because the input to the architecture is variable.
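
One point that may help with the confusion: an `Embedding` layer does not compute `weights * input + bias`. Its output is a row lookup: each integer token selects one row of the weight matrix, so each row of the weights *is* a word embedding. A toy sketch with made-up numbers:

```
import numpy as np

# A toy embedding matrix: a vocabulary of 5 words, 3 dimensions each.
embedding_weights = np.arange(15, dtype=np.float32).reshape(5, 3)

# Tokenized input: a sequence of integer word indices.
input_ids = np.array([2, 0, 4])

# The Embedding layer's output is a row lookup into the weight matrix,
# not a matrix multiply plus a bias.
output = embedding_weights[input_ids]
print(output.shape)  # (3, 3): one embedding vector per input token
```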

An excellent way to go deeper into the visualization is the following use case: you can use TensorBoard to visualize the weights and biases, or the model architecture, in a clearer way.
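
For the word-embedding case specifically, the usual export step is to write the vectors and the vocabulary to two TSV files and load them into the TensorBoard Embedding Projector. A minimal sketch, assuming you already have `embedding_weights` (the matrix from `get_weights()[0]`), a `reverse_word_index` dict mapping indices back to words, and a `vocab_size` (these names are assumptions, not from the original code):

```
import io

out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')

# `embedding_weights`, `reverse_word_index`, and `vocab_size` are assumed
# to exist already; index 0 is skipped because it is usually reserved
# for padding.
for word_index in range(1, vocab_size):
    word = reverse_word_index[word_index]
    vector = embedding_weights[word_index]
    out_m.write(word + '\n')
    out_v.write('\t'.join(str(x) for x in vector) + '\n')

out_v.close()
out_m.close()

# Load vecs.tsv and meta.tsv into the Embedding Projector
# (projector.tensorflow.org) to explore the embeddings visually.
```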

Hopefully, this helps.