In the second video, at minute 7, Anton Troynikov says that the embedding vector of the 10th text chunk has **358** dimensions. But when I run

`len(embedding_function([token_split_texts[10]][0]))`

I get **946**. Is my understanding of the number of dimensions of a vector representation of a text chunk wrong? Shouldn't it just be the length of the vector?

I think there is a mistake in your code: the closing bracket of the list is in the wrong place, so you are indexing into the one-element list instead of into the function's result.

Try this:

```
print(len(embedding_function([token_split_texts[10]])[0]))
```

Thank you. I modified my code to this:

```
arr = np.array(embedding_function(token_split_texts[10]))
arr.shape
```

and got `(946, 384)`.

I still don't understand how to interpret this. Are there 946 vectors of dimension 384? It still doesn't make sense to me that a single vector should have 358 dimensions.
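The shape difference comes down to bracket placement. Here is a minimal sketch with a stand-in `embedding_function` (hypothetical, for illustration only; the real one in the course wraps a sentence-transformer model): such a function embeds each item of the sequence it receives, so passing a bare string makes it iterate over the string's characters, while passing a one-element list embeds the whole chunk once.

```python
import numpy as np

# Stand-in embedding function (hypothetical): returns one 384-dim
# vector per ITEM of the input sequence, like the course's function
# returns one vector per document in the list it receives.
def embedding_function(texts, dim=384):
    rng = np.random.default_rng(0)
    return [rng.standard_normal(dim).tolist() for _ in texts]

# Pretend this string is token_split_texts[10].
chunk = "some chunk of text " * 10

# Bare string: the function iterates over its CHARACTERS,
# giving len(chunk) vectors of dimension 384.
per_char = np.array(embedding_function(chunk))
print(per_char.shape)   # (len(chunk), 384)

# One-element list: one 384-dim vector for the whole chunk.
per_chunk = np.array(embedding_function([chunk]))
print(per_chunk.shape)  # (1, 384)
```

Under this reading, `(946, 384)` would mean the chunk contains 946 characters and each character was embedded separately; the intended call embeds the chunk as a single document and yields one 384-dimensional vector.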