I am working on the first exercise of the course “Vector Databases: from Embeddings to Applications”. The embedding values I calculate differ from what is shown in the lesson. For example, when I execute the code inside the course environment, I see these values:

Embedding for the first ZERO is [-1.5423311 -0.21543208]

Embedding for the second ZERO is [-1.2455888 -0.17916496]

Embedding for the ONE is [0.7440857 2.2347686]

Whereas if I run the exact same sequence of code in a different Python environment, I get the following:

Embedding for the first ZERO is [-0.04116024 -0.12367383]

Embedding for the second ZERO is [ 0.02632285 -0.13507368]

Embedding for the ONE is [ 1.9754127 -0.02802112]

Since the input dataset and the parameters are the same, which factors influence the embedding values?
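In case it helps narrow things down, here is a minimal sketch (not the course's actual code; `embed` is a hypothetical stand-in) of what I suspect might be happening: if any step of the embedding pipeline uses random initialization (e.g. a dimensionality-reduction step or untrained weights), then two runs in different environments produce different values unless a random seed is fixed:

```python
import numpy as np

def embed(data, seed=None):
    # Hypothetical stand-in for a seed-sensitive embedding step:
    # project the data through a randomly drawn 2-D projection.
    rng = np.random.default_rng(seed)
    projection = rng.normal(size=(data.shape[1], 2))
    return data @ projection

data = np.array([[0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])

# Unseeded: each call draws a fresh projection, so results differ per run.
a = embed(data)
b = embed(data)
print(np.allclose(a, b))   # almost certainly False

# Seeded: fixing the seed makes the embedding reproducible across runs.
c = embed(data, seed=42)
d = embed(data, seed=42)
print(np.allclose(c, d))   # True
```

Is this the kind of nondeterminism that explains the difference, or is something else (library versions, hardware) at play?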