The uses of the tokenizer

Hi folks! I have a question that came up during my first week of the Generative AI with LLM course.

Once we have selected the tokenizer used to train the model, we must use that same tokenizer when we generate text. Let's assume we have a pre-trained model (e.g., GPT-3) that was already trained with OpenAI Embeddings. We now have a task to fine-tune this model on our own data, and we want to use HuggingFace Embeddings.
Is it okay to use HuggingFace Embeddings for the fine-tuning process, or should we stick with OpenAI Embeddings?
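To make the setup concrete, here is roughly what I have in mind, as a minimal sketch using the Hugging Face `transformers` library. I'm using `gpt2` as a stand-in checkpoint, since GPT-3 itself isn't available on the Hugging Face Hub; the checkpoint name and the sample sentence are just placeholders.

```python
# Minimal fine-tuning setup sketch: load a pre-trained checkpoint together
# with the tokenizer it was pre-trained with ("gpt2" is a stand-in here).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "gpt2"  # placeholder for whatever pre-trained model is used

# The tokenizer shipped with the checkpoint is the one the model was
# pre-trained with, so its vocabulary lines up with the model's embedding matrix.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# GPT-2's tokenizer has no pad token by default; reuse the EOS token so that
# batches of different lengths can be padded during fine-tuning.
tokenizer.pad_token = tokenizer.eos_token

# Our own fine-tuning data must go through that same tokenizer.
batch = tokenizer(
    ["An example sentence from our domain-specific data."],
    return_tensors="pt",
    padding=True,
)
print(batch["input_ids"])
```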

Is the tokenizer the same in both cases? If not, there will probably be differences in how they tokenize the sentences; if it is the same, it should be OK. I don't know how similar those two embedding techniques are, but if they are not similar, they could have a different impact on the overall performance of the model. One quick sanity check is to tokenize the same sentence with both tokenizers and compare the results, as in the sketch below.
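Here is a minimal sketch of that check, assuming both tokenizers can be loaded through the `transformers` library; the two checkpoint names (`gpt2` and `bert-base-uncased`) are just example choices to show that different tokenizers split the same text differently.

```python
# Compare how two different tokenizers split the same sentence.
from transformers import AutoTokenizer

text = "Fine-tuning changes the model, but not its vocabulary."

gpt2_tok = AutoTokenizer.from_pretrained("gpt2")               # byte-level BPE
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece

# The two lists of sub-word pieces will generally differ, and the token IDs
# come from different vocabularies, so IDs produced by one tokenizer are
# meaningless to a model trained with the other.
print(gpt2_tok.tokenize(text))
print(bert_tok.tokenize(text))

print(gpt2_tok.encode(text))
print(bert_tok.encode(text))
```

If the two outputs match piece for piece (and the vocabularies are identical), the tokenizers are effectively interchangeable; otherwise the fine-tuning data should be tokenized with the same tokenizer the model was pre-trained with.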