How to Simultaneously Use Sentence, Character, and Word Tokenization in AI Models

I have a question regarding the tokenization methods used in large language models like ChatGPT.
Specifically, I am interested in understanding how to simultaneously use

  1. sentence tokenization,
  2. character tokenization, and
  3. word tokenization

to process a 'single sentence'. For example, given the sentence:

"I'm really hungry. What should I have for lunch? I can't think of anything. Maybe I'll have ramen?"
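
To make the question concrete, this is roughly how I picture applying each of the three methods separately. It is only a sketch of my current understanding, using NLTK's sent_tokenize and word_tokenize plus Python's built-in list() for characters; I don't know whether this is how the methods are actually combined inside a model.

import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download('punkt')

text = "I'm really hungry. What should I have for lunch? I can't think of anything. Maybe I'll have ramen?"

sentences = sent_tokenize(text)   # sentence tokenization: one string per sentence
words = word_tokenize(text)       # word tokenization: punctuation split off as separate tokens
characters = list(text)           # character tokenization: every character (including spaces) is a token

print(sentences)
print(words)
print(characters)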

What criteria are used to choose and combine sentence, character, and word tokenization methods?
How do tokenization methods like Byte Pair Encoding (BPE) or WordPiece function in this process?
How does a model determine and optimize the use of these tokenization methods when processing specific text?
I would like to understand the detailed process of handling a sentence using these combined tokenization methods when developing an AI model.
Any references or advice on this topic would be greatly appreciated. Thanks!
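
For context, here is how I currently understand subword tokenizers are typically invoked. This is a sketch that assumes the Hugging Face transformers library is installed; my understanding is that the gpt2 checkpoint uses byte-level BPE and bert-base-uncased uses WordPiece, but please correct me if that framing is wrong.

from transformers import AutoTokenizer

text = "I'm really hungry. What should I have for lunch? I can't think of anything. Maybe I'll have ramen?"

bpe_tokenizer = AutoTokenizer.from_pretrained("gpt2")                      # byte-level BPE vocabulary
wordpiece_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # WordPiece vocabulary

print(bpe_tokenizer.tokenize(text))        # BPE pieces; 'Ġ' marks the start of a new word
print(wordpiece_tokenizer.tokenize(text))  # WordPiece pieces; '##' marks a continuation of a word

What I don't understand is how these subword vocabularies relate to the sentence-, word-, and character-level views above.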

Here is my attempt so far, which only covers the word-level step:

import nltk
from nltk.tokenize import word_tokenize

nltk.download('punkt')

text = "I'm really hungry. What should I have for lunch? I can't think of anything. Maybe I'll have ramen?"

word_tokens = word_tokenize(text)  # <- this is the part I was referring to
print(word_tokens)