At 4:00 in the video at https://learn.deeplearning.ai/advanced-retrieval-for-ai/lesson/2/overview-of-embeddings-based-retrieval, you say that the maximum embedding context window is 256 characters, but later you clarify that it is 256 tokens, which I assume is correct. Otherwise, picking a 1000-character default for splitting would make no sense.
Hi nealmcb,
Pretty sure it’s tokens.
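For what it's worth, the 1000-character default also lines up with the common rule of thumb of roughly 4 characters per token for English text, which would put a 1000-character chunk just under a 256-token limit. A rough sketch of that arithmetic (the 4-chars/token ratio is a heuristic, not exact; real counts require the model's actual tokenizer):

```python
def approx_tokens(text: str, chars_per_token: int = 4) -> int:
    """Estimate token count from character count.

    Assumes ~4 characters per token, a common heuristic for
    BPE tokenizers on English text; exact counts depend on the
    embedding model's tokenizer.
    """
    return len(text) // chars_per_token

# A 1000-character chunk, as produced by the default character splitter:
chunk = "x" * 1000
print(approx_tokens(chunk))  # -> 250, just under a 256-token limit
```

So a 1000-character split keeps chunks within a 256-token window on typical English text, though unusually token-dense text could still overflow it.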