It’s actually a good idea to try your own questions to understand the feedback functions. For your example I would expect low context relevance and groundedness, but still a good answer relevance score (because the LLM falls back on its own pretrained knowledge), as long as the topic is related to the indexed docs. If it’s not related at all, there may be intermediate prompts that prevent an answer (which is usually what you want in a real-world scenario).
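If it helps, here is a rough sketch of how you could run the three feedback functions ad hoc on one of your own question/context/answer triples, assuming a recent trulens_eval version with an OpenAI provider (the question, context, and answer strings are placeholders, and the exact method names may differ between releases):

```python
# Sketch: score one question/context/answer triple with the three feedback functions.
# Assumes trulens_eval with an OpenAI provider; method names can vary by version.
from trulens_eval.feedback.provider import OpenAI

provider = OpenAI()

question = "Your own question here"            # placeholder
retrieved_context = "A chunk your retriever returned"  # placeholder
answer = "The answer the LLM produced"          # placeholder

# Context relevance: is the retrieved chunk relevant to the question?
context_rel = provider.context_relevance(question, retrieved_context)

# Groundedness: is the answer supported by the retrieved context?
groundedness_score, reasons = provider.groundedness_measure_with_cot_reasons(
    retrieved_context, answer
)

# Answer relevance: does the answer address the question at all,
# even if the context didn't support it?
answer_rel = provider.relevance(question, answer)

print(context_rel, groundedness_score, answer_rel)
```

Running this on an off-topic question should make the pattern above visible: context relevance and groundedness drop while answer relevance can stay high.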