LangChain question

I was wondering how I might learn more about how LangChain constructs the query it uses to retrieve the relevant chunks that get inserted into the prompt sent to the LLM.
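For context on what's usually happening under the hood: in a typical RetrievalQA-style chain, LangChain does not rewrite your question at all by default; the raw question string is used verbatim as the retrieval query, embedded, and matched against the stored chunk embeddings by similarity, and the top chunks are stuffed into the prompt. The sketch below is not LangChain's actual code, just a minimal, self-contained illustration of that mechanism using a toy bag-of-words "embedding" in place of a real embedding model:

```python
# Minimal sketch (NOT LangChain's code) of the default retrieval step in a
# RetrievalQA-style RAG chain: the user's question is the query, verbatim.
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. A real chain would call
    # an embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # The question itself is the query -- no rewriting by default.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # Retrieved chunks are concatenated into the context section of the prompt.
    context = "\n\n".join(retrieve(question, chunks))
    return f"Answer using the context below.\n\n{context}\n\nQuestion: {question}"

chunks = [
    "LangChain retrievers embed the raw question and rank chunks by similarity.",
    "Vector stores hold one embedding per document chunk.",
    "Unrelated text about cooking pasta.",
]
print(build_prompt("How does LangChain embed the question?", chunks))
```

To see what LangChain actually sends in your setup, enabling its debug/verbose logging or reading your retriever's source (e.g. the method that fetches relevant documents on the vector-store retriever) is probably the most direct route, since the exact API varies across LangChain versions.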