Groundedness with LangChain

Is it possible to apply the RAG Triad of metrics to a LangChain implementation instead of LlamaIndex?
I see that when you define a feedback function, as in

qs_relevance = (
    Feedback(provider.qs_relevance, name="Context Relevance")
    .on_input()
    .on(context_selection)
    .aggregate(np.mean)
)

you make use of a selector for the context selection:

from trulens_eval import TruLlama

context_selection = TruLlama.select_source_nodes().node.text

But I can’t see the equivalent way to select the relevant documents returned by a vector store retriever in LangChain.

Hello @menajosep, do you have a solution? I want to do the same thing with LangChain instead of llama_index.