Groundedness with LangChain

Is it possible to apply the RAG Triad of metrics to a LangChain implementation instead of LlamaIndex?
I see that when you define a feedback function, like in

import numpy as np
from trulens_eval import Feedback

# `openai` here is a TruLens OpenAI feedback provider instance
qs_relevance = (
    Feedback(openai.qs_relevance,
             name="Context Relevance")
    .on_input()
    .on(context_selection)
    .aggregate(np.mean)
)

you make use of a selector for the context selection:

from trulens_eval import TruLlama

context_selection = TruLlama.select_source_nodes().node.text

But I can’t see the equivalent for retrieving the relevant documents from a vector retriever in LangChain.

Hello @menajosep, do you have a solution? I want to do the same thing with LangChain instead of llama_index.
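For reference, here is a minimal sketch of what the LangChain equivalent might look like. It assumes trulens_eval exposes a `TruChain.select_context()` classmethod for LangChain apps (the counterpart of `TruLlama.select_source_nodes()`), and that `rag_chain` is an existing LangChain RAG chain and `openai` a TruLens feedback provider instance; this is an illustrative sketch, not a verified implementation.

```python
import numpy as np
from trulens_eval import Feedback, TruChain

# Assumption: TruChain.select_context() points at the documents the
# chain's retriever returned, analogous to TruLlama.select_source_nodes().
# `rag_chain` is a hypothetical pre-built LangChain RAG chain.
context_selection = TruChain.select_context(rag_chain)

# Same feedback definition as for LlamaIndex, only the selector changes.
qs_relevance = (
    Feedback(openai.qs_relevance,
             name="Context Relevance")
    .on_input()                 # question: the chain's input
    .on(context_selection)      # context: the retrieved documents
    .aggregate(np.mean)         # average relevance over retrieved chunks
)
```

The rest of the RAG Triad (answer relevance, groundedness) would reuse the same `context_selection` where a context argument is needed.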