Unclear Whether Lesson 6 Demonstrates True Vector Search + Knowledge Graph Enrichment

I’m trying to better understand the intent of Lesson 6. It seems to be showing how a Neo4j knowledge graph can enrich the results of a vector-based retrieval system. But when I looked closely at the two chains — plain_chain and investment_chain — I realized I wasn’t sure whether both are actually doing vector search. I’d like to share my understanding and ask for clarification in case I’m missing something.

Here’s how I see the two setups:

# Import paths for recent LangChain releases; the lesson notebook may import
# these from the older langchain.* locations instead.
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

# plain_chain: creates/reuses embeddings on the Chunk nodes and retrieves by
# vector similarity over the textEmbedding property
retriever = Neo4jVector.from_existing_graph(
    embedding=OpenAIEmbeddings(),
    ...  # connection and index arguments omitted here
    text_node_properties=["text"],
    embedding_node_property="textEmbedding"
).as_retriever()

# investment_chain: reuses an existing vector index, but shapes what is
# returned with a custom Cypher retrieval_query
retriever = Neo4jVector.from_existing_index(
    embedding=OpenAIEmbeddings(),
    ...  # connection and index arguments omitted here
    text_node_property="text",
    retrieval_query=investment_retrieval_query
).as_retriever()

From what I understand, plain_chain embeds the user’s question and performs a vector similarity search against the textEmbedding property on the Chunk nodes. The investment_chain retriever, however, relies on a custom Cypher query and doesn’t seem to use the question embedding or textEmbedding at all. That would mean it isn’t doing a vector search: it’s retrieving a fixed set of investor-related chunks, regardless of what the question is.
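
One way to check that would be to run both retrievers on a couple of very different questions and see whether the second one’s results actually change. Here is a minimal sketch, assuming the two retrievers above are assigned to plain_retriever and investment_retriever (my names, not the lesson’s) and a recent LangChain where retrievers expose .invoke():

# Check whether investment_chain's retriever actually reacts to the question.
# plain_retriever / investment_retriever are my own names for the two
# retrievers shown above, not identifiers from the lesson notebook.
questions = [
    "Which investment firms hold shares in this company?",
    "What cybersecurity risks does the filing mention?",
]

for question in questions:
    print(f"\n=== {question} ===")
    for name, retriever in [("plain", plain_retriever),
                            ("investment", investment_retriever)]:
        docs = retriever.invoke(question)  # older LangChain: get_relevant_documents
        top = docs[0].page_content[:80] if docs else "(no results)"
        print(f"{name}: {len(docs)} docs; top hit starts with: {top!r}")

If the investment retriever returns essentially the same documents for both questions, that would suggest the question embedding isn’t influencing the results.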

If that’s correct, then the second chain isn’t really demonstrating RAG in the usual sense. It could also lead to much higher token usage since it may return more chunks that aren’t filtered by relevance. I was expecting the lesson to show how to combine both — semantic relevance via vector search plus graph-based enrichment — so I wanted to raise this in case others had the same confusion.