Don't understand where the prompt goes with ConversationalRetrievalChain

In section 06_Chat, for the Conversation Chain we have:

from langchain.chains import ConversationalRetrievalChain

# llm, vectordb, and memory are defined earlier in the notebook
retriever = vectordb.as_retriever()
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory
)

And as such we have lost the prompt. What if we still want a prompt to help guide the answer? Does it somehow get stuffed into the memory?
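For reference, here is roughly what I was hoping to do: a sketch that assumes combine_docs_chain_kwargs is the right hook for passing a QA prompt (the template text below is my own, not from the notebook):

from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain

# hypothetical prompt text, not from the course; it must use the
# {context} and {question} variables the combine-docs chain expects
template = """Use the following pieces of context to answer the question.
If you don't know the answer, just say that you don't know.

{context}

Question: {question}
Helpful Answer:"""
QA_PROMPT = PromptTemplate.from_template(template)

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(),
    memory=memory,
    # assumption: this forwards the prompt to the internal "stuff" documents chain
    combine_docs_chain_kwargs={"prompt": QA_PROMPT},
)

Is that the intended way to do it, or does the memory play a role here?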

Also, “chat_history” isn’t specified in the call that produces the result. Should it be:

result = qa({"question": question, "chat_history": chat_history})
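My guess, assuming the attached memory supplies the chat history on its own, is that only the question is needed, something like:

# my understanding (could be wrong): with memory attached, the chain
# loads chat_history from memory itself, so only the question is passed
result = qa({"question": question})
print(result["answer"])

# without memory, I would expect to pass the history explicitly:
# result = qa({"question": question, "chat_history": chat_history})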

Great, efficient course BTW.