Hi, I am just gathering steam in GenAI, so I am trying to understand the role of the LLM in L5 on chaining prompts. I am wondering how crucial the LLM's role really was.
It seems to me that we could just as well have implemented the chatbot in plain Python, perhaps with some additional code, and without the LLM. What role did “gpt-3.5-turbo” play? All the information required to answer the query was already in the prompt itself; nothing really came from the LLM.
Or is my understanding correct that the LLM merely facilitated the dialogue in natural language, while the information (categories + products) was all contained in the prompt(s)?
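
To make concrete what I mean, here is a rough sketch of how I understand the L5 setup. This is just my illustration, not the course's exact code: the catalog contents, product names, and the newer `OpenAI()` client call are my assumptions, and the course notebook may use the older `openai.ChatCompletion` interface instead.

```python
# Minimal sketch: the product catalog lives entirely in the prompt, and
# gpt-3.5-turbo's job is to map the user's free-form question onto that
# catalog as structured output.
from openai import OpenAI  # assumes openai>=1.0

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical subset of the catalog that the prompt carries.
catalog = """
Computers and Laptops: TechPro Ultrabook, BlueWave Gaming Laptop
Smartphones and Accessories: SmartX ProPhone, MobiTech PowerCase
"""

system_message = f"""
You will be provided with a customer query.
Output a Python list of objects, each with "category" and "products" keys,
using ONLY the categories and products listed below.

Allowed categories and products:
{catalog}
"""

user_query = "Tell me about the SmartX ProPhone. Also, do you sell laptops?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_query},
    ],
)

# The facts (categories, products) all come from the prompt, but the
# natural-language understanding (e.g. mapping "do you sell laptops?" to
# the "Computers and Laptops" category) comes from the model.
print(response.choices[0].message.content)
```

So the factual content is supplied by me, and the LLM's contribution seems to be interpreting the free-form query and producing the structured/natural-language response. Is that the right way to think about it?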
Cheers
Ram