Appending answer to prompt in PAL models

In the above slide from 5:20 of Week 3 - Program-aided language model (PAL): after appending the answer generated by Python back to the prompt, why do we want to run the LLM again? Why not just use the answer-appended prompt as the completion?


It's a good question and I wondered about it too. I think there are two reasons. First, your communication channel is the LLM: it knows how to present and separate the information into a readable completion, rather than returning a raw prompt with an answer tacked on. Second, the final LLM call puts the computed answer into the model's context, so it can be used in further turns of the conversation.
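To make the two-pass flow concrete, here is a minimal sketch of the PAL loop. The `llm()` helper is hypothetical (a stand-in for a real model call that returns canned strings for the demo), and `exec()` is used to run the generated Python, which is unsafe outside a sandbox:

```python
def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; returns canned
    # responses so the sketch is runnable without a model.
    if "PYTHON ANSWER: 47" in prompt:
        # Second pass: the model sees the computed answer in context
        # and phrases a natural-language completion.
        return "Roger has 47 tennis balls in total."
    # First pass: the model emits Python reasoning code instead of
    # doing the arithmetic itself.
    return "result = 5 + 2 * 21"

def pal_answer(question: str) -> str:
    # 1) First LLM pass: generate Python code that solves the problem.
    code = llm(question)
    # 2) Run the code with a real interpreter to get an exact answer.
    namespace = {}
    exec(code, namespace)
    answer = namespace["result"]
    # 3) Append the computed answer to the prompt and call the LLM
    #    again, so it can present the result as a proper completion.
    final_prompt = f"{question}\n{code}\nPYTHON ANSWER: {answer}\n"
    return llm(final_prompt)

print(pal_answer("Roger has 5 balls and buys 2 cans of 21. How many?"))
# → Roger has 47 tennis balls in total.
```

Without step 3, the user would receive the prompt plus `PYTHON ANSWER: 47` verbatim; the second pass is what turns the exact arithmetic into a coherent answer the model also "remembers" going forward.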