Variable response content with consistent format?

Hi,

I have initialized my GPT model with temperature 0.7 because I need the content of its responses to vary. However, with this setting it occasionally returns a different output format, even though I have specified in the prompt exactly which output format I want. What I want is for the output format to always be exactly the same while the content of the response differs.
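For reference, the call is roughly like this (a minimal sketch, assuming the openai Python client; the model name, format instruction, and prompt are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    temperature=0.7,      # higher temperature so the content varies
    messages=[
        # Placeholder instruction; the real prompt spells out the exact format.
        {"role": "system", "content": "Answer in exactly this format:\nTitle: <title>\nBody: <body>"},
        {"role": "user", "content": "Describe a busy market in two sentences."},
    ],
)
print(response.choices[0].message.content)
```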

I was wondering how I can achieve this. Would LangChain's ChatPromptTemplate be able to help with this?

Thank you!

Have you tried in-context learning? Providing one or a few example task/completion pairs in the desired format in your prompt might help.
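Since you mentioned ChatPromptTemplate, something along these lines could work (a minimal sketch, assuming recent langchain-core and langchain-openai packages; the example pairs, format, and model name are made up):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# The few-shot examples are baked into the template as prior human/ai turns,
# so every call shows the model completions that already follow the format.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer in exactly this format:\nTitle: <title>\nBody: <body>"),
    ("human", "Describe a thunderstorm."),
    ("ai", "Title: Thunderstorm\nBody: Dark clouds roll in and rain hammers the roof."),
    ("human", "Describe a quiet library."),
    ("ai", "Title: Quiet Library\nBody: Pages turn softly between long rows of shelves."),
    ("human", "{question}"),  # the actual request comes last
])

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)  # placeholder model name
chain = prompt | llm
print(chain.invoke({"question": "Describe a busy market."}).content)
```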

Hi Leonardo,

Thank you for your response. Yes, I tried that. It works most of the time, but it still fails occasionally (interestingly, if I prompt the GPT with a highly creative persona, it performs much better than when I give it a very analytical persona).

How I've solved it for now is that I just put the response call in a while loop and only save the response if it matches the required format. This is a bit of a waste money-wise, but oh well.
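Roughly like this (a minimal sketch, assuming the openai Python client and a JSON output format; the required keys, model name, and retry limit are just illustrative):

```python
import json
from openai import OpenAI

client = OpenAI()

def get_formatted_response(user_prompt: str, max_attempts: int = 5):
    """Retry the call until the reply parses as the expected JSON format."""
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=0.7,
            messages=[
                {"role": "system", "content": 'Reply only with JSON: {"title": ..., "body": ...}'},
                {"role": "user", "content": user_prompt},
            ],
        )
        text = response.choices[0].message.content
        try:
            data = json.loads(text)
            if {"title", "body"} <= data.keys():  # required keys are present
                return data
        except (json.JSONDecodeError, AttributeError):
            pass  # wrong format: pay for another attempt
    return None  # gave up after max_attempts

result = get_formatted_response("Describe a busy market.")
```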

Hi Ivar,

It just occurred to me that you could also try the top-k and top-p sampling parameters to cut off completions with very low probability. That way you keep some creativity without answers that stray too far from the norm. See if it works.
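For instance (a sketch; the OpenAI chat completions API exposes top_p but not a top_k parameter, and the values below are just starting points to experiment with):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=0.7,       # keep some variation in the content
    top_p=0.8,             # drop the long tail of very unlikely tokens
    messages=[
        {"role": "system", "content": "Answer in exactly this format:\nTitle: <title>\nBody: <body>"},
        {"role": "user", "content": "Describe a busy market."},
    ],
)
print(response.choices[0].message.content)
```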