Lab_2_fine_tune_generative_ai_model

In Lab 2, we run zero-shot inference twice on example 200 with the original model (the pretrained FLAN-T5 model). Why do we get different completion results each time? What am I missing?

1.3 - Test the Model with Zero Shot Inferencing
MODEL GENERATION - ZERO SHOT:
#Person1#: I’m thinking of upgrading my computer.

2.3 - Evaluate the Model Qualitatively (Human Evaluation)

ORIGINAL MODEL:
#Person1#: You’d like to upgrade your computer. #Person2: You’d like to upgrade your computer.

The difference comes from the decoding settings in the generation call, in particular temperature and sampling. With greedy decoding (sampling disabled) the model always picks the single most probable next token, so it returns the same output every run. With sampling enabled, the model draws the next token at random from its top candidates, and the temperature controls how peaked or flat that distribution is: low temperature concentrates probability on the top token, while higher temperature makes less likely tokens more likely to be picked, so repeated runs can produce different completions.

And remember, LLMs output a probability distribution over the next token, so any sampling-based decoding is inherently non-deterministic.
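The difference between greedy decoding and temperature sampling can be sketched with a toy next-token picker (a minimal standard-library illustration, not the lab's actual decoding code; in the Hugging Face API the equivalent switches are the `do_sample` and `temperature` arguments to `model.generate`):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; temperature rescales the logits first."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=None):
    """Greedy argmax when temperature is None; otherwise sample from the distribution."""
    if temperature is None:
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax(logits, temperature)
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(logits) - 1  # guard against floating-point rounding

logits = [2.0, 1.0, 0.5]  # toy "next-token" scores for a 3-token vocabulary

# Greedy decoding: the same token index on every call (deterministic).
greedy_picks = {pick_token(logits) for _ in range(100)}

# Temperature sampling: different indices across calls (non-deterministic).
random.seed(0)
sampled_picks = {pick_token(logits, temperature=1.0) for _ in range(200)}

print(greedy_picks)   # always a single index
print(sampled_picks)  # more than one index
```

If the notebook's two zero-shot cells used sampling (or different generation configs), that alone explains the differing completions for example 200.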