Remember: Understand LLM output variability — don't just report "Bug: My output is different from Andrew's video."
Explanation: Sampling parameters such as Temperature, Top-P, and Seed — together with the stochastic nature of LLMs themselves — mean your outputs (answers) will differ from the ones shown in the instructor's notebook video. That is expected!
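To make this concrete, here is a minimal, self-contained sketch of how temperature and seed influence sampling. The toy vocabulary and logits are hypothetical (a real LLM samples over tens of thousands of tokens), but the math is the same: temperature rescales the logits before the softmax, and the seed controls the random draw.

```python
import math
import random

# Hypothetical toy vocabulary with made-up logits; illustrative only.
logits = {"cat": 2.0, "dog": 1.5, "fish": 0.5}

def softmax(scores, temperature=1.0):
    # Low temperature sharpens the distribution (more deterministic);
    # high temperature flattens it (more varied outputs).
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample(scores, temperature=1.0, seed=None):
    # A fixed seed makes the draw reproducible; different seeds
    # (or no seed) produce the run-to-run variation you see in practice.
    rng = random.Random(seed)
    probs = softmax(scores, temperature)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Same seed, same parameters -> identical output every run.
assert sample(logits, temperature=1.0, seed=42) == sample(logits, temperature=1.0, seed=42)
```

Note that hosted APIs add further sources of nondeterminism (model updates, batching, hardware), so even a fixed seed is usually treated as best-effort reproducibility, not a guarantee.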
Expected vs. concerning differences:

- Expected: slight wording variations, creative differences
- Review needed: completely wrong answers, errors, non-functional code
Troubleshooting checklist:

- Verify your model version matches the one used in the course
- Test multiple runs before concluding something is broken
The Reality Check: In generative AI, different outputs are a feature, not a bug. Unless the code throws an error, variation in the wording of a response is expected behavior.