How reliable is the output of an LLM compared to logical computation?

In a logical computation, the output is predictable, consistent, and reliable: the program behaves exactly as it was written to behave. With an LLM, the response is stochastic. We depend on a black box inside which the information is processed, and what the response will be is unknown in advance. The only thing I control is the prompt. So what has to be included in the prompt to get a consistent, reliable, and predictable response?
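The contrast above can be sketched with a toy model of how an LLM picks its next token. This is a minimal illustration, not a real model: the logits are made-up scores, and `sample_token` is a hypothetical helper. The point is that greedy decoding (temperature 0) is as deterministic as ordinary computation, while temperature sampling introduces the randomness the paragraph describes.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw scores.

    temperature == 0 means greedy decoding: always take the argmax,
    so the output is fully deterministic. Otherwise, sample from the
    temperature-scaled softmax distribution, which is stochastic.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(logits) - 1

logits = [2.0, 1.5, 0.5]  # toy next-token scores; index 0 is most likely

# Greedy decoding: identical output on every run, like a logical computation.
greedy_runs = [sample_token(logits, 0, random.Random()) for _ in range(5)]
print(greedy_runs)  # [0, 0, 0, 0, 0]

# Temperature sampling: outputs can differ across runs unless the RNG is seeded.
sampled_runs = [sample_token(logits, 1.0, random.Random(i)) for i in range(5)]
print(sampled_runs)
```

In practice, many LLM APIs expose a `temperature` parameter (and sometimes a seed) for exactly this reason: lowering the temperature toward 0 trades creativity for reproducibility, which is one concrete lever beyond prompt wording for getting more consistent responses.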