In this lab, why are we asking the LLM to produce code with "no hard-coded values" if the Python code is going to be generated per user request?
Because of that sentence in the prompt, the code the LLM generates is artificially complex. For example:
import re

shape_match = re.search(r"\b(round)\b", user_text, re.IGNORECASE)
shape_word = shape_match.group(1).lower() if shape_match else None
It uses a regular expression to find the word "round", then pretends it doesn't know what word it was looking for and retrieves it from the match. Later on it uses shape_word as one of the conditions for the query. But all along we know the value should be "round".
Similarly for price_max. We know the maximum price from the prompt, and the LLM could have hard-coded it in the Python code, yet we are trying to parse it from the original query. We end up generating far more code than necessary; it's not really generic, and even if it were, it doesn't need to be - we'll never reuse this code.
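To make the comparison concrete, here is a minimal sketch of the kind of generic parsing code I mean. The function name, the price pattern, and the sample query are my own illustration, not the lab's actual generated code:

```python
import re

def parse_filters(user_text):
    # Illustrative sketch of LLM-generated "no hard-coded values" code.
    # The shape is extracted via regex even though the pattern itself
    # already fixes the answer to "round".
    shape_match = re.search(r"\b(round)\b", user_text, re.IGNORECASE)
    shape_word = shape_match.group(1).lower() if shape_match else None

    # price_max is likewise "parsed" from the query, even though the
    # value is already known from the prompt (pattern is hypothetical).
    price_match = re.search(r"\$?(\d+(?:\.\d{2})?)", user_text)
    price_max = float(price_match.group(1)) if price_match else None
    return shape_word, price_max

print(parse_filters("round sunglasses under $100"))  # → ('round', 100.0)
```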
Am I missing something?
Thanks.
I modified the suggested prompt to say "You can hardcode the values in the python code." (instead of "DO NOT hardcode…"), and the Python code is much shorter and simpler, and still functional.
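For comparison, with hard-coding allowed the generated filter collapses to something like the following. The product list here is hypothetical stand-in data for the lab's JSON documents, and the exact code the LLM produces will differ:

```python
# Hypothetical product data standing in for the lab's JSON documents.
products = [
    {"name": "A", "shape": "round", "price": 80.0},
    {"name": "B", "shape": "square", "price": 60.0},
    {"name": "C", "shape": "round", "price": 150.0},
]

# With hard-coding allowed, the values come straight from the prompt:
shape_word = "round"
price_max = 100.0  # assumed maximum price from the prompt

results = [p for p in products
           if p["shape"] == shape_word and p["price"] <= price_max]
print([p["name"] for p in results])  # → ['A']
```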
Hi Chanquete,
Thanks for your great question.
It seems to me that this is done in line with best practices, to avoid losing flexibility or introducing bugs through hard-coding. It is hard to foresee all use cases, and a prompt may be reused in a different context.
So I would say that, yes, hard-coding can be more efficient in specific cases, but instructing the model not to do so is in line with best practices.
hi @Chanquete
There was another learner who felt that hard-coding is a bug in the code, which I disagreed with and explained that it is just one way of implementing the code. I agree with your points: if the LLM can produce much simpler code that still solves the problem at hand, I would prefer anything that doesn't throw an error.
But just to confirm, I went through module 5, and what I found is that the developers don't hard-code the round-shaped sunglasses. They probably want the LLM to follow section 2.3, "Try a sample prompt (planning in code)", which does case-sensitive matching of "round" against the JSON documents it is reviewing.
So you are neither wrong nor entirely correct (on your point that the prompt doesn't need to forbid hard-coding); it's just the way the developers wanted the agent to reliably catch the word "round" in the reviewed content. As @reinoudbosch said, it's all about flexibility, and they probably tried prompts ranging from generic to more specific and chose the one least likely to produce conflicts in the LLM's response.
I do agree with your max_price argument, but letting the agent review the original query has a benefit: if the customer asks a confusing or incomplete question, or the LLM hallucinates information from the prompt, the LLM can still go back to the original query, review it, and respond.
Hope this makes sense; feel free to discuss.
Regards
DP