Hi there! I’ve been trying to make a meal plan based on a daily macronutrient target, but after 3 days of trying different prompts, ChatGPT keeps making simple arithmetic mistakes and produces meal plans with significantly inaccurate totals.
Here’s the latest prompt I’ve used:
"Pretend you are an expert in health, nutrition and fitness (You’re supposed to read the top scientific papers around health and fitness).
Your task is to provide a meal plan (1 day) of 3 high-protein meals whose summed macronutrients match this macronutrient target:
“Protein: 169g”
“Carbs: 94g”
“Fats: 50g”
To complete this task, do the following:
First, look for high-protein ingredients and meals that are tasty, and determine the exact protein, carb and fat content of each ingredient.
Then, make the meal plan, adjusting the quantity of each ingredient so the sum of the meals’ macronutrients matches the previously indicated “total macronutrients” target.
Then, compare your proposed meal plan’s total macronutrients with the previously indicated “total macronutrients” target. Don’t finalize the ingredient amounts until you have verified the sum matches the “total macronutrients” target.
Adjust the ingredient amounts so their proteins, carbs and fats match the “total macronutrients” target.
This meal plan is intended to help people have a meal plan that matches their macronutrient goals, please make sure the meal plan matches the macronutrient target. Use a professional and friendly tone.
Provide the output in table format mentioning the total macronutrients of each meal."
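For what it’s worth, the verification step your prompt asks for is plain arithmetic, which is exactly what a few lines of code handle reliably and an LLM does not. Here is a minimal sketch of that check; the meal names and gram values below are made up for illustration:

```python
# Macronutrient target from the prompt.
TARGET = {"protein": 169, "carbs": 94, "fats": 50}

# A meal plan as the model might propose it (numbers are illustrative).
meals = {
    "breakfast": {"protein": 55, "carbs": 30, "fats": 18},
    "lunch":     {"protein": 60, "carbs": 34, "fats": 17},
    "dinner":    {"protein": 54, "carbs": 30, "fats": 15},
}

def total_macros(plan):
    """Sum each macronutrient across all meals in the plan."""
    totals = {"protein": 0, "carbs": 0, "fats": 0}
    for macros in plan.values():
        for key, grams in macros.items():
            totals[key] += grams
    return totals

def deviation(plan, target):
    """Per-macro difference between the plan's totals and the target."""
    totals = total_macros(plan)
    return {key: totals[key] - target[key] for key in target}

print(total_macros(meals))     # {'protein': 169, 'carbs': 94, 'fats': 50}
print(deviation(meals, TARGET))  # {'protein': 0, 'carbs': 0, 'fats': 0}
```

You could ask ChatGPT for the plan and the per-ingredient macros, then run a check like this yourself instead of trusting the model’s sums.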
LLMs are good at generating text that sounds human because that’s what they were trained to do. With in-context learning we can prompt an expected structure out of the model.
Using that structure, we can prompt the LLM into using “tools” such as Python functions. This is what LangChain’s LLM Math chain does.
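To make the tool idea concrete, here is a standalone sketch (not LangChain’s actual code) of the pattern: the model emits an arithmetic expression as a structured “tool call”, and our own code evaluates it, so the arithmetic is done by Python rather than by the model. The reply format and tool name are invented for illustration:

```python
import ast
import operator

# Only allow basic arithmetic nodes so an untrusted expression stays safe.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression like '55 + 60 + 54'."""
    def _eval(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval").body)

# Imagine the LLM replied with a structured "tool call" like this:
llm_reply = {"tool": "calculator", "input": "55 + 60 + 54"}

if llm_reply["tool"] == "calculator":
    result = safe_eval(llm_reply["input"])
    print(result)  # 169 -- computed by Python, not by the model
```

LangChain packages this loop up for you (prompting the model to emit the expression, running it, and feeding the result back), which is why it’s worth looking at for this use case.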
If you would like to know more, I suggest doing the LangChain short course and taking a look at LangChain’s website.
It’s important to remember that the gen in Gen AI is short for generative, not general. I agree that the LangChain short course covers how to extend ChatGPT with non-native capabilities, such as reading structured data from spreadsheets and evaluating mathematical expressions. But if you’re new to generative AI, I recommend watching at least the first module of the Building Systems with ChatGPT course first. The ‘how it works’ section should make it clearer why LLMs, and GPT-3 in particular, are much more proficient at text manipulation than at math. The other modules in Building Systems… introduce the idea of chaining ChatGPT calls to achieve more complex results. It’s a good foundation for then taking the LangChain course. HTH
As the other commenters have pointed out, ChatGPT does not understand maths. It is very good at pretending, though.
If you have a ChatGPT Plus account, give the Wolfram plugin a try. Just Google “Wolfram plugin for ChatGPT” and take it from there.
WolframAlpha is really strong with anything math-related.
Feel free to ask questions, if anything is unclear!