Map a Problem to an LLM?

How can we identify whether a problem can or cannot be solved by an LLM?
(LLM tasks: question answering, information extraction, conversation, reasoning, etc., or a combination of them.)
Thank you in advance.


There are a lot of things to consider for an LLM: finding a proper model, architecture, sizing, quantization, and training approach. I see the following at the top:

  1. Privacy / who owns the model, e.g. OpenAI vs. a local instance. There are data-privacy use cases where you don't want the data to leak.
  2. What biases are in the existing training data, and how they will impact your use case. Will you trust the output of the model enough to act on it (what if it hallucinates)? What is your confidence in handing the output to your CEO for an important meeting?
  3. The resourcing required to train, run, and maintain the model: engineers (PhDs) and compute.

I joined this course to improve my understanding and learn how to approach your model question :slight_smile:

The best way would be to try out an LLM (after you have chosen one according to the Gen AI application guide) on a bunch of examples. If you want a more holistic view, you can ask ChatGPT to generate more examples and then feed them to the model, as in the sketch below.
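For example, a minimal spot-check loop might look like this (the `call_llm` stub is hypothetical; wire it to whichever chat API you picked):

```python
# Example-based spot check: run a handful of representative prompts
# through the model and eyeball the answers.
# `call_llm` is a hypothetical stub -- replace it with a real call to
# your chosen model's API.

def call_llm(prompt: str) -> str:
    return f"<model reply to: {prompt[:40]}...>"  # stand-in output

# Examples covering the task types you care about (you can ask ChatGPT
# to generate more of these, including edge cases).
examples = [
    "Extract all dates from: 'The invoice was issued on 2023-05-01.'",
    "Answer: which planet is closest to the Sun?",
    "Summarize in one sentence: 'Large language models are trained...'",
]

for prompt in examples:
    print("PROMPT:", prompt)
    print("ANSWER:", call_llm(prompt))
    print("-" * 40)
```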

Please let me know if you find a better alternative. :slight_smile:

Thank you very much, Tanmay.
What I did was ask ChatGPT whether my problem is a GenAI/LLM problem, whether ChatGPT could solve it, and why. The answer was yes, and the reasoning was convincing enough that it encouraged me to break my problem into smaller tasks. I then played with a toy project, and the results so far are encouraging.
It should be noted that there are dependencies between these small tasks that I ignored to simplify solving the problem. However, to solve the problem the way humans solve it in real scenarios, the dependencies cannot be ignored.
So, the challenge I need to figure out next is how to put all the small tasks together in one prompt to solve the problem the way humans do in real life. Putting the small tasks together in one prompt makes the prompt too big for GPT's current context window (memory), which is 16k tokens as of now.
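To make the dependency issue concrete, here is a rough sketch (the task names and the `call_llm` stub are made-up stand-ins, not my actual tasks): each sub-task's prompt embeds earlier outputs, so running the tasks as a chain keeps each individual prompt small, though it is not the same as solving everything in one shot the way a human would.

```python
# Rough sketch: dependent sub-tasks run as a chain instead of one
# giant prompt. Each template references earlier results by name,
# so every individual call stays well under the context window.
# Task names and `call_llm` are illustrative stand-ins.

def call_llm(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}...>"  # stub

# Sub-tasks listed in dependency order; {placeholders} name earlier results.
subtasks = [
    ("extract_facts", "Extract the key facts:\n{input}"),
    ("check_facts",   "Verify these facts:\n{extract_facts}"),
    ("final_answer",  "Facts:\n{extract_facts}\nChecks:\n{check_facts}\n"
                      "Now answer the original question:\n{input}"),
]

def run_chain(user_input: str) -> dict:
    results = {"input": user_input}
    for name, template in subtasks:
        prompt = template.format(**results)  # unused keys are ignored
        results[name] = call_llm(prompt)
    return results

print(run_chain("Why did revenue drop in Q3?")["final_answer"])
```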

I think I understand your problem. I, too, had a similar problem with my use case. I solved it in an AutoGPT style: I let the model make the decision and then route to each specific task. (A minimal sketch follows the pros/cons below.)

Pros:

  1. It showed correct results most of the time. The rest of the time, it hallucinated or did not provide a relevant answer in the final step.
  2. You can plug different LLMs or normal Python functions into these routes once the model makes its decision. (Check the example of connecting a calculator to GPT-3.5.)

Cons:

  1. It becomes a bit slow when you are routing across different chains of LLMs, especially with GPT-3.5/GPT-4.
  2. If you fine-tune, it can cost you more, depending on how far you are willing to go. (For me, fine-tuning every sub-task was very costly.)
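Here is a minimal sketch of the routing idea (route names, handlers, and the `call_llm` stub are all illustrative, not my actual setup): the model picks a route, and a dispatch table sends the query either to another LLM prompt or to a plain Python function, such as a calculator.

```python
# AutoGPT-style routing sketch: the model chooses a route, then we
# dispatch to another LLM prompt or to a plain Python function.
# Route names and `call_llm` are illustrative assumptions.

def call_llm(prompt: str) -> str:
    return "calculator"  # stub: pretend the model chose this route

def calculator(query: str) -> str:
    # Plain Python handler, e.g. for arithmetic the LLM gets wrong.
    # (eval is unsafe for untrusted input; fine for a sketch.)
    return str(eval(query, {"__builtins__": {}}))

def summarize(query: str) -> str:
    return call_llm(f"Summarize: {query}")

ROUTES = {"calculator": calculator, "summarize": summarize}

def route(query: str) -> str:
    decision = call_llm(
        f"Pick exactly one route for this query ({', '.join(ROUTES)}): {query}"
    ).strip()
    # Fall back to a default route if the model hallucinates a name
    # (this was my Con #1 above).
    handler = ROUTES.get(decision, summarize)
    return handler(query)

print(route("12 * (3 + 4)"))  # -> "84" via the calculator route
```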

You may also check the Router Chain[1] code in LangChain to get a better idea. Please let me know if you find a better solution; it would help me understand better. :slight_smile:
Many thanks,
Tanmay Juneja

Links:

  1. Router | 🦜️🔗 Langchain