Hello!
I’m a complete beginner in the field of deep learning, so please don’t judge too harshly. I have two questions, and I would be grateful if you could help me understand them.
I gave several open LLMs the prompt: “Let’s make five existing words from the letters of the word ‘Incomprehensibilities’, without reusing letters and without adding extra letters,” but none of them could answer correctly. Are LLMs simply not designed for this kind of task, or can they be trained for it? What kind of training would be sufficient?
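(For context, the constraint itself is easy to check mechanically. Here is a small Python sketch of such a check; the function name and example words are just illustrations, not part of my prompt:)

```python
from collections import Counter

def uses_only_available_letters(words, source="incomprehensibilities"):
    # Pool of letters available in the source word (case-insensitive)
    pool = Counter(source.lower())
    # Combined letter demand of all candidate words
    needed = Counter("".join(words).lower())
    # Valid only if no letter is used more times than the pool provides
    return all(pool[ch] >= n for ch, n in needed.items())

# "hen" and "bit" draw their letters from the pool without reuse
print(uses_only_available_letters(["hen", "bit"]))  # True
# "zebra" needs 'z' and 'a', which the source word does not contain
print(uses_only_available_letters(["zebra"]))       # False
```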
And my second question: which neural network architecture is best for playing tic-tac-toe?
I am sorry, but the prompt screenshot you posted is not readable here. All I can see is that you are using Llama-2 models, but how you are using the multi-model session is still unclear.
Did you try these prompts on a GPT-based model? Do you get a similar response there too?