Prompt Engineering with Llama 2 & 3: supplied notebook code error

Hi,
I've just enrolled in this short course and am trying to follow along with Lesson 2, which contains a pre-prepared notebook. However, the first call to the LLM produces an error.

# Pass the prompt to the llama function, store the output as 'response', then print it
response = llama(prompt)
print(response)

{'error': {'message': 'Accessing inference is invalid', 'type': None, 'param': None, 'code': None}}

Can anyone help me?
Many thanks

I ended up creating my own API key on together.ai and updating the url inside the llama helper function in utils.py to url='https://api.together.xyz/inference'.
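For anyone who wants to see the whole picture, here is a minimal sketch of what the patched llama helper in utils.py might look like. To be clear, this is an assumption-laden reconstruction, not the course's actual code: the TOGETHER_API_KEY environment variable, the default parameter values, and the shape of the JSON response are all guesses based on Together's legacy inference endpoint.

import os
import requests

# Sketch of a patched llama() helper that calls Together's legacy
# /inference endpoint directly. Names and defaults are assumptions.
def llama(prompt,
          model="meta-llama/Llama-2-7b-chat-hf",
          url="https://api.together.xyz/inference",
          temperature=0.0,
          max_tokens=1024):
    headers = {
        "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "prompt": f"[INST]{prompt}[/INST]",  # Llama 2 chat instruction tags
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    response = requests.post(url, headers=headers, json=payload)
    response.raise_for_status()
    # The legacy endpoint nests the generated text under output -> choices
    return response.json()["output"]["choices"][0]["text"]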

I believe the Short Courses are currently being updated to fix issues caused by the new versions of the LLM tools.

Here is the simplest hack/workaround I have found: click File > Open, go to the utils.py file, and change the model in the llama function (and in any other similar functions) from "togethercomputer/llama-2-7b-chat" to "meta-llama/Llama-2-7b-chat-hf", since the names the code uses to call the models have apparently changed.
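For reference, the relevant line in utils.py changes from something like

model = "togethercomputer/llama-2-7b-chat"

to

model = "meta-llama/Llama-2-7b-chat-hf"

(the exact variable or parameter name in the course's utils.py may differ; this is only to illustrate which string gets swapped).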

Make sure you save the code in utils.py and then restart the kernel in the main Jupyter notebook file.
This workaround is working at least through Lesson 3; I will post an update if it stops working in the later lessons.

The new model name I gave is simply copied from the list of models on together.ai; the model itself is the same Llama 2 7B Chat.

Subsequently, now that I am on Lesson 4, it is still working fine. That lesson needs the Llama 2 Chat 70B model passed explicitly as a parameter to the llama function; just visit Together.AI, select this model, copy its model key/name, and paste it in to replace the old name.
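For example, a Lesson 4 call that pins the 70B model would look something like this (assuming the helper accepts a model keyword argument, which is how the notebooks appear to use it):

# Pass the updated 70B model name explicitly; the old
# "togethercomputer/llama-2-70b-chat" name may no longer resolve.
response = llama(prompt, model="meta-llama/Llama-2-70b-chat-hf")
print(response)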