Don't hit the maximum tokens length 4097 limit in lesson "Getting started with Llama 2"

Hi, I'm trying the course locally with the Llama 2 model
(ollama run llama2).

But when I execute the part where max_tokens exceeds the 4097 limit, which throws an error in the videos, I don't get any error. Is there a reason? Shouldn't a local run behave the same as the together.ai setup used in the course? Any explanation why I don't hit this max tokens limit when using the same model and prompt?

Thanks

Hey @egutierrez

[quote="egutierrez, post:1, topic:592543, full:true"]
Hi, I'm trying the course locally with the Llama 2 model
(ollama run llama2).

But when I execute the part where max_tokens exceeds the 4097 limit, which throws an error in the videos, I don't get any error. Is there a reason? Shouldn't a local run behave the same as the together.ai setup used in the course? Any explanation why I don't hit this max tokens limit when using the same model and prompt?
[/quote]

It seems you may have hard-coded the text and settings. Here are some things that may be worth checking:

from langchain_community.llms import Ollama

class Config:
    def __init__(self, temperature, max_tokens, frequency_penalty):
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.frequency_penalty = frequency_penalty

prompt_config = Config(0.9, 64, 0.5)

# The settings have to be passed to the model itself; Ollama uses num_predict
# for the generation limit. There is no direct frequency_penalty option
# (repeat_penalty is the closest knob).
llm = Ollama(
    model="llama2:13b-chat-q4_K_M",
    temperature=prompt_config.temperature,
    num_predict=prompt_config.max_tokens,
)

# Something like this
print(llm.invoke("Hello what is your name?"))
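
On the original question: my guess is that together.ai rejects the request server-side when prompt tokens + max_tokens exceed the 4097-token window, while a local Ollama run just truncates the prompt to its context size instead of raising an error. If you want to reproduce something like the limit from the videos locally, here is a rough sketch of a client-side check you could try (the num_ctx setting and the word-count token estimate here are my assumptions, not the course's method):

from langchain_community.llms import Ollama

# Rough client-side check, mimicking the server-side validation the hosted
# API appears to do; a local Ollama run will not raise this error by itself.
CONTEXT_WINDOW = 4097        # prompt tokens + max_tokens must stay within this
MAX_NEW_TOKENS = 1024

llm = Ollama(model="llama2", num_ctx=4096, num_predict=MAX_NEW_TOKENS)

prompt = "Tell me a story. " * 2000          # deliberately long prompt
approx_prompt_tokens = len(prompt.split())   # crude word-count estimate

if approx_prompt_tokens + MAX_NEW_TOKENS > CONTEXT_WINDOW:
    print("Request would exceed the context window; the hosted API would error here.")
else:
    print(llm.invoke(prompt))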

Thanks for asking :smiley: