Using groq with memory=True

hi,
I am using groq llama3 70b as llm and the following code asks for openai api key. I tried everything and only deleting memory=True from the crew actually worked.
Why is that?
and what advantage does this memory parameter brings?

crew = Crew(
    agents=[support_agent, support_quality_assurance_agent],
    tasks=[inquiry_resolution, quality_assurance_review],
    verbose=2,
    memory=True
)

inputs = {
    "customer": "DeepLearningAI",
    "person": "Andrew Ng",
    "inquiry": "I need help with setting up a Crew "
               "and kicking it off, specifically "
               "how can I add memory to my crew? "
               "Can you provide guidance?"
}

result = crew.kickoff(inputs=inputs)

Hi @mehmet_baki_deniz,

If you go through the lecture video where memory=True is first used, the instructor talks in detail about the significance and advantages of having memory.

Best,
Mubsi

Hi Mubsi,
Unfortunately, he does not talk about it. He just says that we can easily use memory (short, medium, and long) with crewAI by setting this parameter.

I believe it is at the beginning of that lecture, or the lecture before, where he introduces those 6 key elements using the slides.


Thank you. I will have a look at that.


Yes… I observed the same with HF. We have to comment out memory=True. But why so?
I understand the concept of memory, but why do we have to comment it out in order for the code to work?

I had the same problem running the code with a local LLM. The error comes from the fact that short-term memory uses embeddings (a way to represent objects, words, or concepts, which helps the agents search the content), and these embedding functions are based on OpenAI by default and require an API key. So if you are not using an OpenAI key, you will get the error. If you comment out the memory parameter, the agents will forget some of the content and the output can be just some thoughts.
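
If I understand the crewai docs correctly, you may be able to keep memory=True and point the memory embeddings at a non-OpenAI provider via the embedder argument on Crew. A rough sketch, assuming your crewai version accepts this argument and you have a local Ollama embedding model pulled (the model name and the exact "provider"/"config" keys are assumptions and may differ between versions):

from crewai import Crew

# Reusing the agents and tasks defined earlier in this thread.
crew = Crew(
    agents=[support_agent, support_quality_assurance_agent],
    tasks=[inquiry_resolution, quality_assurance_review],
    verbose=2,
    memory=True,
    # Assumption: route the memory embeddings to a local Ollama model
    # instead of OpenAI, so no OPENAI_API_KEY should be required.
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},
    },
)

result = crew.kickoff(inputs=inputs)

That way memory stays enabled without the OpenAI key prompt, as long as the local embedding model is actually available.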