Ollama and L3: Multi-agent Customer Support Automation

I am trying to run all the notebooks with Ollama. L2 works fine, as below:

```python
import os

# Import the Ollama module from LangChain
from langchain_community.llms import Ollama

# Initialize an instance of the Ollama model
llm = Ollama(model="llama3")
os.environ["OPENAI_API_KEY"] = "NA"

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=True
)
result = crew.kickoff()
print(result)
```
This works perfectly with L2.
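For completeness, one pattern I have seen in community posts (an assumption on my part, not from the course notebook) is to also point the default OpenAI client at Ollama's OpenAI-compatible endpoint via environment variables, so any component that silently defaults to OpenAI still stays local:

```python
import os

# Sketch based on community workarounds (unverified assumption):
# route the default OpenAI client to Ollama's OpenAI-compatible
# endpoint on the default local port.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # default Ollama port
os.environ["OPENAI_MODEL_NAME"] = "llama3"
os.environ["OPENAI_API_KEY"] = "NA"  # dummy value; Ollama does not check it
```
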
But with the L3 example,

```python
crew = Crew(
    agents=[support_agent, support_quality_assurance_agent],
    tasks=[inquiry_resolution, quality_assurance_review],
    verbose=2,
    memory=True
)

inputs = {
    "customer": "DeepLearningAI",
    "person": "Andrew Ng",
    "inquiry": "I need help with setting up a Crew "
               "and kicking it off, specifically "
               "how can I add memory to my crew? "
               "Can you provide guidance?"
}
result = crew.kickoff(inputs=inputs)
```
I get the following error:

```
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: NA. You can find your API key at openai/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```

L2 works perfectly with Ollama, but L3 gives me this error. If you have any ideas, please share!

I have learned from other GitHub posts that this usually happens because of the `memory=True` option, since the embedder requests embeddings from OpenAI by default. The relevant issue is: "Passing memory=True reaches out to Open AI, even when running locally with Ollama #447".

The solution is in the CrewAI docs at core-concepts/Memory/#using-openai-embeddings-already-default: we can change the embedder. I did as the GitHub issue suggested:

```python
crew = Crew(
    agents=[support_agent, support_quality_assurance_agent],
    tasks=[inquiry_resolution, quality_assurance_review],
    verbose=2,
    memory=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "model": "mixedbread-ai/mxbai-embed-large-v1",  # mixedbread-ai/mxbai-embed-large-v1 on Hugging Face
        }
    }
)
```
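For a fully local setup, an alternative I am considering (an assumption based on the same docs page, not something I have verified end to end) is to point the memory embedder at Ollama itself; `nomic-embed-text` below is just an example embedding model name and would need to be pulled locally first:

```python
# Sketch (unverified assumption): use Ollama as the embedding provider
# so that memory=True never calls OpenAI. "nomic-embed-text" is an
# example model; pull it first with `ollama pull nomic-embed-text`.
embedder_config = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",
    },
}
# This dict would then be passed as:
#   Crew(..., memory=True, embedder=embedder_config)
```
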