2 Minor Issues in an otherwise amazing course :-)

Hi there,

In my humble opinion, this was the best short course so far.

Just came across two minor things, and I’m curious about the best way to iron them out:

  1. I tried using gpt-4o but there was some kind of tokeniser issue. Anyone come across this or know how to stop it from happening?

  2. The final lesson worked great, except that the generated CVs wouldn’t display due to an issue with UTF-8 - a ‘UnicodeDecodeError’. Is there a way to fix this too?

Hi @Martin_Pollard

As gpt-4o is a multimodal LLM and CrewAI is geared more towards language-only models, this could have created some issue. Can you share the tokenizer error you encountered here, with a screenshot?

Can you share a screenshot of the error here? Did you encounter it while running on the course-provided platform or in your local environment?

And I agree, it is really a great course!

Regards
DP

Hello Deepti,
This is the error related to gpt-4o. Hope this helps.
2024-05-20 06:12:36,939 - 8474557120 - manager.py-manager:282 - WARNING: Error in TokenCalcHandler.on_llm_start callback: KeyError('Could not automatically map gpt-4o to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.')

I don't think gpt-4o will work with this, since the tooling expects a text-based model, hence the tokenizer issue.


Hi @Martin_Pollard,

For (1), as Deepti already mentioned, and as you may already understand, gpt-4o was only recently launched, so it might not yet be compatible with the current crewAI libraries. Maybe in the future.

For (2), are you getting this error when running on the platform or locally?

Best,
Mubsi

I ALSO see the "manager.py-manager:282 - WARNING: Error in TokenCalcHandler.on_llm_start callback: KeyError('Could not automatically map gpt-4o to a tokeniser. Please use tiktoken.get_encoding to explicitly get the tokeniser you expect.')" error while using gpt-4o in the L7 lesson. There are many YouTubers posting positive results with CrewAI and gpt-4o, so I do not think it's a 4o issue per se. It appears to be related to the tokens sent in L7, and maybe breaking up the tokens is needed (a quick token-count sketch is below). Will try that next when I get a chance on L7.

Several YouTubers are posting great results (performance/cost) using CrewAI with gpt-4o, FYI.
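
In case it helps, here is a minimal sketch of the "check the token count and split if needed" idea, assuming tiktoken is installed; the cl100k_base encoding, the sample prompt, and the chunk-size threshold are all illustrative assumptions, not CrewAI settings:

import tiktoken

# cl100k_base is the GPT-4 / GPT-3.5 encoding; used here only as a rough proxy for gpt-4o
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Example resume text that would be sent to the crew..."  # hypothetical input
tokens = encoding.encode(prompt)
print(f"Prompt is roughly {len(tokens)} tokens")

# If the count is close to the model's context limit, split into smaller chunks
max_tokens_per_chunk = 4000  # illustrative threshold, not a CrewAI setting
chunks = [encoding.decode(tokens[i:i + max_tokens_per_chunk])
          for i in range(0, len(tokens), max_tokens_per_chunk)]
print(f"Split into {len(chunks)} chunk(s)")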

Hi Mubsi,

It’s locally, but I’ve managed to fix it.

I’ll post the solution below in case there is a better way to do it.


Hi all,

I managed to resolve the second issue with the following code while running locally:

import chardet
from IPython.display import Markdown, display

with open('./tailored_resume.md', 'rb') as f:
    raw_data = f.read(10000)  # read the first 10,000 bytes or less

result = chardet.detect(raw_data)
encoding = result['encoding']
print(f"The detected encoding is: {encoding}")

with open('./tailored_resume.md', 'r', encoding=encoding) as f:
    content = f.read()

display(Markdown(content))

Let me know if there is a better way to do it.
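
For comparison, a simpler variant might be to skip the detection step and let Python substitute any undecodable bytes, assuming the file is mostly UTF-8 (just a sketch, not tested against the course files):

from IPython.display import Markdown, display

# errors="replace" swaps undecodable bytes for U+FFFD instead of raising UnicodeDecodeError
with open('./tailored_resume.md', 'r', encoding='utf-8', errors='replace') as f:
    content = f.read()

display(Markdown(content))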

In the meantime, on the first issue, I had the same error message as the one posted by cwijayasundara. Just wondering, though, in light of art181's posts: perhaps this clue below is a hint? I'm not sure how to implement it, though.

Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect.

Edit: I might have figured this out, but there is a piece of info I can't find - does anyone know what the tokenizer name is for gpt-4o?

Got it to work with gpt-4o!

I added:

import tiktoken

after all of the imports.

# I added this explicit encoding line after all of the imports (but I don't think it is needed)
encoding = tiktoken.get_encoding("cl100k_base")

I pip installed tiktoken of course, but it didn't work with the 1st version, so I did pip install --upgrade tiktoken (this pulled v0.7.0 on my Mac). It complained about langchain version dependencies after the install, but it RAN WELL - no errors, and cheaper than the gpt-4-turbo run time.
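
For reference, a small sketch of what the error message is asking for; this assumes tiktoken >= 0.7.0, where gpt-4o maps to the o200k_base encoding (older versions raise the KeyError shown in the warning):

import tiktoken

# On tiktoken >= 0.7.0 this resolves; older versions raise the KeyError from the warning
enc = tiktoken.encoding_for_model("gpt-4o")
print(enc.name)  # expected: o200k_base

# Or fetch an encoding explicitly, as the error message suggests
enc = tiktoken.get_encoding("o200k_base")
print(len(enc.encode("Hello, world!")))  # token count for a sample string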


Hmm… that line didn’t work on my end. Maybe I used it incorrectly?

The output looks like it worked right, mind you.

Martin: actually it ran without errors for me and produced a much better, more comprehensive interview_materials.md... but I spoke too soon last night. When I looked at tailored_resume.md this morning, it only had a single title line with no content. I ran it again against gpt-4o and got the same results. Brandon, a YouTuber who posts about CrewAI a lot, has a video on the three standard CrewAI use cases: https://www.youtube.com/watch?v=Z_KB91zbG3c and also a GitHub repo with his code, which I ran OK with gpt-4o after making the changes above - so it is possible; I'm not sure what we are missing. I think it is related to library dependencies and versions, which can be a bit of a nightmare to manage in Python. Hope we can figure it out.

Martin: I modified the resume_strategy_task a bit, and now I am getting both output files from the Job_application Crew using gpt-4o. You can DM me if you want to compare notes/environments. I'm on a Mac (Sonoma 14.5) with Python 3.11.

Thanks for the offer, art - I’ll reach out in a moment.

I agree with your statement. I am currently using GPT-3.5 and have found it to be very efficient, as you said. Thank you for your answer.

Alright!

The latest CrewAI release from 12/08/24 fixed the GPT-4o issue!

Thanks for the discussion, everyone.
