In the Question and Answer section, on calling the following code,
response = llm.call_as_llm(f"{qdocs} Question: Please list all your \
shirts with sun protection in a table in markdown and summarize each one.")
I get the following error as of August 1, 2023.
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIError: HTTP code 504 from API 504 Gateway Time-out
This is the same error I get in LangChain Chat with Your Data - Document Loading, when calling YoutubeAudioLoader in [22]: APIError: HTTP code 504 from API.
I've tried this at all times of day. It used to say rate limit exceeded; now it's mostly HTTP 504.
Is there a recommended way to use our key, and if so what kind of charges can we expect?
In your example code snippet
openai = ChatOpenAI(model_name="gpt-3.5-turbo", request_timeout=8)
you are instantiating a ChatOpenAI class object with the model name “gpt-3.5-turbo” and a request_timeout of 8 seconds.
This means that when you use this object to make API requests to the GPT-3.5 model, each request will have a maximum of 8 seconds to generate a response before it times out.
To allow more time per request, increase request_timeout as needed, e.g. request_timeout=60 or request_timeout=120.
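As a rough sketch of what request_timeout does on the client side — each call gets a bounded window before a timeout error is raised — here is a hypothetical, standalone helper (this is not LangChain's actual implementation, just an illustration of the idea):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def call_with_timeout(fn, timeout_s):
    """Run fn, raising TimeoutError if it takes longer than timeout_s —
    roughly what request_timeout enforces on each API call."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn).result(timeout=timeout_s)
```

So a larger request_timeout widens that window, which only helps if the server eventually answers; a 504 means the gateway itself gave up, which no client-side setting can fix.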
Yes, I read it from your suggestion and looked up where a timeout could be set.
The notebook is the same as each of the following ones, which all result in errors, across 2 different courses. Here's the first, which is the one you asked about.
I'd be interested to know what you and others are using: a local Jupyter Notebook with VSCode or PyCharm, Google Colab, or something else? I'm on macOS.
I've been using the deeplearning.ai notebooks as-is so far, not locally or on Google Colab where I could set the OPENAI_API_KEY environment variable. That's how these Short Courses started, without the need to use our own key.
I've seen messages that the Coursera servers were down then up, but it's not entirely clear here whether we are supposed to download all the files and set the environment variable locally, or use something like Google Colab.
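If you do run locally, a minimal sketch of the key setup is below ("sk-placeholder" is a placeholder, not a real key; normally you'd export OPENAI_API_KEY in your shell or load it from a .env file before starting the notebook):

```python
import os

# Fallback: set the key in this process if the shell didn't export it.
# "sk-placeholder" is a placeholder value, not a real key.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
api_key = os.environ["OPENAI_API_KEY"]
```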
I've noticed LangChain for LLM Application Development has a .csv file, OutdoorClothingCatalog_1000.csv. It wasn't previously possible to open and save a local file like that .csv; now it is.
Again, I'd just be interested to know what you and others are using: a local Jupyter Notebook with VSCode or PyCharm (I'm on macOS), Google Colab, or something else?
I get the same error in spite of increasing the timeout. Code:
llm = ChatOpenAI(temperature=0.0, request_timeout=120)
Output of
response = llm.call_as_llm(f"{qdocs} Question: Please list all your \
shirts with sun protection in a table in markdown and summarize each one.")
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIError: HTTP code 504 from API (504 Gateway Time-out).
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIError: HTTP code 504 from API (504 Gateway Time-out).
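The growing waits in those messages (1.0 s, then 2.0 s) are LangChain's retry logic backing off exponentially between attempts. A minimal sketch of that pattern (call_with_retry is a hypothetical helper for illustration, not LangChain's actual code):

```python
import time

def call_with_retry(fn, max_retries=3, base_wait=1.0):
    """Retry fn with exponential backoff (1 s, 2 s, 4 s, ...),
    mirroring the waits visible in the retry log messages."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # give up after the final retry
            time.sleep(base_wait * 2 ** attempt)
```

The backoff helps with transient rate limits, but when every attempt hits a 504 from the gateway, the retries just fail one after another.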
Are you using the online notebook where OPENAI_API_KEY is stored as a local environment variable, as in the first notebook, Models, Prompts and Parsers, for this short course?
If so, do you see the same rate limit error in that section, [12] get_completion("What is 1+1?")?