ChatGPT error

In the "Why finetune" lecture lab, I tried to execute the following statement:

print(chatgpt("Tell me how to train my dog to sit"))

But I get the following error. Any feedback?

print(chatgpt("Tell me how to train my dog to sit"))
status code: 561

HTTPError Traceback (most recent call last)
File /usr/local/lib/python3.9/site-packages/lamini/api/, in make_web_request(key, url, http_method, json)
24 try:
---> 25 resp.raise_for_status()
26 except requests.exceptions.HTTPError as e:

File /usr/local/lib/python3.9/site-packages/requests/, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)

HTTPError: 561 Server Error: Unknown Status Code for url: http://jupyter-api-proxy.internal.dlai/rev-proxy/lamini/v1/completions

During handling of the above exception, another exception occurred:

Hi @Ram_Sastry

Are you trying to run locally?

Best regards

@elirod, no, I ran that particular line directly on the DeepLearning.AI platform.

However, I also downloaded the notebook and tried running it on Colab. After installing lamini (!pip install lamini) and importing everything in the notebook, I get an error on this line:

non_finetuned_output = str(non_finetuned(str("Tell me how to train my dog to sit")))

Inference exception: can only concatenate str (not “NoneType”) to str

TypeError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/lamini/api/ in generate(self, prompt, model_name, output_type, max_tokens, stop_tokens, max_retries, base_delay, local_cache_file)
52 try:
---> 53 result = self.inference_queue.submit(req_data, local_cache_file)
54 break
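That "can only concatenate str (not NoneType) to str" message is the classic symptom of the client returning None (here, most likely because no API key was configured) and the result then being joined to a string. A small sketch of the failure mode and a defensive guard (safe_output is a hypothetical helper, not part of the lamini API):

```python
def safe_output(response):
    """Guard against a None response before string concatenation."""
    # Assumption: the client may return None when the request fails
    # (e.g. missing or invalid API key).
    if response is None:
        return "<no response: check lamini.api_key>"
    return str(response)

# Without the guard, "Output: " + None raises the TypeError seen above.
print("Output: " + safe_output(None))
print("Output: " + safe_output("Sit!"))
```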

Hi Ram_Sastry,

In order to solve this error, you need to do the following two steps:

  1. create an account on the Lamini website, and
  2. go to your account, copy your API key, and paste it into Colab with the command lamini.api_key = "xyz"
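Rather than hard-coding the key in a notebook cell, one option is to read it from an environment variable. A minimal sketch, assuming an environment variable named LAMINI_API_KEY (the helper function is hypothetical, not part of the lamini library):

```python
import os

def get_lamini_api_key(env_var="LAMINI_API_KEY"):
    """Read the Lamini API key from the environment; fail loudly if unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running the notebook")
    return key

# Simulate the key being set (in Colab you would set it once, outside the code)
os.environ["LAMINI_API_KEY"] = "test-key-123"
print(get_lamini_api_key())  # prints: test-key-123
```

In the actual notebook you would then assign lamini.api_key = get_lamini_api_key() before making any calls.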


As for your initial question regarding the status code 561: I think it is because Lamini gates some models behind the pro subscription (I am not sure, though), as with the free tier I got this message: APIError: API error {'detail': "Currently this user has support for base models: ['hf-internal-testing/tiny-random-gpt2', 'EleutherAI/pythia-70m', 'EleutherAI/pythia-70m-deduped', 'EleutherAI/pythia-70m-v0', 'EleutherAI/pythia-70m-deduped-v0', 'EleutherAI/neox-ckpt-pythia-70m-deduped-v0', 'EleutherAI/neox-ckpt-pythia-70m-v1', 'EleutherAI/neox-ckpt-pythia-70m-deduped-v1', 'EleutherAI/gpt-neo-125m', 'EleutherAI/pythia-160m', 'EleutherAI/pythia-160m-deduped', 'EleutherAI/pythia-160m-deduped-v0', 'EleutherAI/neox-ckpt-pythia-70m', 'EleutherAI/neox-ckpt-pythia-160m', 'EleutherAI/neox-ckpt-pythia-160m-deduped-v1', 'EleutherAI/pythia-2.8b', 'EleutherAI/pythia-410m', 'EleutherAI/pythia-410m-v0', 'EleutherAI/pythia-410m-deduped', 'EleutherAI/pythia-410m-deduped-v0', 'EleutherAI/neox-ckpt-pythia-410m', 'EleutherAI/neox-ckpt-pythia-410m-deduped-v1', 'cerebras/Cerebras-GPT-111M', 'cerebras/Cerebras-GPT-256M', 'meta-llama/Llama-2-7b-hf', 'meta-llama/Llama-2-7b-chat-hf', 'meta-llama/Llama-2-13b-chat-hf', 'meta-llama/Llama-2-70b-chat-hf', 'Intel/neural-chat-7b-v3-1', 'mistralai/Mistral-7B-Instruct-v0.1', 'microsoft/phi-2']. Need help? Email us at"}