Finetuning LLMs - how to fix the "Invalid token" error

I am getting an "Invalid token" error when trying to run the notebook in the “Why finetune” section of the course.
Any idea on how I can resolve this?

HTTPError                                 Traceback (most recent call last)
File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/util/, in powerml_send_query_to_url(params, route)
    131     response = requests.post(
    132         url=url + route, headers=headers, json=params, timeout=200
    133     )
--> 134     response.raise_for_status()
    135 except requests.exceptions.Timeout:

File ~/opt/anaconda3/lib/python3.9/site-packages/requests/, in Response.raise_for_status(self)
   1020 if http_error_msg:
-> 1021     raise HTTPError(http_error_msg, response=self)

HTTPError: 401 Client Error: Unauthorized for url:

During handling of the above exception, another exception occurred:

AuthenticationError                       Traceback (most recent call last)
Cell In[17], line 1
----> 1 non_finetuned_output = non_finetuned("Tell me how to train my dog to sit")

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/runners/, in BasicModelRunner.__call__(self, inputs)
     47 else:
     48     # Singleton
     49     input_objects = Input(input=inputs)
---> 50 output_objects = self.llm(
     51     input=input_objects,
     52     output_type=Output,
     53     model_name=self.model_name,
     54     enable_peft=self.enable_peft,
     55 )
     56 if isinstance(output_objects, list):
     57     outputs = [o.output for o in output_objects]

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/, in Builder.__call__(self, input, output_type, *args, **kwargs)
     83 else:
     84     value = self.add_model(input, output_type, *args, **kwargs)
---> 85     result = gen_value(value)
     86     return result

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/util/, in gen_value(value)
    202 def gen_value(value: Value):
--> 203     value._compute_value()
    204     return value._data

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/, in Value._compute_value(self)
     60 else:
     61     params = {
     62         "program": self._function.program.to_dict(),
     63         "requested_values": [self._index],
     64     }
---> 65     response = query_run_program(params)
     67     response.raise_for_status()
     69     # update the cache

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/util/, in query_run_program(params)
     10 def query_run_program(params):
---> 11     resp = powerml_send_query_to_url(params, "/v1/llama/run_program")
     12     return resp

File ~/opt/anaconda3/lib/python3.9/site-packages/llama/program/util/, in powerml_send_query_to_url(params, route)
    157     except Exception:
    158         json_response = {}
--> 159     raise llama.error.AuthenticationError(
    160         json_response.get("detail", "AuthenticationError")
    161     )
    162 if response.status_code == 400:
    163     try:

AuthenticationError: Invalid token
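For context, the traceback above follows a common wrapping pattern: the HTTP layer raises on a 401, and the client library catches it and re-raises a higher-level AuthenticationError. A minimal sketch of that pattern (class and function names here are illustrative, not the actual llama package source):

```python
class AuthenticationError(Exception):
    """Raised when the API rejects the supplied token."""

def check_response(status_code, detail="AuthenticationError"):
    # Mirrors what raise_for_status() triggers above: a 401 means the
    # token was missing, expired, or malformed, so it becomes a
    # library-level authentication error instead of a bare HTTPError.
    if status_code == 401:
        raise AuthenticationError(detail)
    if status_code >= 400:
        raise RuntimeError(f"HTTP {status_code}")

try:
    check_response(401, "Invalid token")
except AuthenticationError as err:
    print(err)  # Invalid token
```

So "Invalid token" is the server's detail message for a 401 response: the request reached the API, but the credentials it carried were rejected.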

Are you executing this cell on the lab environment or on your personal computer?

lab environment.

I’m able to run the lab.
Please reply with the output of :

import os
os.environ

I’m looking for the following keys:


I couldn’t find them in the output:

environ({'SHELL': '/bin/zsh', 'TMPDIR': '/var/folders/vm/cqtn78px0w38lrlkwm52p3nc0000gn/T/', 'CONDA_SHLVL': '1', 'CONDA_PROMPT_MODIFIER': '(base) ', 'LC_ALL': 'en_US.UTF-8', 'USER': 'user', 'COMMAND_MODE': 'unix2003', 'CONDA_EXE': '/Users/user/opt/anaconda3/bin/conda', 'SSH_AUTH_SOCK': '/private/tmp/', '__CF_USER_TEXT_ENCODING': '0x1F5:0:2', '_CE_CONDA': '', 'CONDA_ROOT': '/Users/user/opt/anaconda3', 'PATH': '/Users/user/opt/anaconda3/bin:/Users/user/opt/anaconda3/condabin:/usr/bin:/bin:/usr/sbin:/sbin', 'LaunchInstanceID': 'AEE46CD4-2652-443E-B7A0-857DF36F671C', 'CONDA_PREFIX': '/Users/user/opt/anaconda3', '__CFBundleIdentifier': '', 'PWD': '/Users/user', 'LANG': 'en_US.UTF-8', 'EVENT_NOKQUEUE': '1', 'XPC_FLAGS': '0x0', '_CE_M': '', 'XPC_SERVICE_NAME': '0', 'HOME': '/Users/user', 'SHLVL': '2', 'CONDA_PYTHON_EXE': '/Users/user/opt/anaconda3/bin/python', 'LOGNAME': 'user', 'LC_CTYPE': 'UTF-8', 'CONDA_DEFAULT_ENV': 'base', 'SECURITYSESSIONID': '186a5', '_': '/Users/user/opt/anaconda3/bin/jupyter', 'PYDEVD_USE_FRAME_EVAL': 'NO', 'JPY_SESSION_NAME': 'Courses - Deep Learning/ea619c77-14ce-45b9-b099-3b6a2c7fed8b', 'JPY_PARENT_PID': '10448', 'TERM': 'xterm-color', 'CLICOLOR': '1', 'FORCE_COLOR': '1', 'CLICOLOR_FORCE': '1', 'PAGER': 'cat', 'GIT_PAGER': 'cat', 'MPLBACKEND': 'module://matplotlib_inline.backend_inline'})
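(A quick way to check for the expected keys programmatically; the key names below are placeholders, since the actual names weren't quoted in this thread:)

```python
import os

# Placeholder key names -- substitute the variables the lab actually
# expects; they were not quoted in this thread.
REQUIRED_KEYS = ["POWERML__PRODUCTION__KEY", "POWERML__PRODUCTION__URL"]

# Collect any expected variables that are absent from the environment.
missing = [k for k in REQUIRED_KEYS if k not in os.environ]
if missing:
    print("Missing environment variables:", missing)
else:
    print("All expected keys are set.")
```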

Thanks for the information.
Your issue has been reported to the staff.

Can you try reloading the lab and restarting the kernel?
I just reloaded the lab and it ran OK; the environment variables are in place… so it may have been a transitory issue…?


I got the same error in my local environment. If the model can’t be run in a local environment, then this class isn’t practical in some ways.

That’s not the case. If you want to run it locally, please look at the package docs for how to get an API key.
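Roughly, local setup boils down to making a key available before any model call; a minimal sketch, with a placeholder variable name (check the package docs for the one the library actually reads):

```python
import os

# Placeholder name -- the package docs specify which variable or
# config file the library actually reads.
API_KEY_VAR = "LLAMA_API_KEY"

def get_api_key():
    """Fail early with a clear message instead of a 401 deep in a call."""
    key = os.environ.get(API_KEY_VAR)
    if not key:
        raise RuntimeError(
            f"Set {API_KEY_VAR} before running locally; see the package "
            "docs for how to obtain a key."
        )
    return key
```

Checking for the key up front turns a confusing mid-call AuthenticationError into an immediate, actionable message.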


Thanks for the reply. But where can I find the package docs? I couldn’t find them. Do you have a link?

Please start here


I think it was. It’s working now.

Thank you for your responses

I’m receiving the same error when I try to run the code locally.