03_Instruction_tuning_lab_student: TypeError problem

non_instruct_model = BasicModelRunner("meta-llama/Llama-2-7b-hf")
non_instruct_output = non_instruct_model("Tell me how to train my dog to sit")
print("Not instruction-tuned output (Llama 2 Base):", non_instruct_output)

I ran this code and got a TypeError:

TypeError: can only concatenate str (not "NoneType") to str

Could anybody help me solve this problem?

2 Likes

Please post a screen capture image that shows the entire error message. It should indicate which line threw the error.

Here are the error details:

LAMINI CONFIGURATION
{}
LAMINI CONFIGURATION
{}
LAMINI CONFIGURATION
{}
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[12], line 2
      1 non_instruct_model = BasicModelRunner("meta-llama/Llama-2-7b-hf")
----> 2 non_instruct_output = non_instruct_model("Tell me how to train my dog to sit")
      3 print("Not instruction-tuned output (Llama 2 Base):", non_instruct_output)

File /usr/local/lib/python3.9/site-packages/lamini/runners/base_runner.py:28, in BaseRunner.__call__(self, prompt, system_prompt, output_type, max_tokens)
     21 def __call__(
     22     self,
     23     prompt: Union[str, List[str]],
   (...)
     26     max_tokens: Optional[int] = None,
     27 ):
---> 28     return self.call(prompt, system_prompt, output_type, max_tokens)

File /usr/local/lib/python3.9/site-packages/lamini/runners/base_runner.py:39, in BaseRunner.call(self, prompt, system_prompt, output_type, max_tokens)
     30 def call(
     31     self,
     32     prompt: Union[str, List[str]],
   (...)
     35     max_tokens: Optional[int] = None,
     36 ):
     37     input_objects = self.create_final_prompts(prompt, system_prompt)
---> 39     return self.lamini_api.generate(
     40         prompt=input_objects,
     41         model_name=self.model_name,
     42         max_tokens=max_tokens,
     43         output_type=output_type,
     44     )

File /usr/local/lib/python3.9/site-packages/lamini/api/lamini.py:46, in Lamini.generate(self, prompt, model_name, output_type, max_tokens, stop_tokens)
     31 def generate(
     32     self,
     33     prompt: Union[str, List[str]],
   (...)
     37     stop_tokens: Optional[List[str]] = None,
     38 ):
     39     req_data = self.make_llm_req_map(
     40         prompt=prompt,
     41         model_name=model_name or self.model_name,
   (...)
     44         stop_tokens=stop_tokens,
     45     )
---> 46     result = self.inference_queue.submit(req_data)
     47     if isinstance(prompt, str) and len(result) == 1:
     48         if output_type is None:

File /usr/local/lib/python3.9/site-packages/lamini/api/inference_queue.py:41, in InferenceQueue.submit(self, request)
     39 # Wait for all the results to come back
     40 for result in results:
---> 41     result.result()
     43 # Combine the results and return them
     44 return self.combine_results(results)

File /usr/local/lib/python3.9/concurrent/futures/_base.py:439, in Future.result(self, timeout)
    437     raise CancelledError()
    438 elif self._state == FINISHED:
--> 439     return self.__get_result()
    441 self._condition.wait(timeout)
    443 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File /usr/local/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
    389 if self._exception:
    390     try:
--> 391         raise self._exception
    392     finally:
    393         # Break a reference cycle with the exception in self._exception
    394         self = None

File /usr/local/lib/python3.9/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File /usr/local/lib/python3.9/site-packages/lamini/api/inference_queue.py:103, in process_batch(key, api_prefix, batch)
    101 def process_batch(key, api_prefix, batch):
    102     url = api_prefix + "completions"
--> 103     result = make_web_request(key, url, "post", batch)
    104     return result

File /usr/local/lib/python3.9/site-packages/lamini/api/rest_requests.py:16, in make_web_request(key, url, http_method, json)
     13 def make_web_request(key, url, http_method, json=None):
     14     headers = {
     15         "Content-Type": "application/json",
---> 16         "Authorization": "Bearer " + key,
     17     }
     18     if http_method == "post":
     19         resp = requests.post(url=url, headers=headers, json=json)

TypeError: can only concatenate str (not "NoneType") to str
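
Looking at the bottom frame, the failure is in rest_requests.py line 16, where the Authorization header is built as "Bearer " + key; key is None here because no API key was configured. A minimal sketch of just that step:

key = None  # what make_web_request receives when no API key is configured
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + key,  # str + None raises the TypeError above
}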

3 Likes

Same error message for me. Looking forward to a resolution.

1 Like

Adding an api_key argument to non_instruct_model = BasicModelRunner("meta-llama/Llama-2-7b-hf") solved the issue for me :slightly_smiling_face:
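
For anyone following along, a minimal sketch of what that looks like (the import path and the placeholder key are assumptions; use whatever key your notebook environment provides):

from lamini import BasicModelRunner  # assumed import path; the lab notebook may already import this

non_instruct_model = BasicModelRunner(
    "meta-llama/Llama-2-7b-hf",
    api_key="<YOUR_LAMINI_API_KEY>",  # placeholder -- substitute your own key
)
non_instruct_output = non_instruct_model("Tell me how to train my dog to sit")
print("Not instruction-tuned output (Llama 2 Base):", non_instruct_output)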

1 Like

That doesn't fix the problem for me; the error message still appears.

I’m having the same issue. I’m getting a 401 error with the token being invalid:
LAMINI CONFIGURATION
{}
LAMINI CONFIGURATION
{}
LAMINI CONFIGURATION
{}
status code: 401

HTTPError                                 Traceback (most recent call last)
File /usr/local/lib/python3.9/site-packages/lamini/api/rest_requests.py:25, in make_web_request(key, url, http_method, json)
     24 try:
---> 25     resp.raise_for_status()
     26 except requests.exceptions.HTTPError as e:

File /usr/local/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
   1020 if http_error_msg:
-> 1021     raise HTTPError(http_error_msg, response=self)

HTTPError: 401 Client Error: Unauthorized for url: https://api.lamini.ai/v1/completions

During handling of the above exception, another exception occurred:

AuthenticationError                       Traceback (most recent call last)
Cell In[30], line 2
      1 non_instruct_model = BasicModelRunner("meta-llama/Llama-2-7b-hf", api_key=env_vars["POWERML__PRODUCTION__KEY"])
----> 2 non_instruct_output = non_instruct_model("Tell me how to train my dog to sit")
      4 print("After non_instruct_model")
      5 print("Not instruction-tuned output (Llama 2 Base):", non_instruct_output)

File /usr/local/lib/python3.9/site-packages/lamini/runners/base_runner.py:28, in BaseRunner.__call__(self, prompt, system_prompt, output_type, max_tokens)
     21 def __call__(
     22     self,
     23     prompt: Union[str, List[str]],
   (...)
     26     max_tokens: Optional[int] = None,
     27 ):
---> 28     return self.call(prompt, system_prompt, output_type, max_tokens)

File /usr/local/lib/python3.9/site-packages/lamini/runners/base_runner.py:39, in BaseRunner.call(self, prompt, system_prompt, output_type, max_tokens)
     30 def call(
     31     self,
     32     prompt: Union[str, List[str]],
   (...)
     35     max_tokens: Optional[int] = None,
     36 ):
     37     input_objects = self.create_final_prompts(prompt, system_prompt)
---> 39     return self.lamini_api.generate(
     40         prompt=input_objects,
     41         model_name=self.model_name,
     42         max_tokens=max_tokens,
     43         output_type=output_type,
     44     )

File /usr/local/lib/python3.9/site-packages/lamini/api/lamini.py:46, in Lamini.generate(self, prompt, model_name, output_type, max_tokens, stop_tokens)
     31 def generate(
     32     self,
     33     prompt: Union[str, List[str]],
   (...)
     37     stop_tokens: Optional[List[str]] = None,
     38 ):
     39     req_data = self.make_llm_req_map(
     40         prompt=prompt,
     41         model_name=model_name or self.model_name,
   (...)
     44         stop_tokens=stop_tokens,
     45     )
---> 46     result = self.inference_queue.submit(req_data)
     47     if isinstance(prompt, str) and len(result) == 1:
     48         if output_type is None:

File /usr/local/lib/python3.9/site-packages/lamini/api/inference_queue.py:41, in InferenceQueue.submit(self, request)
     39 # Wait for all the results to come back
     40 for result in results:
---> 41     result.result()
     43 # Combine the results and return them
     44 return self.combine_results(results)

File /usr/local/lib/python3.9/concurrent/futures/_base.py:446, in Future.result(self, timeout)
    444     raise CancelledError()
    445 elif self._state == FINISHED:
--> 446     return self.__get_result()
    447 else:
    448     raise TimeoutError()

File /usr/local/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self)
    389 if self._exception:
    390     try:
--> 391         raise self._exception
    392     finally:
    393         # Break a reference cycle with the exception in self._exception
    394         self = None

File /usr/local/lib/python3.9/concurrent/futures/thread.py:58, in _WorkItem.run(self)
     55     return
     57 try:
---> 58     result = self.fn(*self.args, **self.kwargs)
     59 except BaseException as exc:
     60     self.future.set_exception(exc)

File /usr/local/lib/python3.9/site-packages/lamini/api/inference_queue.py:103, in process_batch(key, api_prefix, batch)
    101 def process_batch(key, api_prefix, batch):
    102     url = api_prefix + "completions"
--> 103     result = make_web_request(key, url, "post", batch)
    104     return result

File /usr/local/lib/python3.9/site-packages/lamini/api/rest_requests.py:45, in make_web_request(key, url, http_method, json)
     43     except Exception:
     44         json_response = {}
---> 45     raise AuthenticationError(
     46         json_response.get("detail", "AuthenticationError")
     47     )
     48 if resp.status_code == 400:
     49     try:

AuthenticationError: Invalid token
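
In case it helps with debugging, a quick way to confirm the key is actually set in the environment (note the double underscores in the variable names):

import os

# If either of these prints None, the environment isn't supplying the
# Lamini credentials, and any key built from them will be empty or invalid.
print(os.getenv("POWERML__PRODUCTION__KEY"))
print(os.getenv("POWERML__PRODUCTION__URL"))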

I am having the same error. Could anyone advise on how to fix it?

Same error. How to fix? Anyone?

I’m having the same issue.

It sure would be great if someone on the support team responded to this…

Ok, I had a typo in the code that grabs the env vars: they have double underscores, not single. It’s working for me with this:

import os
import lamini
from lamini import BasicModelRunner  # assumed import path; the lab notebook may already provide this

# Note the DOUBLE underscores in the env var names.
lamini.api_url = os.getenv("POWERML__PRODUCTION__URL")
lamini.api_key = os.getenv("POWERML__PRODUCTION__KEY")
non_instruct_model = BasicModelRunner("meta-llama/Llama-2-7b-hf",
                                      api_key=lamini.api_key)
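
With lamini.api_key set at module level, the explicit api_key argument is presumably redundant, but passing it doesn't hurt.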
2 Likes

That snippet is part of 01_Why_finetuning_lab_student.

We are all facing this problem with 03_Instruction_tuning_lab_student.

Mentors, please help with this. @Deepti_Prasad @jyadav202

Today is 08/10/24 and it worked for me!

@mubsi, there may be an issue in this short course which needs attention.

This has been reported to the team.

1 Like

Hi all,

All the notebooks of this course have been fixed.

Best,
Mubsi