Is this course broken?

I’m new to the site. When I try to run the first activity in the built-in browser-based Jupyter Notebook, I get this error on the third command. Is there something I’m missing? Am I supposed to set this up in an independent environment?

---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
Cell In[4], line 6
      2 from helper import get_together_api_key,load_env 
      4 client = Together(api_key=get_together_api_key())
----> 6 output = client.chat.completions.create(
      7     model="meta-llama/Llama-3-70b-chat-hf",
      8     messages=[
      9         {"role": "system", "content": system_prompt},
     10         {"role": "user", "content": world_prompt}
     11     ],
     12 )

File /usr/local/lib/python3.11/site-packages/together/resources/chat/completions.py:136, in ChatCompletions.create(self, messages, model, max_tokens, stop, temperature, top_p, top_k, repetition_penalty, presence_penalty, frequency_penalty, min_p, logit_bias, stream, logprobs, echo, n, safety_model, response_format, tools, tool_choice)
    109 requestor = api_requestor.APIRequestor(
    110     client=self._client,
    111 )
    113 parameter_payload = ChatCompletionRequest(
    114     model=model,
    115     messages=messages,
   (...)
    133     tool_choice=tool_choice,
    134 ).model_dump()
--> 136 response, _, _ = requestor.request(
    137     options=TogetherRequest(
    138         method="POST",
    139         url="chat/completions",
    140         params=parameter_payload,
    141     ),
    142     stream=stream,
    143 )
    145 if stream:
    146     # must be an iterator
    147     assert not isinstance(response, TogetherResponse)

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:249, in APIRequestor.request(self, options, stream, remaining_retries, request_timeout)
    231 def request(
    232     self,
    233     options: TogetherRequest,
   (...)
    240     str | None,
    241 ]:
    242     result = self.request_raw(
    243         options=options,
    244         remaining_retries=remaining_retries or self.retries,
    245         stream=stream,
    246         request_timeout=request_timeout,
    247     )
--> 249     resp, got_stream = self._interpret_response(result, stream)
    250     return resp, got_stream, self.api_key

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:620, in APIRequestor._interpret_response(self, result, stream)
    612     return (
    613         self._interpret_response_line(
    614             line, result.status_code, result.headers, stream=True
    615         )
    616         for line in parse_stream(result.iter_lines())
    617     ), True
    618 else:
    619     return (
--> 620         self._interpret_response_line(
    621             result.content.decode("utf-8"),
    622             result.status_code,
    623             result.headers,
    624             stream=False,
    625         ),
    626         False,
    627     )

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:689, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    687 # Handle streaming errors
    688 if not 200 <= rcode < 300:
--> 689     raise self.handle_error_response(resp, rcode, stream_error=stream)
    690 return resp

InvalidRequestError: Error code: 400 - {"message": "\"prompt: Required\" OR \"tools: Expected array, received null\", \"tool_choice: Expected object, received null\"", "type_": "invalid_request_error", "param": null, "code": null}

I get the same error.

@mubsi, maybe this short course needs an update to a different Llama version?

Thanks @TMosh, I’ll get this checked.


I was able to resolve the same issue. Here’s what I did:
The Together server was updated recently and it now rejects the null values passed to "tools" and "tool_choice". We can pass tools as an empty list, and we have to pass tool_choice as "auto".
Use:
output = client.chat.completions.create(
    model="meta-llama/Llama-3-70b-chat-hf",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": world_prompt}
    ],
    tools=[],
    tool_choice="auto")

The reason for using "auto" for tool_choice is that the updated server rejects null, and "auto" is a valid (non-null) enum value.
The reason for using tools=[] is that an empty list is a valid value indicating that no tools are being used.
Hope this helps!
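If you’d rather not edit every call site in the notebook, a small wrapper can apply these defaults in one place. This is just a sketch; build_chat_kwargs is a hypothetical helper, not part of the course code:

```python
def build_chat_kwargs(model, messages, tools=None, tool_choice=None):
    """Default the fields the updated Together server now requires:
    it returns a 400 when tools/tool_choice arrive as null, so we
    substitute an empty list and "auto" respectively."""
    return {
        "model": model,
        "messages": messages,
        "tools": tools if tools is not None else [],
        "tool_choice": tool_choice if tool_choice is not None else "auto",
    }

kwargs = build_chat_kwargs(
    "meta-llama/Llama-3-70b-chat-hf",
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Hello"}],
)
# Then: output = client.chat.completions.create(**kwargs)
```

The idea is simply to guarantee that neither field can reach the server as null, while still letting callers pass real tools later.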


Thanks, this worked for me. For others following along: those are square brackets, i.e. tools=[].


I found a new bug, I think.

Please, if you can be of any help:
Another bug - APIError: Error code: 422


Thank you, this worked for me also

Hi Mubsi,

There is also an error in the embedding tutorial using Google Vertex AI (Understanding and Applying Text Embeddings - DeepLearning.AI). The tutorial throws an error when I run this cell:

embedding = embedding_model.get_embeddings(
    ["life"])

Error in the L1-Embeddings-api-intro

Getting Started With Text Embeddings

Hi @Anuj_sharma3,

please post a screenshot of the error here.

Also, kindly always create a new topic.

Hi @Deepti_Prasad

Please find the screenshot. I have also given feedback on the course with the full exception copied into it.

Anuj

I need to see the complete error. If the error log is lengthy, take two separate screenshots and post them here.

Where did you give the feedback, at the end of the course page?

You could always post such feedback here by selecting the Learner’s Feedback category.

I have attached the error as text.

Also, I have attached the screenshots.

  • I copied the exception thrown into the course feedback:

    ---------------------------------------------------------------------------
    JSONDecodeError                           Traceback (most recent call last)
    File /usr/local/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
        970 try:
    --> 971     return complexjson.loads(self.text, **kwargs)
        972 except JSONDecodeError as e:
        973     # Catch JSON-related errors and raise as requests.JSONDecodeError
        974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

    File /usr/local/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
        343 if (cls is None and object_hook is None and
        344         parse_int is None and parse_float is None and
        345         parse_constant is None and object_pairs_hook is None and not kw):
    --> 346     return _default_decoder.decode(s)
        347 if cls is None:

    File /usr/local/lib/python3.10/json/decoder.py:340, in JSONDecoder.decode(self, s, _w)
        339 if end != len(s):
    --> 340     raise JSONDecodeError("Extra data", s, end)
        341 return obj

    JSONDecodeError: Extra data: line 1 column 5 (char 4)

    During handling of the above exception, another exception occurred:

    JSONDecodeError                           Traceback (most recent call last)
    Cell In[7], line 1
    ----> 1 embedding = embedding_model.get_embeddings(
          2     ["life"])

    File /usr/local/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:508, in TextEmbeddingModel.get_embeddings(self, texts)
        494 def get_embeddings(self, texts: List[str]) -> List["TextEmbedding"]:
        495     # instances = [{"content": str(text)} for text in texts]
        496     #
       (...)
        506     #     for prediction in prediction_response.predictions
        507     # ]
    --> 508     prediction_response = self._dlai_custom_api(texts)
        509     return [
        510         TextEmbedding(
        511             values=prediction
        512         )
        513         for prediction in prediction_response
        514     ]

    File /usr/local/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:532, in TextEmbeddingModel._dlai_custom_api(self, texts)
        527 headers = {
        528     'Content-Type': 'application/json',
        529     'Authorization': f'Bearer {API_KEY}'
        530 }
        531 response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
    --> 532 return response.json()

    File /usr/local/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
        971     return complexjson.loads(self.text, **kwargs)
        972 except JSONDecodeError as e:
        973     # Catch JSON-related errors and raise as requests.JSONDecodeError
        974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
    --> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

    JSONDecodeError: Extra data: line 1 column 5 (char 4)
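For reference, the "Extra data: line 1 column 5 (char 4)" part of this traceback means the response body was not JSON at all: the parser consumed the first four characters as a complete JSON value and then hit trailing text. A plain-text body such as "404 page not found" reproduces exactly those offsets, which suggests the endpoint is returning an error page rather than JSON. A minimal sketch of how to surface the raw body instead of a bare traceback (safe_json is an illustrative helper, not part of the course code):

```python
import json

def safe_json(text):
    """Parse text as JSON; on failure, return the parse error together
    with the raw body so the actual server response can be inspected."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        return {"parse_error": str(e), "raw_body": text}

# A plain-text body starting with a number reproduces the thread's error:
# "404" parses as the JSON value 404, then " page..." is trailing data.
result = safe_json("404 page not found")
print(result["parse_error"])  # Extra data: line 1 column 5 (char 4)
```

In the lab you would call something like safe_json(response.text) at the point where response.json() currently fails, so the real server message (likely a "model not found" or similar error page) becomes visible.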

Hey, thanks for the fix, I really appreciate it!

Hi @Mubsi,

can you please look into Anuj’s error?

Thank you,
DP

Hi all, sorry I forgot to leave a reply. I did forward this to the team.

The model which this lab is using is no longer available (deprecated by Google).

An updated notebook is underway.


Thanks for reporting! This is because the embedding model is now deprecated. We’re currently updating the models (and also validating against the latest updates from Gemini), and we’ll update the notebooks!