Is this course broken?

I’m new to the site. When I try to run the first activity in the built-in browser-based Jupyter Notebook, I get the error below on the third command. Is there something I’m missing? Am I supposed to set this up in an independent environment?

---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
Cell In[4], line 6
      2 from helper import get_together_api_key,load_env 
      4 client = Together(api_key=get_together_api_key())
----> 6 output = client.chat.completions.create(
      7     model="meta-llama/Llama-3-70b-chat-hf",
      8     messages=[
      9         {"role": "system", "content": system_prompt},
     10         {"role": "user", "content": world_prompt}
     11     ],
     12 )

File /usr/local/lib/python3.11/site-packages/together/resources/chat/completions.py:136, in ChatCompletions.create(self, messages, model, max_tokens, stop, temperature, top_p, top_k, repetition_penalty, presence_penalty, frequency_penalty, min_p, logit_bias, stream, logprobs, echo, n, safety_model, response_format, tools, tool_choice)
    109 requestor = api_requestor.APIRequestor(
    110     client=self._client,
    111 )
    113 parameter_payload = ChatCompletionRequest(
    114     model=model,
    115     messages=messages,
   (...)
    133     tool_choice=tool_choice,
    134 ).model_dump()
--> 136 response, _, _ = requestor.request(
    137     options=TogetherRequest(
    138         method="POST",
    139         url="chat/completions",
    140         params=parameter_payload,
    141     ),
    142     stream=stream,
    143 )
    145 if stream:
    146     # must be an iterator
    147     assert not isinstance(response, TogetherResponse)

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:249, in APIRequestor.request(self, options, stream, remaining_retries, request_timeout)
    231 def request(
    232     self,
    233     options: TogetherRequest,
   (...)
    240     str | None,
    241 ]:
    242     result = self.request_raw(
    243         options=options,
    244         remaining_retries=remaining_retries or self.retries,
    245         stream=stream,
    246         request_timeout=request_timeout,
    247     )
--> 249     resp, got_stream = self._interpret_response(result, stream)
    250     return resp, got_stream, self.api_key

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:620, in APIRequestor._interpret_response(self, result, stream)
    612     return (
    613         self._interpret_response_line(
    614             line, result.status_code, result.headers, stream=True
    615         )
    616         for line in parse_stream(result.iter_lines())
    617     ), True
    618 else:
    619     return (
--> 620         self._interpret_response_line(
    621             result.content.decode("utf-8"),
    622             result.status_code,
    623             result.headers,
    624             stream=False,
    625         ),
    626         False,
    627     )

File /usr/local/lib/python3.11/site-packages/together/abstract/api_requestor.py:689, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    687 # Handle streaming errors
    688 if not 200 <= rcode < 300:
--> 689     raise self.handle_error_response(resp, rcode, stream_error=stream)
    690 return resp

InvalidRequestError: Error code: 400 - {"message": "\"prompt: Required\" OR \"tools: Expected array, received null\", \"tool_choice: Expected object, received null\"", "type_": "invalid_request_error", "param": null, "code": null}

I get the same error.

@mubsi, maybe this short course needs an update to a different llama version?

Thanks @TMosh, I’ll get this checked.


I was able to resolve the same issue. Here’s what I did:
The Together server was updated recently, and it now rejects the null values being passed for "tools" and "tool_choice". We can pass tools as an empty list, and we have to pass tool_choice as "auto".
Use:
output = client.chat.completions.create(
    model="meta-llama/Llama-3-70b-chat-hf",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": world_prompt}
    ],
    tools=[],
    tool_choice="auto",
)

The reason for using "auto" for tool_choice is that the updated server rejects null, and "auto" is a valid (non-null) enum value to send.
The reason for using tools=[] is that an empty list is a valid value indicating that no tools are being used.
Hope this helps!
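If your notebook makes this call in several cells, one option is to build the request arguments in a small helper so the fix lives in one place. This is just a sketch: `build_chat_kwargs` is a name I made up, and it assumes the fix above (empty `tools`, `"auto"` tool_choice) is all the server needs.

```python
def build_chat_kwargs(model, messages, **overrides):
    """Build kwargs for client.chat.completions.create, forcing
    non-null tools/tool_choice so the updated server accepts the request."""
    kwargs = {
        "model": model,
        "messages": messages,
        "tools": [],           # empty list: no tools in use (avoids null)
        "tool_choice": "auto", # valid enum value instead of null
    }
    kwargs.update(overrides)   # let callers override any field if needed
    return kwargs
```

Then each cell becomes `output = client.chat.completions.create(**build_chat_kwargs(model, messages))`, and if the server’s requirements change again you only edit the helper.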


Thanks, this worked for me. For others following along: those are square brackets, i.e. tools=[].

I think I found a new bug.

Please help if you can. Another bug:
APIError: Error code: 422