Lesson 4 - JSONDecodeError

When running cell 13:

response = deployed_model.predict(PROMPT)

I get the following error:

---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File /usr/local/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File /usr/local/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /usr/local/lib/python3.10/json/decoder.py:340, in JSONDecoder.decode(self, s, _w)
    339 if end != len(s):
--> 340     raise JSONDecodeError("Extra data", s, end)
    341 return obj

JSONDecodeError: Extra data: line 1 column 5 (char 4)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[13], line 3
      1 ### depending on the latency of your prompt
      2 ### it can take some time to load
----> 3 response = deployed_model.predict(PROMPT)

File /usr/local/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:426, in TextGenerationModel.predict(self, prompt, max_output_tokens, temperature, top_k, top_p)
    404 def predict(
    405         self,
    406         prompt: str,
   (...)
    411         top_p: float = _DEFAULT_TOP_P,
    412 ) -> "MultiCandidateTextGenerationResponse":
    413     """Gets model response for a single prompt.
    414 
    415     Args:
   (...)
    423         A `TextGenerationResponse` object that contains the text produced by the model.
    424     """
--> 426     return self._batch_predict(
    427         prompts=[prompt],
    428         max_output_tokens=max_output_tokens,
    429         temperature=temperature,
    430         top_k=top_k,
    431         top_p=top_p,
    432     )

File /usr/local/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:516, in TextGenerationModel._batch_predict(self, prompts, max_output_tokens, temperature, top_k, top_p)
    483 """Gets model response for a single prompt.
    484 
    485 Args:
   (...)
    493     A list of `TextGenerationResponse` objects that contain the texts produced by the model.
    494 """
    495 # instances = [{"content": str(prompt)} for prompt in prompts]
    496 # prediction_parameters = {
    497 #     "temperature": temperature,
   (...)
    513 #     for prediction in prediction_response.predictions
    514 # ]
--> 516 json_prediction_response = self._dlai_custom_api(prompts[0], temperature, top_p, top_k, max_output_tokens)
    517 prediction_response = _build_dict_to_prediction(json_prediction_response["prediction_response"][0])
    518 # return [
    519 #     TextGenerationResponse(
    520 #         text=prediction_response["text"],
    521 #         _prediction_response=prediction_response[""],
    522 #     )
    523 # ]

File /usr/local/lib/python3.10/site-packages/vertexai/language_models/_language_models.py:461, in TextGenerationModel._dlai_custom_api(self, prompt, temperature, top_p, top_k, max_output_tokens)
    456 headers = {
    457     'Content-Type': 'application/json',
    458     'Authorization': f'Bearer {API_KEY}'
    459 }
    460 response = requests.request("POST", url, headers=headers, data=json.dumps(payload))
--> 461 return response.json()

File /usr/local/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Extra data: line 1 column 5 (char 4)

Try changing the deployed model; some of the older models may not work as intended.
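If your notebook lets you choose which tuned model to load, something along these lines may help. This is only a sketch: the base model name is a placeholder, and whether these helpers work here depends on your vertexai SDK version and on how the course has patched it.

from vertexai.language_models import TextGenerationModel

# Placeholder base model name -- use whichever foundation model the lesson is built on.
base_model = TextGenerationModel.from_pretrained("text-bison@001")

# List the tuned models available in the project and pick a different one to try.
tuned_model_names = base_model.list_tuned_model_names()
print(tuned_model_names)

# Assumes PROMPT is already defined in the notebook, as in cell 13.
deployed_model = TextGenerationModel.get_tuned_model(tuned_model_names[0])
response = deployed_model.predict(PROMPT)
print(response.text)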

I'm getting the same error. @aledesa, did you find a solution?

Hi,

No, I haven't tried again since.

Good luck.

I was sort of thinking this was a designed bug. The JSON string to be decoded is wrapped in non-JSON characters, so consider changing the prompt to ensure the JSON string is not wrapped in anything.
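For anyone still stuck, here is a minimal diagnostic sketch along those lines, assuming you can reproduce the request outside the SDK. The URL, API key, and payload below are placeholders, not the course's real values:

import json
import requests

# Placeholders -- substitute the endpoint, API key, and payload your notebook actually sends.
url = "https://example.com/dlai-proxy/predict"
headers = {"Content-Type": "application/json", "Authorization": "Bearer <API_KEY>"}
payload = {"prompt": "YOUR PROMPT"}

response = requests.post(url, headers=headers, data=json.dumps(payload))

# "Extra data: line 1 column 5 (char 4)" means the decoder found valid JSON that ended
# at character 4 and then hit extra characters, so inspect the raw body first.
print(response.status_code)
print(repr(response.text[:200]))

try:
    data = response.json()
except requests.exceptions.JSONDecodeError as err:
    print(f"Body is not clean JSON: {err}")

If the printed body shows the JSON wrapped in extra characters (or an HTML error page), that confirms the problem is on the endpoint side rather than in your prompt.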