L3-Chains cell 74 fails

Hi there,

I am currently taking the short course on LangChain and working through the videos and the accompanying Jupyter notebooks.

When running cell 74 and the cells that follow it, I get this error:

---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:80, in RouterOutputParser.parse(self, text)
     79 expected_keys = ["destination", "next_inputs"]
---> 80 parsed = parse_json_markdown(text, expected_keys)
     81 if not isinstance(parsed["destination"], str):

File /usr/local/lib/python3.9/site-packages/langchain/output_parsers/structured.py:27, in parse_json_markdown(text, expected_keys)
     26 if "```json" not in text:
---> 27     raise OutputParserException(
     28         f"Got invalid return object. Expected markdown code snippet with JSON "
     29         f"object, but got:\n{text}"
     30     )
     32 json_string = text.split("```json")[1].strip().strip("```").strip()

OutputParserException: Got invalid return object. Expected markdown code snippet with JSON object, but got:
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[74], line 1
----> 1 chain.run("What is black body radiation?")

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs)
    234     if len(args) != 1:
    235         raise ValueError("`run` supports only one positional argument.")
--> 236     return self(args[0], callbacks=callbacks)[self.output_keys[0]]
    238 if kwargs and not args:
    239     return self(kwargs, callbacks=callbacks)[self.output_keys[0]]

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/base.py:72, in MultiRouteChain._call(self, inputs, run_manager)
     70 _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
     71 callbacks = _run_manager.get_child()
---> 72 route = self.router_chain.route(inputs, callbacks=callbacks)
     74 _run_manager.on_text(
     75     str(route.destination) + ": " + str(route.next_inputs), verbose=self.verbose
     76 )
     77 if not route.destination:

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/base.py:26, in RouterChain.route(self, inputs, callbacks)
     25 def route(self, inputs: Dict[str, Any], callbacks: Callbacks = None) -> Route:
---> 26     result = self(inputs, callbacks=callbacks)
     27     return Route(result["destination"], result["next_inputs"])

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:57, in LLMRouterChain._call(self, inputs, run_manager)
     53 _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
     54 callbacks = _run_manager.get_child()
     55 output = cast(
     56     Dict[str, Any],
---> 57     self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs),
     58 )
     59 return output

File /usr/local/lib/python3.9/site-packages/langchain/chains/llm.py:238, in LLMChain.predict_and_parse(self, callbacks, **kwargs)
    236 result = self.predict(callbacks=callbacks, **kwargs)
    237 if self.prompt.output_parser is not None:
--> 238     return self.prompt.output_parser.parse(result)
    239 else:
    240     return result

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:97, in RouterOutputParser.parse(self, text)
     95     return parsed
     96 except Exception as e:
---> 97     raise OutputParserException(
     98         f"Parsing text\n{text}\n raised following error:\n{e}"
     99     )

OutputParserException: Parsing text
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
 raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, but got:
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}

To me it looks like ChatGPT is not respecting the formatting requirements specified in the prompt, since I haven’t changed anything in the notebook code. If so, is there any advice on working around this problem?

1 Like

Getting the same error

Found the issue, actually! The closing ``` is missing at the end of the template string, right after ```json.

6 Likes

@ghizmodave @JasminH @joyu-ai @invited2 @minatu2d

The issue is not solved, @ghizmodave. Yes, adding the ``` after json does stop the first and second prompts in the multi-prompt chain from erroring out, but the third and final prompt still errors out.

As @JasminH noted, there is still a parser error when cell [34] runs:

chain.run("Why does every cell in our body contain DNA?")

File /usr/local/lib/python3.9/site-packages/langchain/output_parsers/structured.py:27, in parse_json_markdown(text, expected_keys)
     26 if "```json" not in text:
---> 27     raise OutputParserException(
     28         f"Got invalid return object. Expected markdown code snippet with JSON "
     29         f"object, but got:\n{text}"
     30     )
     32 json_string = text.split("```json")[1].strip().strip("```").strip()

Note Line 32 above in the error.
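In other words, parse_json_markdown in this version of LangChain only accepts a reply that is wrapped in a ```json fence; a bare JSON object, which is what the model returns here, fails the check on line 26 before line 32 is ever reached. A minimal sketch of that check, mirroring the 0.0.x code in the traceback (this just shows the failure mode, it is not a fix):

# The model's raw reply from the traceback above: valid JSON, but no ```json fence.
text = """{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}"""

# Same guard as structured.py lines 26-27: without a ```json fence the parser
# bails out before it ever reaches the split on line 32.
if "```json" not in text:
    print("OutputParserException: expected a markdown ```json code snippet")
else:
    json_string = text.split("```json")[1].strip().strip("```").strip()
    print(json_string)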

This is a beta course, so I believe the goal is to learn, run, and test everything so that it works.

Currently that last cell is not working. If this course is later published on Coursera, @Coursera_QA_Team, there will be issues.

@Robert.Thompson
I got the same error at the final prompt.
I changed every “DEFAULT” to “default” in the template.
Now it works.

Thanks @minatu2d
So MULTI_PROMPT_ROUTER_TEMPLATE had some issues.
I can understand the missing ``` after ```json throwing an ambiguous exception, but wouldn’t it be appropriate for LangChain to recognize the “DEFAULT” vs “default” case difference?
Is this part of what I’ve heard about LangChain being a bit buggy?
Perhaps the exception thrown for the default case could have been handled better?

In any case, thanks.

I fixed the issue by changing the last line of the template from:

<< OUTPUT (remember to include the ```json)>>

To:

<< OUTPUT (remember to include the ```json before the data)>>

It required a bit of experimentation; other changes failed.
I believe the issue is that the underlying GPT-3.5 model changes over time, so responses are no longer the same as they were at the time of recording. Pinning the LLM initialization to the specific model version used when developing the course may help with stability.
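If you want to try pinning it, something along these lines in the cell that creates the chat model should do it; note that gpt-3.5-turbo-0301 is only an example snapshot name, and the snapshots available to you may differ:

from langchain.chat_models import ChatOpenAI

# Use a dated snapshot instead of the moving "gpt-3.5-turbo" alias so the
# router's output format stays the same between runs of the notebook.
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-0301")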

So the complete solution based on the comments above (@ghizmodave and @Robert.Thompson), for anyone looking, is in the cell containing MULTI_PROMPT_ROUTER_TEMPLATE (see the sketch after the list):

  1. update the last line from << OUTPUT (remember to include the ```json)>>""" to << OUTPUT (remember to include the ```json```)>>""", i.e. add the closing ``` after json
  2. update two instances of “DEFAULT” to “default”
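A sketch of how the relevant parts of the template read with both edits applied, based on the course notebook; the surrounding wording in your copy may differ, and the second “DEFAULT” sits a few lines further down in the same string:

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \ name of the prompt to use or "default"
    "next_inputs": string \ a potentially modified version of the original input
}}}}
```

...

<< OUTPUT (remember to include the ```json```)>>"""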

I solved it by changing the FORMATTING part of the prompt text as follows:

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like the text between the 3 ticks:

{{{{
    "destination": string \ name of the prompt to use or "DEFAULT"
    "next_inputs": string \ a potentially modified version of the original input
}}}}

Doesn’t really matter. All the short courses are responding with a “Monthly quota exceeded” error.
I’ve been waiting a couple of weeks now; I waited until we got into August.
Same results.

I got the problem solved by modifying the template to:

<< OUTPUT (remember to include the ```json) in the first line and ``` in the last line>>"""
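Whichever variant of the template tweak you use, re-running the inputs that were failing earlier in the thread is a quick way to confirm the router now parses cleanly:

# The two questions that were erroring out in cells [74] and [34] above.
chain.run("What is black body radiation?")
chain.run("Why does every cell in our body contain DNA?")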