OutputParserException in Chains Notebook Cell [31], Video 11:50

Contrary to the video at 11:50, asking the following question via LangChain raises an OutputParserException, as shown below. Since this is a LangChain introduction course, any clues would be helpful!

Cell [31]: chain.run("What is black body radiation?")

> Entering new MultiPromptChain chain...
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:80, in RouterOutputParser.parse(self, text)
     79 expected_keys = ["destination", "next_inputs"]
---> 80 parsed = parse_json_markdown(text, expected_keys)
     81 if not isinstance(parsed["destination"], str):

File /usr/local/lib/python3.9/site-packages/langchain/output_parsers/structured.py:27, in parse_json_markdown(text, expected_keys)
     26 if "```json" not in text:
---> 27     raise OutputParserException(
     28         f"Got invalid return object. Expected markdown code snippet with JSON "
     29         f"object, but got:\n{text}"
     30     )
     32 json_string = text.split("```json")[1].strip().strip("```").strip()

OutputParserException: Got invalid return object. Expected markdown code snippet with JSON object, but got:
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[32], line 1
----> 1 chain.run("What is black body radiation?")

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:236, in Chain.run(self, callbacks, *args, **kwargs)
    234     if len(args) != 1:
    235         raise ValueError("`run` supports only one positional argument.")
--> 236     return self(args[0], callbacks=callbacks)[self.output_keys[0]]
    238 if kwargs and not args:
    239     return self(kwargs, callbacks=callbacks)[self.output_keys[0]]

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/base.py:72, in MultiRouteChain._call(self, inputs, run_manager)
     70 _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
     71 callbacks = _run_manager.get_child()
---> 72 route = self.router_chain.route(inputs, callbacks=callbacks)
     74 _run_manager.on_text(
     75     str(route.destination) + ": " + str(route.next_inputs), verbose=self.verbose
     76 )
     77 if not route.destination:

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/base.py:26, in RouterChain.route(self, inputs, callbacks)
     25 def route(self, inputs: Dict[str, Any], callbacks: Callbacks = None) -> Route:
---> 26     result = self(inputs, callbacks=callbacks)
     27     return Route(result["destination"], result["next_inputs"])

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:140, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)
--> 140     raise e
    141 run_manager.on_chain_end(outputs)
    142 return self.prep_outputs(inputs, outputs, return_only_outputs)

File /usr/local/lib/python3.9/site-packages/langchain/chains/base.py:134, in Chain.__call__(self, inputs, return_only_outputs, callbacks)
    128 run_manager = callback_manager.on_chain_start(
    129     {"name": self.__class__.__name__},
    130     inputs,
    131 )
    132 try:
    133     outputs = (
--> 134         self._call(inputs, run_manager=run_manager)
    135         if new_arg_supported
    136         else self._call(inputs)
    137     )
    138 except (KeyboardInterrupt, Exception) as e:
    139     run_manager.on_chain_error(e)

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:57, in LLMRouterChain._call(self, inputs, run_manager)
     53 _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
     54 callbacks = _run_manager.get_child()
     55 output = cast(
     56     Dict[str, Any],
---> 57     self.llm_chain.predict_and_parse(callbacks=callbacks, **inputs),
     58 )
     59 return output

File /usr/local/lib/python3.9/site-packages/langchain/chains/llm.py:238, in LLMChain.predict_and_parse(self, callbacks, **kwargs)
    236 result = self.predict(callbacks=callbacks, **kwargs)
    237 if self.prompt.output_parser is not None:
--> 238     return self.prompt.output_parser.parse(result)
    239 else:
    240     return result

File /usr/local/lib/python3.9/site-packages/langchain/chains/router/llm_router.py:97, in RouterOutputParser.parse(self, text)
     95     return parsed
     96 except Exception as e:
---> 97     raise OutputParserException(
     98         f"Parsing text\n{text}\n raised following error:\n{e}"
     99     )

OutputParserException: Parsing text
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
 raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, but got:
{
    "destination": "physics",
    "next_inputs": "What is black body radiation?"
}
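For anyone wondering what the parser is actually checking: in this version of LangChain, `parse_json_markdown` requires the model's reply to be wrapped in a ```json fence before it will load it, which is why a perfectly valid bare JSON object gets rejected. Here is a simplified, self-contained sketch of that check (not the exact library source, just the behavior visible in the traceback):

```python
import json


class OutputParserException(Exception):
    pass


def parse_json_markdown(text, expected_keys):
    # Simplified sketch of langchain.output_parsers.structured.parse_json_markdown
    # from this era of the library.
    if "```json" not in text:
        raise OutputParserException(
            "Got invalid return object. Expected markdown code snippet "
            f"with JSON object, but got:\n{text}"
        )
    # Pull out the text between the ```json fence and the closing ```
    json_string = text.split("```json")[1].strip().strip("```").strip()
    parsed = json.loads(json_string)
    for key in expected_keys:
        if key not in parsed:
            raise OutputParserException(f"Got invalid return object. Expected key `{key}`.")
    return parsed


bare = '{"destination": "physics", "next_inputs": "What is black body radiation?"}'
fenced = "```json\n" + bare + "\n```"

# The bare reply (what the model actually produced) is rejected...
try:
    parse_json_markdown(bare, ["destination", "next_inputs"])
except OutputParserException as e:
    print("rejected:", type(e).__name__)

# ...while the same JSON wrapped in a fence parses fine.
print(parse_json_markdown(fenced, ["destination", "next_inputs"]))
```

So the error is not about the JSON itself; it is purely about the missing ```json fence around it, which is why the template fixes discussed below work.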

I have the same error; I tried changing the output, but no luck.


I have the same error too. I tried reloading and running it again, but got the same error.

I was having a similar issue, so I used Colab to run the router chain example from the LangChain website, and it worked.


I was also getting the same error when I tried to use Router Chain.

Hi - I also had difficulty getting the code for this lesson to work.

In the video the presenter never scrolls right, so you cannot see the full code!

I found an almost identical example that DOES work for me:


I got a solution here.

The last line is missing a closing ```; it should be modified to:

<< OUTPUT (remember to include the ```json```)>>"""

But I still got errors at

chain.run("Why does every cell in our body contain DNA?")
OutputParserException: Parsing text
{
    "destination": "biology",
    "next_inputs": "Why does every cell in our body contain DNA?"
}
 raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, but got:
{
    "destination": "biology",
    "next_inputs": "Why does every cell in our body contain DNA?"
}

“biology” was not defined in the prompt template, so the LLM should have fallen back to the original (default) prompt, but it tried to categorize the question as “biology” anyway.


I’m getting the same issue. The first two prompts in the multi-prompt chain were fixed by changing the provided code and adding ``` to the end of ```json in the OUTPUT line. But like you, I also get an error on:

chain.run("Why does every cell in our body contain DNA?")

File /usr/local/lib/python3.9/site-packages/langchain/output_parsers/structured.py:27,
in parse_json_markdown(text, expected_keys)
       26 if "```json" not in text:
---> 27     raise OutputParserException(
      28         f"Got invalid return object. Expected markdown code snippet with JSON "
      29         f"object, but got:\n{text}"
      30     )
      32 json_string = text.split("```json")[1].strip().strip("```").strip()

Note line 32 above in the error. I’m not sure that adding the ``` to the json fence by modifying the code is the real issue.

Finally, I got it to work.
After changing the last line in MULTI_PROMPT_ROUTER_TEMPLATE from:

<< OUTPUT (remember to include the ```json)>>"""

to

<< OUTPUT (remember to include the ```json```)>>"""

I also changed all occurrences of “DEFAULT” to “default”.

Now it works.
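If you would rather not hand-edit the notebook string, both of those changes can be applied programmatically. A sketch, assuming MULTI_PROMPT_ROUTER_TEMPLATE is the template string defined in the notebook (a small stand-in fragment is used here so the snippet runs on its own):

```python
# Stand-in for the tail of the notebook's MULTI_PROMPT_ROUTER_TEMPLATE;
# in the notebook you would patch the real variable instead.
MULTI_PROMPT_ROUTER_TEMPLATE = (
    'REMEMBER: "destination" MUST be one of the candidate prompt \\\n'
    'names specified below OR it can be "DEFAULT"\n'
    "...\n"
    "<< OUTPUT (remember to include the ```json)>>"
)

# Fix 1: close the ```json fence in the OUTPUT reminder.
patched = MULTI_PROMPT_ROUTER_TEMPLATE.replace(
    "include the ```json)", "include the ```json```)"
)
# Fix 2: lowercase the fallback route name.
patched = patched.replace('"DEFAULT"', '"default"')

print(patched)
```

You would then build the router prompt from `patched` instead of the original template string.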

Hello, minatu2d. How did you reason about changing DEFAULT to default in the MULTI_PROMPT_ROUTER_TEMPLATE? What made you think that this would fix the problem? Many thanks.


I got a “biology” category in the result even though it was never defined. I think the LLM tried to classify the question into an undefined category. It seems the LLM did not understand the part of the prompt describing the “DEFAULT” route.


I had this issue too.

How I solved this problem:
Add a closing ``` after ```json, so the fence is enclosed, on the line below.

Line: << OUTPUT (remember to include the ```json)>>

Important: Once done with the change, restart the kernel and run the cells again

Thanks for this brother!

What does adding ``` to the end mean?

I got the same error. I added this instruction at the end of the prompt and it fixed the problem:
Format the output as markdown code snippet with JSON object

I borrowed this idea from here ( Self-querying with Chroma bug - Got invalid return object. Expected markdown code snippet with JSON object, but got ... · Issue #5552 · langchain-ai/langchain · GitHub )
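Another workaround in the same spirit is to make the parsing step itself tolerant, so it accepts either a fenced or a bare JSON object regardless of how the model formats its reply. This is a hypothetical helper, not a LangChain API; you would have to wire it in yourself via a custom output parser:

```python
import json
import re


def tolerant_parse(text):
    # Accept a ```json fenced block, a plain ``` fenced block, or bare JSON.
    # Non-greedy match is a simplification: it assumes a flat (non-nested) object,
    # which is all the router ever emits here.
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    return json.loads(payload)


bare = '{"destination": "physics", "next_inputs": "What is black body radiation?"}'
fenced = "```json\n" + bare + "\n```"

# Both shapes of model output now parse to the same dict.
print(tolerant_parse(bare) == tolerant_parse(fenced))
```

This sidesteps the brittleness entirely instead of relying on the model to always emit the fence.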

OK. Now this is exactly like alchemy.

I encountered two errors. After I modified two sentences in MULTI_PROMPT_ROUTER_TEMPLATE, the errors were gone. :smile:

The first one was chain.run("What is black body radiation?")
The error was
OutputParserException: Parsing text { "destination": "physics", "next_inputs": "What is black body radiation?" } raised following error: Got invalid return object. Expected markdown code snippet with JSON object, but got: { "destination": "physics", "next_inputs": "What is black body radiation?" }

I changed the line starting with << OUTPUT >> into

<< OUTPUT (remember to include the ```json```)>>"""

and it worked.

The second one was chain.run("Why does every cell in our body contain DNA?"). I noticed that the output message is > Entering new MultiPromptChain chain... biology: {'input': 'Why does every cell in our body contain DNA?'},
while in the video it is None: {'input': 'Why does every cell in our body contain DNA?'}.

I guess the model doesn’t “feel” compelled to change the subject into None when it is not one of the candidate subjects. So I replaced

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below OR it can be "DEFAULT" 

with

REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below, otherwise it must be "DEFAULT"

My intuition is that “can be” is somewhat weak, so I changed it to “must be”, and it did work!

However, after I shut down the kernel and restarted it, I found that the original prompt also worked!

So apparently this stuff has some randomness: a prompt that works once doesn’t necessarily always work, and vice versa.

Wow, it was really helpful. I was stuck with a different problem as the LLM output was not properly formatted as JSON. But your prompt solved it. Thanks a ton!

Sharing a solution:
I initially got the error on the biology question due to:

biology: {'input': 'Why does every cell in our body contain DNA?'}

I had to do two things, based on people’s suggestions above:

  • use default instead of DEFAULT in MULTI_PROMPT_ROUTER_TEMPLATE; this seems to be the main fix
  • update the first REMEMBER to:
REMEMBER: "destination" MUST be one of the candidate prompt \
names specified below, otherwise it MUST be "default" if the input is not \
well suited for any of the candidate prompts.

After this it worked several times, but I still got the same error later. Then I restarted the kernel, and I haven’t seen the error since.