In Lesson 3, when running the
`chain.run("Why does every cell in our body contain DNA?")`
command, I got the following error:
`ValueError: Received invalid destination chain name 'biology'`
I thought it would be directed to the default chain, but that didn't happen.
What I've tried:
- I changed 'DEFAULT' to 'default' in MULTI_PROMPT_ROUTER_TEMPLATE (two occurrences)
- I added a space after 'if the input is not'
- I added a `"` after `json` in MULTI_PROMPT_ROUTER_TEMPLATE
I modified this part of MULTI_PROMPT_ROUTER_TEMPLATE and it worked:

```
REMEMBER: "destination" MUST be one of the candidate prompts
specified below. If the input is not
well suited for any of the candidate prompts, it should be "default".
```
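If you don't want to retype the whole template, you can patch the stock wording at runtime with a string replace. A minimal sketch; the ORIGINAL sentence below is the stock wording quoted later in this thread, so verify it against your installed langchain version before relying on the replace matching:

```python
# Stock instruction from langchain's MULTI_PROMPT_ROUTER_TEMPLATE
# (verify against your installed version -- replace() is a no-op on a mismatch).
ORIGINAL = (
    '"destination" MUST be one of the candidate prompt\n'
    'names specified below OR it can be "DEFAULT" if the input is not\n'
    'well suited for any of the candidate prompts.'
)
# Stricter wording from the fix above (note the lowercase "default").
PATCHED = (
    '"destination" MUST be one of the candidate prompts\n'
    'specified below. If the input is not\n'
    'well suited for any of the candidate prompts, it should be "default".'
)

def patch_router_template(template: str) -> str:
    """Swap the routing instruction for the stricter wording."""
    return template.replace(ORIGINAL, PATCHED)
```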
I know it's been a while since this question was asked, but if anyone is still having this issue even after trying both the solution in the question and the one in the answer (like me), what helped in my case was changing the following in MULTI_PROMPT_ROUTER_TEMPLATE:
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ name of the prompt to use or "default"
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt
names specified in the CANDIDATE PROMPTS section. If it is
any other word than a candidate prompt, return "default".
REMEMBER: "next_inputs" can just be the original input
if you don't think any modifications are needed.
The first option is to change the model to a smarter one; I used gpt-4-turbo and it works. I also tried cmd-r from Cohere with no success.
The second option is to use the silent_errors=True flag in the RouterChain constructor; it simply calls the default chain if an incorrect destination was selected by the LLM (check file /chains/router/base.py:59).
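The fallback that silent_errors enables can be sketched in plain Python. This mirrors the dispatch logic described above rather than calling LangChain itself, and the function name is illustrative:

```python
def route_with_fallback(destination, destination_chains, default_chain,
                        silent_errors=True):
    """Mimic the router dispatch: an unknown destination either falls
    back to the default chain (silent_errors=True) or raises the
    ValueError seen in the question (silent_errors=False)."""
    if destination in destination_chains:
        return destination_chains[destination]
    if silent_errors:
        return default_chain  # this is what silent_errors=True buys you
    raise ValueError(
        f"Received invalid destination chain name {destination!r}")
```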
I had the issue and changing the prompt did not work. The only thing that worked was changing to the gpt-4-turbo model.
Maybe a line/note should be added that recommends using gpt-4.
I got the same error. I think the model didn't produce the right response in the first step.
I tried the original question, and it threw an error.
I tried a similar question, "What is a DNA?", and it ran correctly.
For the original question, the model somehow responded "biology" in the first chain and killed the process. For the second question, the model responded "default" and it worked.
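That failure mode can be reproduced outside LangChain: the router's JSON reply is parsed and its "destination" is checked against the registered chain names, and "biology" is not one of them. A self-contained sketch of that check (illustrative function, not the library's parser):

```python
import json

def parse_route(llm_output, known_destinations):
    """Parse the router LLM's markdown/JSON reply and map any
    unregistered destination to the default route (None)."""
    text = llm_output.strip()
    # Strip an optional ```json ... ``` fence around the object.
    if text.startswith("```"):
        text = text.strip("`").strip()
        if text.startswith("json"):
            text = text[4:]
    route = json.loads(text)
    dest = route["destination"]
    # "biology" is not a registered chain name, so treat it as default.
    if dest not in known_destinations or dest.upper() == "DEFAULT":
        route["destination"] = None
    return route
```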
I created a VerifiedMultiPromptChain class that extends MultiPromptChain to add a simple verification step before routing inputs to destination chains.

```python
from langchain.chains.router import MultiPromptChain

class VerifiedMultiPromptChain(MultiPromptChain):
    def _call(self, inputs):
        route = self.router_chain(inputs)
        destination = route['destination']
        next_inputs = route['next_inputs']
        # Simple verification step
        if destination not in self.destination_chains:
            print(f"Invalid destination: {destination}. Routing to DEFAULT.")
            destination = "DEFAULT"
        if destination == "DEFAULT":
            return self.default_chain(next_inputs, callbacks=self.callbacks)
        return self.destination_chains[destination](next_inputs, callbacks=self.callbacks)
```
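The verification step can be exercised without LangChain by stubbing the router and the chains as plain callables. StubRouter and verified_call are illustrative names mirroring the _call above, not LangChain APIs:

```python
class StubRouter:
    """Stands in for router_chain: always returns a fixed route."""
    def __init__(self, destination, next_inputs):
        self._route = {"destination": destination, "next_inputs": next_inputs}

    def __call__(self, inputs):
        return self._route

def verified_call(router, destination_chains, default_chain, inputs):
    """Same shape as VerifiedMultiPromptChain._call, minus callbacks."""
    route = router(inputs)
    destination = route["destination"]
    next_inputs = route["next_inputs"]
    if destination not in destination_chains:  # verification step
        destination = "DEFAULT"
    if destination == "DEFAULT":
        return default_chain(next_inputs)
    return destination_chains[destination](next_inputs)
```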
This doesn't seem to work when trying it out today, unfortunately, but what did work was putting all the requirements together:
````python
MULTI_PROMPT_ROUTER_TEMPLATE = """Given a raw text input to a
language model select the model prompt best suited for the input.
You will be given the names of the available prompts and a
description of what the prompt is best suited for.
"destination" MUST be one of the candidate prompt
names specified below OR it can be "DEFAULT" if the input is not
well suited for any of the candidate prompts.
You may also revise the original input if you think that revising
it will ultimately lead to a better response from the language model.
"next_inputs" can just be the original input
if you don't think any modifications are needed.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \ "DEFAULT" or name of the prompt to use in {destinations}
"next_inputs": string \ a potentially modified version of the original input
}}}}
```
REMEMBER:
"destination" - if the input is not well-suited for any of the candidate prompts, set it as "DEFAULT".
"""
````
Then it works. Output:

```
Entering new MultiPromptChain chain...
None: {'input': 'Why does every cell in our body contain DNA?'}
Finished chain.
```
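On the quadruple braces in these templates: the string is formatted twice (once when the destinations list is filled in, once when the prompt template fills in the input), so `{{{{` survives the first pass as `{{` and only becomes a literal `{` in the final prompt. A sketch with a cut-down illustrative template, not the real one:

```python
TEMPLATE = (
    "Select a destination for the input.\n"
    "```json\n"
    "{{{{\n"
    '"destination": string \\ "DEFAULT" or one of {destinations}\n'
    "}}}}\n"
    "```\n"
    "<< INPUT >>\n{{input}}"
)

# First pass: fill in the destinations list ({{{{ -> {{).
router_template = TEMPLATE.format(destinations="physics, biology")
# Second pass: fill in the user input ({{ -> literal {).
prompt = router_template.format(input="Why does every cell contain DNA?")
```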