For the SimpleSequentialChain example, the first part is:
llm = ChatOpenAI(temperature=0.9)
# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)
The second part is:
# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Where is the result of the first prompt being assigned to the variable “{company_name}”?
I have the same question; I have tested the 3rd prompt like this, without assigning {company_description}:
third_prompt = ChatPromptTemplate.from_template(
    "Write a 10 words slogan for the description:{company_description}"
)
chain_three = LLMChain(llm=llm, prompt=third_prompt)
And the result is:
Royal Bedding Co.
Royal Bedding Co. is a luxurious bedding company that provides high-quality bedding products for an exceptional sleeping experience.
“Indulge in luxury: Sleep better with Royal Bedding Co.”
My thought is that LLMs were already pre-trained on a large quantity of code, so they have learnt that we are in a "chain" and know what we want to achieve. Based on this context, the AI can do the following without us assigning anything.
Correct me if I'm wrong. @gent.spah @ai_curious
BTW, this post should be moved under LangChain for LLM Application Development.
What do you mean? I am not understanding it fully.
Ah huh, this is the background:
It’s in lesson 4 of LangChain for LLM Application Development. The SimpleSequentialChain code demo is as follows:
from langchain.chains import SimpleSequentialChain
llm = ChatOpenAI(temperature=0.9)
# prompt template 1
first_prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe \
    a company that makes {product}?"
)
# Chain 1
chain_one = LLMChain(llm=llm, prompt=first_prompt)
# prompt template 2
second_prompt = ChatPromptTemplate.from_template(
    "Write a 20 words description for the following \
    company:{company_name}"
)
# chain 2
chain_two = LLMChain(llm=llm, prompt=second_prompt)
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True)
overall_simple_chain.run(product)
The {product} in first_prompt is from overall_simple_chain.run(product). But how about {company_name} in second_prompt? It's not assigned anywhere.
Yeah, it's counterintuitive, but what I think is happening here is that you are creating chains linked together: chain_one = LLMChain(llm=llm, prompt=first_prompt) is linked with chain_two = LLMChain(llm=llm, prompt=second_prompt), and there is a memory of prompts here, so there is no need to go through all the steps manually.
I agree with you; they're already chained, so the input of the next chain is the output of the previous chain by default.
I think it's because of LangChain here rather than the LLM itself.
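To illustrate that point with a pure-Python mock (no LangChain, no LLM; the class, templates, and inputs here are made up for demonstration): in SimpleSequentialChain-style piping, each step has exactly one input and one output, so the output of one step is simply fed into the next step's single template slot. The name of the placeholder ({company_name}, {company_description}, anything) never has to be "assigned" because it is the only slot available.

```python
class MockChain:
    """Stand-in for an LLMChain: just fills its single template variable."""
    def __init__(self, template, var_name):
        self.template = template
        self.var_name = var_name  # the {placeholder} name, e.g. 'company_name'

    def run(self, text):
        # The variable NAME is irrelevant to the piping; only the
        # chain's position in the sequence matters.
        return self.template.format(**{self.var_name: text})


def run_sequential(chains, initial_input):
    """Pipe each chain's output into the next, SimpleSequentialChain-style."""
    current = initial_input
    for chain in chains:
        current = chain.run(current)
    return current


chain_one = MockChain("name for a company that makes {product}", "product")
chain_two = MockChain("description of {company_name}", "company_name")

result = run_sequential([chain_one, chain_two], "sheets")
# chain_two received chain_one's output even though nothing ever
# explicitly assigned a value to 'company_name'.
```

Running this, `result` is "description of name for a company that makes sheets": {company_name} was filled purely by position in the chain.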
I'm still looking for the source code to see the exact mechanism, but I can't believe it is a coincidence that the first prompt in the chain asks the LLM to make up a name for a company, and the second prompt in the chain uses {company_name}. There are some hints about aligning output_variables and input_variables here: Sequential | 🦜️🔗 Langchain
SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next. (my emphasis added)
If you print out the overall_simple_chain object before calling run(), you can see output_key='text' from the first prompt, immediately followed by PromptTemplate(input_variables=['company_name']) for the second prompt. I assume that in the source code of SimpleSequentialChain we could see the actual mapping/assignment happening.
EDIT:
yeah, it ain’t rocket science…
https://github.com/hwchase17/langchain/blob/master/langchain/chains/sequential.py