Chaining together multi-step prompts

I've found that asking ChatGPT a very complex question works better when I break the prompt into multiple smaller steps rather than giving it one large, complicated prompt.

Is the best way to do this with LangChain to use ConversationBufferMemory and ConversationalRetrievalChain?

In Harrison’s tutorial about using a memory buffer, the use case is a back-and-forth between human and assistant.

In my use case, I want the user to click a single prompt and get a single response. But under the hood, there would be a back-and-forth between system and assistant using ConversationalRetrievalChain.

I’m curious to know if I’m on the right track. Thanks
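For context, the "one click, hidden multi-step" flow described above can be sketched without any framework at all: one user action triggers a fixed sequence of model calls, each step's output feeding the next step's prompt. This is only an illustrative sketch; `call_llm`, `answer_click`, and the step templates are hypothetical names standing in for a real model call and real prompts.

```python
# Hypothetical sketch: one user click runs several hidden LLM steps in sequence.
# call_llm is a placeholder for a real model call (e.g. the OpenAI API).
def call_llm(prompt: str) -> str:
    return f"response to: {prompt}"

def answer_click(question: str) -> str:
    """Run a fixed multi-step pipeline for a single user action."""
    steps = [
        "Break this question into sub-questions: {q}",
        "Answer each sub-question: {q}",
        "Combine the answers into one response: {q}",
    ]
    result = question
    for template in steps:
        # Each step's output becomes the next step's input.
        result = call_llm(template.format(q=result))
    return result
```

The user only ever sees the final `result`; the intermediate back-and-forth stays under the hood.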

Hi @langchainfan!

When ChatGPT first came out, I started playing with automating the software engineering process and created this script, which uses SequentialChains to break that process into multiple steps. This chain doesn’t repeat steps based on any of the generated output, but an Agent could be created to do that.

# Script to "Automate" The Software Engineering Process
# Give it a prompt of what you want to build and it will try to build it.
from langchain import OpenAI, PromptTemplate, LLMChain

# The LLM shared by every chain below
gpt = OpenAI(temperature=0)

# The Prompts for some different Steps to Software Engineering
software_architect_template = """
I want you to act as a software architect; you create diagrams as code using {architecture_framework}.
Write code that diagrams this software: {software_description}
"""

software_developer_template = """
I want you to act as a software developer; you develop code in {programming_language}.
Write code that fits this architecture code: {architecture_code}
"""

ui_developer_template = """
I want you to act as a UI developer; you develop code in {frontend_framework}.
Write the code for the frontend that fits this software spec: {software_description}
"""

# Backend and frontend need separate templates because chains cannot reuse duplicate output keys
# See the chain structure below for details
backend_sdet_template = """
I want you to act as a software development engineer in test; you critique and test code in {backend_test_framework}.
Develop test code for this piece of software: {backend_code}
"""

backend_technical_writer_template = """
I want you to act as a technical writer. You create technical documents in markdown about code.
Create documentation about this code: {backend_code}
"""

frontend_sdet_template = """
I want you to act as a software development engineer in test; you critique and test code in {frontend_test_framework}.
Develop test code for this piece of software: {frontend_code}
"""

frontend_technical_writer_template = """
I want you to act as a technical writer. You create technical documents in markdown about code.
Create documentation about this code: {frontend_code}
"""

# The Prompt Templates for the Prompts
software_architect_prompt = PromptTemplate(
    input_variables=["architecture_framework", "software_description"],
    template=software_architect_template,
)

software_developer_prompt = PromptTemplate(
    input_variables=["programming_language", "architecture_code"],
    template=software_developer_template,
)

ui_developer_prompt = PromptTemplate(
    input_variables=["frontend_framework", "software_description"],
    template=ui_developer_template,
)

backend_sdet_prompt = PromptTemplate(
    input_variables=["backend_test_framework", "backend_code"],
    template=backend_sdet_template,
)

backend_technical_writer_prompt = PromptTemplate(
    input_variables=["backend_code"],
    template=backend_technical_writer_template,
)

frontend_sdet_prompt = PromptTemplate(
    input_variables=["frontend_test_framework", "frontend_code"],
    template=frontend_sdet_template,
)

frontend_technical_writer_prompt = PromptTemplate(
    input_variables=["frontend_code"],
    template=frontend_technical_writer_template,
)

# Creating the Chains
# Add the prompt templates to the appropriate chains
from langchain.chains import SequentialChain

# Base Chains
software_architect_chain = LLMChain(llm=gpt, prompt=software_architect_prompt, output_key="architecture_code")
software_developer_chain = LLMChain(llm=gpt, prompt=software_developer_prompt, output_key="backend_code")
ui_developer_chain = LLMChain(llm=gpt, prompt=ui_developer_prompt, output_key="frontend_code")
backend_sdet_chain = LLMChain(llm=gpt, prompt=backend_sdet_prompt, output_key="backend_test_code")
frontend_sdet_chain = LLMChain(llm=gpt, prompt=frontend_sdet_prompt, output_key="frontend_test_code")
backend_technical_writer_chain = LLMChain(llm=gpt, prompt=backend_technical_writer_prompt, output_key="backend_documentation")
frontend_technical_writer_chain = LLMChain(llm=gpt, prompt=frontend_technical_writer_prompt, output_key="frontend_documentation")

# Composite Chains
backend_documentation_chain = SequentialChain(chains=[backend_sdet_chain, backend_technical_writer_chain], input_variables=["backend_code", "backend_test_framework"], output_variables=["backend_test_code", "backend_documentation"])
frontend_documentation_chain = SequentialChain(chains=[frontend_sdet_chain, frontend_technical_writer_chain], input_variables=["frontend_code", "frontend_test_framework"], output_variables=["frontend_test_code", "frontend_documentation"])
backend_chain = SequentialChain(chains=[software_developer_chain, backend_documentation_chain], input_variables=["programming_language", "backend_test_framework", "architecture_code"], output_variables=["backend_code", "backend_test_code", "backend_documentation"])
frontend_chain = SequentialChain(chains=[ui_developer_chain, frontend_documentation_chain], input_variables=["frontend_framework", "frontend_test_framework", "software_description"], output_variables=["frontend_code", "frontend_test_code", "frontend_documentation"])

# Total Chain
software_chain = SequentialChain(chains=[software_architect_chain, backend_chain, frontend_chain], input_variables=["architecture_framework", "programming_language", "frontend_framework", "frontend_test_framework", "backend_test_framework", "software_description"], output_variables=["architecture_code", "backend_code", "backend_test_code", "frontend_code", "frontend_test_code", "backend_documentation", "frontend_documentation"])

# Call Chains
artifacts = software_chain({"architecture_framework":"mermaid", "programming_language":"flutter", "frontend_framework":"flutter", "frontend_test_framework": "test.dart", "backend_test_framework":"test.dart", "software_description":"A website that prompts the user to enter text into a text field. When the generate button is pressed it sends the text to a server for processing. The website checks for grammar and spelling."})

# Print the results of the Sequential Chain

Oh OK, a simple Sequential chain: Sequential | 🦜️🔗 Langchain

That’s a lot simpler than what I was thinking! Thank you @SamReiswig :pray: