Langchain and Prompts
What are Prompt templates?
LangChain prompt templates are a feature provided by the LangChain library that allow you to create structured and reusable prompts for interacting with large language models (LLMs). A prompt template typically consists of three main components:
- Input Variables: These are the placeholders within the template that will be replaced with actual values when the template is used. They are denoted by curly braces, e.g., {product} or {query}.
- Template String: This is the core structure of the prompt, containing the placeholders for the input variables.
- Prompt Template Object: This is the Python object that encapsulates the input variables and template string, allowing you to easily format the prompt and pass it to the LLM.
Why use prompts?
Prompts are essential in LangChain for guiding language models to generate specific outputs based on user-defined instructions. Without prompts, the language model would lack direction and context, leading to unpredictable or irrelevant responses.
Purpose of Prompts:
- Prompts serve as a bridge between human intent and machine-generated responses in language models. They provide context, refine outputs, and modify behaviors based on the instructions given.
- Well-constructed prompts consist of instructions, external information or context, user input or query, and an output indicator to guide the model's response.
Significance of Prompts:
- In the realm of Large Language Models (LLMs), prompts are crucial for instructing the model on what to output in response to specific inputs. Good prompts lead to accurate and meaningful outputs, while bad prompts can result in subpar responses.
Functionality of Prompts:
- LangChain's PromptTemplates offer a structured, reusable, and dynamic way to interact with language models. They allow for setting context, defining instructions, adjusting content dynamically, and enhancing model responses with few-shot examples.
Consequences of Not Using Prompts:
- Without prompts, language models lack guidance on how to interpret and respond to inputs effectively. This can lead to erratic or nonsensical outputs as the model operates without a clear directive.
Prompt Template vs Chat Prompt Template
A chat prompt template is a structured format used in chat models to guide the generation of text within a conversational context. Unlike traditional prompt templates that take a single string as input, chat prompt templates are designed for chat-based models that require a list of messages to generate responses.
Key Differences
- Input Format: `PromptTemplate` takes a single string as input, which is the prompt. `ChatPromptTemplate` takes a list of chat messages as input, where each message can be of a different type (system, human, AI, etc.).
- Message Formatting: `PromptTemplate` formats the prompt string using the provided input variables. `ChatPromptTemplate` formats each message in the list using the provided input variables.
- Use Cases: `PromptTemplate` is more suitable for reusable prompts that can be used across different models and applications. `ChatPromptTemplate` is designed specifically for chat-based models, where the prompt consists of a sequence of messages.
- Message Types: `ChatPromptTemplate` supports different types of chat messages, such as `SystemMessage`, `HumanMessage`, `AIMessage`, and custom message types. `PromptTemplate` does not have this specialized message-type support.
- Placeholder Types: `ChatPromptTemplate` allows the use of `MessagesPlaceholder` to represent a list of messages (for example, chat history) as a placeholder. `PromptTemplate` uses a simpler string-based placeholder system.
Also, what's the necessity for PromptTemplate.format()?
The .format() method of PromptTemplate objects is used to dynamically fill in the placeholders within a prompt template with specific values. It is crucial for creating dynamic and adaptable prompts in LangChain, allowing you to:
- Customize: By replacing the placeholders with specific values, you can create prompts tailored to different scenarios or user inputs.
- Improve Reusability: Prompt templates with placeholders can be reused across multiple applications or models, reducing the need to write new prompts from scratch.
- Enhance Flexibility: The ability to dynamically format prompts enables you to build more flexible and adaptable language model-powered applications.
- Maintain Consistency: By using a consistent prompt template, you can ensure that the language model's responses adhere to a specific format or structure, making the output more predictable and usable.
Examples for Langchain Prompts 👨🏽💻
Requirements & Setup:
- Python 3.8.1 or higher
- Create and activate a virtual environment:
virtualenv myenv
(OR)
python -m venv myenv
Then activate it:
myenv\Scripts\activate (Windows)
source myenv/bin/activate (macOS/Linux)
Code
from dotenv import load_dotenv
import os
from langchain_community.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
import streamlit as st

load_dotenv()

# Read the value of OPENAI_API_KEY from the .env file
openai_api_key = os.getenv('OPENAI_API_KEY')

# Set the title for the page
st.title('An example of PromptTemplate in a Langchain Application')

# Set an input box
input_text = st.text_input('Enter a financial concept...')

# Example Prompt - 1
demo_template = '''I want you to assume the role of an acting financial advisor and accountant for people.
In an easy way, explain the basics of {financial_concept}.'''

prompt = PromptTemplate(
    input_variables=['financial_concept'],
    template=demo_template
)

# Preview the filled-in prompt (.format returns the formatted string)
print(prompt.format(financial_concept='income tax'))

llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)
chain1 = LLMChain(llm=llm, prompt=prompt)

if input_text:
    st.write(chain1({'financial_concept': input_text}))
Output
Figure 2: Output of a simple Langchain app leveraging PromptTemplate
Code
from dotenv import load_dotenv
import os
from langchain_community.llms import OpenAI
from langchain.prompts import PromptTemplate, FewShotPromptTemplate
from langchain.chains import LLMChain
import streamlit as st

load_dotenv()

# Read the value of OPENAI_API_KEY from the .env file
openai_api_key = os.getenv('OPENAI_API_KEY')

# Set the title for the page
st.title('An example of FewShotPrompt in a Langchain Application')

# Set an input box
input_text = st.text_input('Ask a question...')

llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)

# Example 2 - FewShotPromptTemplate
template = "Question: {Question}\nAnswer: {Answer}"
example_prompt = PromptTemplate(
    input_variables=["Question", "Answer"],
    template=template,
)

examples = [
    {"Question": "What is the Capital of Thailand?", "Answer": "Bangkok"},
    {"Question": "Where's the deepest part of the ocean located?", "Answer": "Mariana Trench"},
]

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Answer any question that's addressed to you.",
    suffix="Question: {Question}\n",
    input_variables=['Question'],
    example_separator="\n",
)

few_shot_chain = LLMChain(llm=llm, prompt=few_shot_prompt)

if input_text:
    st.write(few_shot_chain({'Question': input_text}))