As a Python programmer, you might be looking to incorporate large language models (LLMs) into your projects – anything from text generators to trading algorithms. That’s where this comprehensive LangChain Python guide comes in, tailored to fit both novices and seasoned coders.
Unlike dense official documentation or confusing tutorials, I bring a simplified approach to this tutorial, drawing on years of experience in making complex concepts digestible. With easy-to-follow instructions and lucid examples, I’ll guide you through the intricate world of LangChain, unlocking its immense potential.
Don’t delay; start leveraging LangChain to build innovative applications today.
What Is LangChain?
LangChain is a software development framework that makes it easier to create applications using large language models (LLMs). It’s an open-source tool with a Python and JavaScript codebase. LangChain allows developers to combine LLMs like GPT-4 with external data, opening up possibilities for various applications such as chatbots, code understanding, summarization, and more.
LangChain Pros and Cons
Pros:
- Freely available
- Open source
- Supports all major LLMs
- Variety of modules to perform common tasks
Cons:
- Limited support for languages other than Python and JavaScript
- Security concerns over the handling of sensitive information
LangChain Pricing
LangChain itself is a free, open-source framework. However, to use some of the supported large language models, you must obtain API keys from the developers of those models, which may involve paid subscriptions.
Getting Started with LangChain
Getting started with the LangChain framework is straightforward. You can download the LangChain Python package, import one or more of the LangChain modules, and start building Python applications using large language models.
Installation
You can install the LangChain package via the following pip command.
!pip install langchain
Alternatively, if you are using the Anaconda distribution of Python, you can install LangChain via the following conda command.
!conda install langchain -c conda-forge
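Once the installation completes, you can verify it by importing the package and printing its version number. (This is just a quick sanity check; the version you see will depend on when you install.)
import langchain
print(langchain.__version__)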
A Simple Example
LangChain simplifies the use of large language models by offering modules that cover different functions. Later on, I’ll provide detailed explanations of each module. In this section, let’s call a large language model for text generation. The general principle for calling different modules remains consistent throughout.
You must perform the following steps to call an LLM via LangChain in Python.
Step 1: Install Python Package
For instance, for OpenAI LLMs, you need to install the OpenAI library.
!pip install openai
Step 2: Get API Key
Get an API key for the corresponding large language model. You can access some LLMs without an API key. The following script reads the OpenAI API key from an environment variable.
import os
api_key = os.getenv('OpenAI_KEY')
Step 3: Import LLM Model
Import an LLM from LangChain. For example, the following script imports the OpenAI model. Depending on the model, you may need to initialize it with your API key.
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key = api_key, temperature=0.9)
Step 4: Pass Prompt
Pass the prompt to the LLM object you just created.
prompt = "Suggest me a good name for an ice cream parlour that is located on a beach!"
print(llm(prompt))
And that’s it. You can see how simple it is to call an LLM with LangChain. You can replace OpenAI with other LangChain-supported large language models.
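For instance, here is a hedged sketch of the same call against a Hugging Face Hub model via LangChain’s HuggingFaceHub wrapper. This assumes you have a Hugging Face Hub API token stored in an environment variable; the repo_id shown is just an illustrative choice.
from langchain.llms import HuggingFaceHub
# Assumes a Hugging Face Hub API token in the HUGGINGFACEHUB_API_TOKEN environment variable.
hub_llm = HuggingFaceHub(repo_id="google/flan-t5-xl",
                         huggingfacehub_api_token=os.getenv('HUGGINGFACEHUB_API_TOKEN'))
print(hub_llm(prompt))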
Let’s now see what other functionalities you can achieve with LangChain modules.
LangChain Modules
LangChain groups its functionality into the modules listed below:
- Models
- Prompts
- Chains
- Agents
- Memory
- Document Loaders and Indexes
Models
LangChain supports three types of models:
- Large Language Models
- Chat Models
- Text Embedding Models
Large Language Models
Large language models (LLMs) are the simplest of LangChain models and the backbone of many other LangChain modules. An LLM takes user input as text and returns a response as text.
LangChain supports a large number of LLMs. We will see a couple of examples in this section.
OpenAI Example
I have already explained in the basic example section how to use OpenAI LLM. If you want to learn more about directly accessing OpenAI functionalities, check out our OpenAI Python Tutorial.
Let’s dig a little further into using OpenAI in LangChain.
You can pass a specific OpenAI model name to the OpenAI object from the langchain.llms module. In the following example, we pass the text-davinci-003 model, which is also the default model.
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key = api_key,
model_name="text-davinci-003")
print(llm("Can you tell me a riddle about water?"))
You can pass multiple text prompts to an OpenAI model via the generate() method. For example, the following script returns two outputs, one for each prompt.
llm_result = llm.generate(["Write a poem about hills", "Tell me a riddle about oranges"])
len(llm_result.generations)
You can access the first output via generations[0][0].text.
print(llm_result.generations[0][0].text)
Likewise, generations[1][0].text returns the second output.
print(llm_result.generations[1][0].text)
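The result object also carries provider-specific metadata in its llm_output field; for OpenAI models, this typically includes token usage, which is handy for keeping an eye on API costs.
# Provider-specific metadata; for OpenAI this typically includes token usage.
print(llm_result.llm_output)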
CTransformers Example
The CTransformers package provides Python bindings for transformer models implemented in C/C++ with the GGML library. The models run locally, so you do not need an API key to access CTransformers LLMs.
To use the CTransformers language models in LangChain, import the CTransformers model and pass the model name to the model attribute. You can then generate text with the CTransformers LLM.
from langchain.llms import CTransformers
llm = CTransformers(model='marella/gpt-2-ggml')
print(llm("I am flying to Lisbon on"))
Chat Models
LangChain chat models use LLMs at the backend but expose a message-based interface designed for conversational interaction with users.
Generating a Single Response
To create a chat model, import one of the LangChain-supported chat models from the langchain.chat_models module. In the following example, we import the ChatOpenAI model, which uses an OpenAI LLM at the backend.
You also need to import the HumanMessage and SystemMessage objects from the langchain.schema module. The former lets you specify human/user input, while the latter defines the system’s (chatbot’s) role.
Next, create an object of the ChatOpenAI model and pass it your API key. I set the temperature to 0 in the following script since I want focused, deterministic responses. Increasing the temperature allows the model to generate more creative responses.
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    HumanMessage,
    SystemMessage
)
chat = ChatOpenAI(openai_api_key = api_key,
temperature=0)
To create a human or a system message, you need to pass the message text to the content attribute of the HumanMessage and SystemMessage objects, respectively. You must pass the messages in a list to the chat model. It is not mandatory to pass system messages.
We pass a single human message to the OpenAI chat model in the following script.
human_message = "Translate from English to French: I love playing Tennis"
chat([HumanMessage(content = human_message)])
Though you can pass anything in a system message, you will primarily use it to define the system’s role. Here’s an example.
messages = [
    SystemMessage(content="You are a football historian."),
    HumanMessage(content="Who won the player of the tournament award in the 11th FIFA World Cup?")
]
chat(messages)
Generating Batch Responses
You can generate batch responses from chat models. To do so, pass a list of message lists, where each inner list contains the system and human messages for one prompt.
batch_messages = [
    [
        SystemMessage(content="You are a football historian."),
        HumanMessage(content="Who won the player of the tournament award in the 11th FIFA World Cup?")
    ],
    [
        SystemMessage(content="You are a Pizza chef."),
        HumanMessage(content="Give me the 7-step recipe to prepare a pizza.")
    ],
]
result = chat.generate(batch_messages)
As with language models, you can access multiple responses using the generations list. For instance, you can access the first response via the following script.
print(result.generations[0][0].text)
The following script prints the second response.
print(result.generations[1][0].text)
Text Embedding Models
The LangChain text embedding models return numeric representations of text inputs that you can use as features for machine learning models or for tasks such as semantic search.
You have to import an embedding model from the langchain.embeddings module and pass the input text to the embed_query() method. The following script uses the OpenAIEmbeddings model to generate text embeddings. The output will be a 1536-dimensional vector.
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key = api_key)
text = "Paris the capital of france and is famous for its wines and perfumes"
embeddings_result = embeddings.embed_query(text)
print(len(embeddings_result))
You can pass multiple text inputs in a list to the embed_documents() method, which returns embeddings for all of them. For example, the following script returns embeddings for three text inputs.
text = ["Paris the capital of france and is famous for its wines and perfumes",
"London is the Capital of England",
"Tokyo is the Capital of Japan"
]
embeddings_result = embeddings.embed_documents(text)
print("Total Embeddings:", len(embeddings_result))
In the next section, I will show you how to replace your hard-coded input templates with something more generic using the LangChain prompts module.
Prompts
Prompts are the inputs you pass to a model. In the previous sections, you hardcoded your prompts to LLMs and chat models. That approach is unsuitable for production environments, where you receive short, free-form inputs from users and must convert them into complete prompts.
The LangChain prompts modules let you construct your input prompts in different formats.
This section will explain how to format prompts for LLMs and chat models.
LLM Prompts
To create a prompt, import the PromptTemplate object from the langchain.prompts module.
Next, you need to define a template for your prompt. A prompt template is a text string that accepts input parameters from the end user and generates a prompt. Inside a template, curly braces mark the placeholders for parameter values.
You must pass the template to the template parameter of the PromptTemplate object. You also need to pass the list of parameter names from your template to the input_variables attribute.
You can create your final prompt by calling the PromptTemplate’s format() method and passing the value for the input variable.
from langchain.prompts import PromptTemplate
template = "Can you tell me a riddle about {object} with its answer?"
prompt = PromptTemplate(
template = template,
input_variables=["object"]
)
prompt = prompt.format(object="ice")
print(prompt)
You can use the prompt as input to an LLM model.
llm = OpenAI(openai_api_key = api_key)
print(llm(prompt))
Chat Prompts
To create chat prompts, import the ChatPromptTemplate, SystemMessagePromptTemplate, and HumanMessagePromptTemplate objects from the langchain.prompts.chat module.
To create a system chat prompt template, pass a text string containing input parameters inside braces to the from_template() method of the SystemMessagePromptTemplate object. Likewise, pass a text string to the HumanMessagePromptTemplate to create a human chat prompt.
Once you create the system and human prompt templates, pass them in a list to the from_messages() method of the ChatPromptTemplate object, which combines the human and system templates.
Next, to generate the final template, you must call the format_prompt() method and pass parameter values for the system and human prompt templates.
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
system_template = "You are a {sports} historian."
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chat_prompt = chat_prompt.format_prompt(sports="Tennis",
text="Who won the Australian Open in 2015").to_messages()
print(chat_prompt)
As the following script shows, you can use the generated template as input to any chat model.
chat(chat_prompt)
In the next section, you will see how to execute multiple LangChain modules together using chains.
Chains
Chains allow you to run multiple LangChain modules in conjunction. For example, using a chain, you can run a prompt and an LLM together, saving you from first formatting a prompt for an LLM model and executing it using the model in separate steps.
LangChain supports three main types of chains:
- Simple LLM Chain
- Sequential Chain
- Custom Chain
Simple LLM Chain
A simple LLM chain receives user input as a prompt and generates an output using an LLM.
To use a simple LLM chain, import the LLMChain object from the langchain.chains module. Next, pass your input prompt and the LLM model to the prompt and llm attributes of the LLMChain object. The LLMChain’s run() method executes the chain. Here’s an example.
from langchain.chains import LLMChain

llm = OpenAI(openai_api_key = api_key, temperature=0.9)
prompt = PromptTemplate(
input_variables=["object", "location"],
template="Suggest me a good name for {object} shop, located on {location}",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
'object': "clothes",
'location': "beach"
}))
You can also use simple LLM chains to pass prompts to chat models. To do so, you must pass your chat prompts and chat model to the prompt and llm attributes of the LLMChain.
system_template = "You are a {sports} historian."
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt_template = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chat = ChatOpenAI(openai_api_key = api_key,
temperature=0)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run({
'sports': "Tennis",
'text': "Who won Austrialian Open in 2015?"
}))
Sequential Chain
A sequential chain allows you to execute multiple chains in a sequence. The SimpleSequentialChain object from the langchain.chains module enables you to create a sequential chain. You need to pass the chains you want to execute in a sequence in a list to the chains attribute of the SimpleSequentialChain object. Like a simple chain, the run() method allows you to execute a sequential chain.
from langchain.chains import SimpleSequentialChain
prompt1 = PromptTemplate(
input_variables=["location"],
template="Suggest me a good name for a clothing shop, located on {location}",
)
chain1 = LLMChain(llm=llm, prompt=prompt1)
prompt2 = PromptTemplate(
input_variables=["location"],
template="Write a catchy tag line for a clothing shop, located on {location}",
)
chain2 = LLMChain(llm=llm, prompt=prompt2)
overall_chain = SimpleSequentialChain(chains=[chain1, chain2], verbose=True)
# Run the chain specifying only the input variable for the first chain.
print(overall_chain.run("beach"))
Custom Chain
LangChain provides many chains out of the box. In addition, you can create custom chains with LangChain. To do so, you must follow these steps:
- Create a class that inherits the Chain class from the langchain.chains.base module.
- Define the input_keys and output_keys properties. The input_keys property returns the names of the inputs the custom chain expects, while output_keys returns the names of its outputs.
- Add the _call() method, which executes when you call the run() method on your custom chain object.
Here is a script from the official LangChain documentation that defines a custom chain. It takes two simple LLM chains as input and returns the concatenation of their outputs.
from typing import Dict, List
from langchain.chains.base import Chain
class ConcatenateChain(Chain):
    chain_1: LLMChain
    chain_2: LLMChain

    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two chains.
        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
        return list(all_input_vars)

    @property
    def output_keys(self) -> List[str]:
        return ['concat_output']

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {'concat_output': output_1 + output_2}
You can run the above custom chain exactly like a sequential chain.
overall_chain = ConcatenateChain(chain_1=chain1, chain_2=chain2)
print(overall_chain.run("beach"))
The following section explains how you can use the agents module to make LLMs access external sources and return results based on a thought process.
Agents
A LangChain agent uses an LLM to perform the following steps:
- Decide which action to perform, based on the user input or its previous outputs.
- Perform the action.
- Observe the output.
- Repeat the first three steps until it completes the task defined in the user input to the best of its abilities.
Agents make use of external tools to perform specific actions. LangChain provides many out-of-the-box agent tools. Tools give LLMs access to various information sources such as Google, Wikipedia, YouTube, a Python REPL, databases, etc., letting you solve complex problems that require access to external resources.
Let’s see an example where we create an agent that accesses Arxiv, a popular repository for research paper preprints. We will ask the agent to return some information about a research paper.
To create an agent that accesses tools, import the load_tools and initialize_agent functions and the AgentType enum from the langchain.agents module.
Pass the tools you want an agent to access in a list to the load_tools() method. Next, initialize an agent by passing the tools, LLM, and agent type to the initialize_agent() method. We set verbose = True to view the agent’s decision-making process.
from langchain.agents import load_tools, initialize_agent, AgentType
llm = ChatOpenAI(openai_api_key = api_key,
temperature=0.0)
tools = load_tools(
["arxiv"],
)
agent_chain = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
The initialize_agent() method returns an object you can execute like a chain via the run() method.
In the following example, we ask the agent to return information about a specific paper from Arxiv.
agent_chain.run(
"What's the paper 2303.15056 about?",
)
In the output, you can see the actions and steps the agent takes to produce the output. It first decides to access Arxiv to search for the paper, then runs the search with the given paper ID, receives the observation, and formulates the final output.
LangChain provides many agents out of the box to perform common tasks. Let’s see an example of an agent that can access a Pandas DataFrame.
We will request the agent to return information from the following Pandas DataFrame.
import pandas as pd
df = pd.read_csv(r'D:\Datasets\titanic_data.csv')
df.head()
To create an agent for Pandas DataFrame, import the create_pandas_dataframe_agent object from the langchain.agents module.
You can then directly request the agent via the run() method. For instance, in the following script, we ask the agent to return the mean value from the fare column.
from langchain.agents import create_pandas_dataframe_agent
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
agent.run("What is the average Fare?")
Similarly, we can execute more complex queries, and the agent will return the response and the reasoning process involved in response generation.
agent.run("Count the male pessengers with age greater than 50.")
In the next section, you will see how to use the LangChain memory module to track previous user interactions.
Memory
By default, chains and agents treat each incoming query independently and do not keep a record of the previous user interactions. In some applications, for example, in chatbots, it is essential to keep track of previous user interactions.
The two most common approaches to adding memory are chains that store memory at run time and chains that use previously saved memories. Let’s look at both.
Conversation Chains
Conversation chains save all user interactions in memory and generate future responses based on those interactions.
The ConversationChain object allows you to create conversation chains. Here is an example: I first say hi to the conversation chain, which generates a response.
from langchain.chains import ConversationChain
conversation = ConversationChain(
llm=llm,
verbose = True
)
conversation.predict(input="Hi there.")
Next, I ask it another question. You can see previous interactions in the output of the following script in the “Current conversation” section.
conversation.predict(input="I have a question about Pizza")
Similarly, when I ask another question, it keeps track of all the previous interactions in the “Current conversation” section and generates a response based on the earlier interactions.
conversation.predict(input="Can I make it without an Oven?")
Using Saved Memory
You can save your interactions with an LLM, chain, or agent in a ConversationBufferMemory object. You can add user messages via the chat_memory.add_user_message() method and AI/system messages using the chat_memory.add_ai_message() method.
You can use the load_memory_variables() method to see all the messages in the memory.
from langchain.memory import ConversationBufferMemory
memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("whats up?")
memory.chat_memory.add_user_message("I want to know something about Pizza")
memory.chat_memory.add_ai_message("Sure, what do you want to know?")
memory.load_memory_variables({})
You can pass the ConversationBufferMemory object to the memory attribute of the ConversationChain object. This allows conversation chains to generate responses based on the interaction in the memory object.
from langchain.chains import ConversationChain
llm = ChatOpenAI(openai_api_key = api_key,
temperature=0.0)
conversation = ConversationChain(
llm=llm,
memory=memory,
verbose = True
)
conversation.predict(input="Can I make it in Oven?")
In the next section, I will explain how you can import text documents from various sources and use LLM models to analyze and extract information from the documents.
Document Loaders and Indexes
The LangChain document loader modules allow you to import documents from various sources such as PDF, Word, JSON, Email, Facebook Chat, etc.
The following script demonstrates how to import a PDF document using the PyPDFLoader object from the langchain.document_loaders module.
# !pip install pypdf
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader(r"D:\Datasets\207416.pdf")
documents = loader.load()
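Other document loaders follow the same pattern of instantiating a loader and calling load(). For example, here is a hedged sketch using the TextLoader object for plain-text files; the file path is just a placeholder.
from langchain.document_loaders import TextLoader
# The path below is a placeholder; point it to any plain-text file.
text_loader = TextLoader(r"D:\Datasets\notes.txt")
text_documents = text_loader.load()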
Once you import a document, you can use indexes to analyze and extract information from it. To do so, you need to perform the following steps:
- Split the document into smaller chunks using one of the LangChain text splitters.
- Use one of the LangChain text embedding models to get the numerical representation of the document text.
- Create one of the LangChain vector store objects. A vector store is a vector database that stores and indexes vector embeddings.
Let’s see an example where we will extract information from a PDF document containing condensed interim financial information of a company.
We have already imported the PDF document in a previous script. The following script splits the data into chunks.
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
The script below uses the Chroma vector store and calls its from_documents() method to create a vector database. Notice that we pass the text chunks and the OpenAI embeddings object created in a previous section to the from_documents() method.
from langchain.vectorstores import Chroma
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
The as_retriever() method returns a retriever object for the PDF document.
Depending on the type of information you want to extract, you can create a chain object that uses the retriever object from the vector database.
For instance, for questions/answers about a document, you can use the RetrievalQA chain from the langchain.chains module.
from langchain.chains import RetrievalQA
qa = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=retriever)
We will ask a few questions about the condensed interim statement of financial position in the PDF document we just imported.
Let’s ask the model to give us the total value of non-current assets. The output shows that the model successfully retrieves this information.
query = "Give me the value of total non current assets"
qa.run(query)
Let’s get the value of long-term loans and advances.
query = "Give me the value of Long-term loans and advances"
qa.run(query)
You can ask the model more complex questions, and it will look through different parts of the document to generate a response.
For example, you can ask the model to calculate various financial ratios from the document and explain how it calculated these values.
query = """Which financial ratios can you calculate from the document?
Give values with an explanation for the ratios that you can calculate."""
print(qa.run(query))
LangChain modules provide a powerful and efficient way to perform various LLM tasks. In the following sections, I will list some alternatives to LangChain and answer some of the most frequently asked questions about it.
LangChain Alternatives
Frameworks offering functionality similar to LangChain include LlamaIndex, Haystack, and Semantic Kernel.
Frequently Asked Questions
How Popular Is LangChain?
As of 3rd June 2023, LangChain has 516,737 weekly downloads, placing it among the most influential open-source projects. In addition, despite being a very young framework, LangChain has received 44,500 GitHub stars, which testifies to its popularity.
Who Is Behind Langchain?
Harrison Chase created LangChain in October 2022 while working at the machine learning startup Robust Intelligence.
What Models Are Supported by Langchain?
LangChain supports almost all the major large language models and categorizes them into modules. Check the official documentation for a complete list of supported models.
Is Langchain Free?
Yes, LangChain is free and open source. However, you might have to directly pay the developers of the various large language models that LangChain uses.
What Is the Difference Between Pinecone and Langchain?
LangChain is a library that offers tools for working with language models, while Pinecone is a vector database that allows developers to build scalable, real-time recommendation and search systems based on vector similarity search. Although both tools are used when working with language models, they have distinct features and serve distinct purposes.
How Do You Use Langchain with Pinecone?
LangChain offers tools for computing vector embeddings, such as OpenAIEmbeddings. These embeddings can then be stored in and retrieved from Pinecone using its Python client. The combination of LangChain and Pinecone helps you develop applications that take advantage of large language models, with LangChain handling prompt construction and user input processing and Pinecone managing vector storage and similarity search at scale.
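As a rough sketch of what this looks like in code, assuming you have a Pinecone API key and an existing index (the environment name and index name below are placeholders), you can store the text chunks and embeddings from the earlier sections in Pinecone and query them by similarity.
# !pip install pinecone-client
import pinecone
from langchain.vectorstores import Pinecone

# Assumes a Pinecone API key in the environment; region and index name are placeholders.
pinecone.init(api_key=os.getenv('PINECONE_API_KEY'), environment="us-west1-gcp")
docsearch = Pinecone.from_documents(texts, embeddings, index_name="langchain-demo")
print(docsearch.similarity_search("total non-current assets"))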
The Bottom Line
This LangChain Python Tutorial simplifies the integration of powerful language models into Python applications. Following this step-by-step guide and exploring the various LangChain modules will give you valuable insights into generating texts, executing conversations, accessing external resources for more informed answers, and analyzing and extracting information from documents. With this knowledge, you can effortlessly enhance your Python projects utilizing large language models. Don’t hesitate any longer – unlock the potential of LangChain and take your Python LLM projects to the next level.