Chatbot with LangChain and Streamlit: Creating a conversational interface
Part 6 (final) of the series "Everything You Need to Know About LangChain (Before Starting an AI Project)"
We have reached the final post of our series on LangChain!
Today we will bring our applications to life by creating an interactive chatbot with LangChain and Streamlit. If you have followed the previous posts, you already know how powerful LangChain is for working with large language models (LLMs).
Now it’s time to create a user-friendly interface using Streamlit.
But first, what is Streamlit?
Streamlit is a Python framework that allows you to quickly and easily transform simple scripts into interactive web applications.
It was created to make life easier for data scientists, machine learning engineers, and developers who want to build visual interfaces for their projects without having to deal with front-end, HTML, or JavaScript.
With just a few lines of code, we can create sliders, text fields, buttons, and dynamic charts. And the best part: everything runs locally, rendered live in the browser and updating in real time as the data changes.
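To give you an idea of how little code that takes, here is a minimal sketch (a hypothetical demo.py, separate from the chatbot we will build) with a slider driving a dynamic chart:
import numpy as np
import streamlit as st

st.title("Hello, Streamlit!")

# Moving the slider re-runs the script...
points = st.slider("Number of points", min_value=10, max_value=500, value=100)

# ...and the chart updates in real time with the new value
st.line_chart(np.random.randn(points))
Save it and run streamlit run demo.py to see it live in the browser.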
In this tutorial, we will use Streamlit to create the interface for our chatbot, a lightweight and intuitive page where the user can interact with our LLM.
Ready to put your chatbot online? Let’s get started!
Preparing the environment
First, let’s install the necessary dependencies:
pip install langchain langchain-google-genai streamlit
Now let’s create a file named app.py with the following code:
import streamlit as st
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, AIMessage

# Page configuration
st.set_page_config(
    page_title="Chatbot with LangChain and Streamlit",
    page_icon="🤖",
    layout="centered"
)

# Application title
st.title("Chatbot")
st.markdown("*Chatbot with LangChain and Streamlit*")

# Sidebar for settings
with st.sidebar:
    st.header("⚙️ Settings")

    # Field for the API key
    api_key = st.text_input(
        "Google API Key",
        type="password",
        help="Paste your Google AI Studio API key here"
    )

    if st.button("Clear Conversation"):
        st.session_state.messages = []
        st.rerun()

    st.markdown("---")
    st.markdown("### 📝 How to use:")
    st.markdown("""
1. Enter your Google API Key
2. Type your message
3. Press Enter or click Send
""")
    st.markdown("---")
    st.markdown("### 🔗 Useful links:")
    st.markdown("[Get API Key](https://aistudio.google.com/app/apikey)")

# Initialize message history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Function to initialize the model (cached so it is only created once per key)
@st.cache_resource
def init_model(api_key):
    if not api_key:
        return None
    try:
        llm = ChatGoogleGenerativeAI(
            model="gemini-1.5-flash",
            google_api_key=api_key,
            temperature=0.3,
            max_output_tokens=250
        )
        return llm
    except Exception as e:
        st.error(f"Error initializing model: {str(e)}")
        return None

# Check whether the API key has been provided
if not api_key:
    st.warning("⚠️ Please enter your Google API Key in the sidebar to get started.")
    st.stop()

# Initialize the model
llm = init_model(api_key)
if not llm:
    st.error("❌ Could not initialize the model. Please check your API Key.")
    st.stop()

# Container for the chat
chat_container = st.container()

# Display messages from history
with chat_container:
    for message in st.session_state.messages:
        if message["role"] == "user":
            with st.chat_message("user"):
                st.write(message["content"])
        else:
            with st.chat_message("assistant"):
                st.write(message["content"])

# User input
if prompt := st.chat_input("Type your message here..."):
    # Add user message to history
    st.session_state.messages.append({"role": "user", "content": prompt})

    # Display user message
    with st.chat_message("user"):
        st.write(prompt)

    # Generate assistant response
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            try:
                # Convert the history into LangChain message objects
                history_for_model = []
                for msg in st.session_state.messages:
                    if msg["role"] == "user":
                        history_for_model.append(HumanMessage(content=msg["content"]))
                    else:
                        history_for_model.append(AIMessage(content=msg["content"]))

                # Generate response
                response = llm.invoke(history_for_model)
                assistant_response = response.content

                # Display response
                st.write(assistant_response)

                # Add response to history
                st.session_state.messages.append({
                    "role": "assistant",
                    "content": assistant_response
                })
            except Exception as e:
                st.error(f"Error generating response: {str(e)}")
Now let's run the app from the terminal:
streamlit run app.py
This command runs the app.py file and opens an interactive web interface in the browser, served by Streamlit on its default port, 8501.
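If port 8501 is already in use on your machine, Streamlit lets you choose another one with the --server.port flag:
streamlit run app.py --server.port 8502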
Go to http://localhost:8501, enter your Google API key in the sidebar on the left, and you can start chatting with the chatbot.
This is a simple and functional example of a complete chatbot: an LLM running in the background and a web interface created with Streamlit!
Now, let's move on to the next step: creating a chatbot with RAG (Retrieval-Augmented Generation), where the user can chat with their own documents. Press CTRL+C in the terminal to stop the running app first.
To do this, we need to install a few more libraries:
pip install langchain_community PyPDF2 python-docx sentence-transformers
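These give us PDF reading (PyPDF2), Word support (python-docx), local embeddings (sentence-transformers), and LangChain's community integrations. To see how the pieces typically fit together, here is a rough, illustrative sketch of a RAG ingestion and retrieval flow (this is not the app_rag.py we are about to write; the file name my_doc.pdf, the chunking parameters, and the embedding model are all assumptions):
# Illustrative RAG sketch; "my_doc.pdf", chunk sizes, and model are assumptions.
from PyPDF2 import PdfReader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore

# 1. Extract raw text from a PDF with PyPDF2
reader = PdfReader("my_doc.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# 2. Split the text into overlapping chunks (naive fixed-size splitting)
chunk_size, overlap = 1000, 200
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap)]

# 3. Embed the chunks with sentence-transformers and index them in memory
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = InMemoryVectorStore(embedding=embeddings)
store.add_documents([Document(page_content=c) for c in chunks])

# 4. Retrieve the chunks most relevant to a user question
for doc in store.similarity_search("What is this document about?", k=3):
    print(doc.page_content[:200])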
Now let's create a file called app_rag.py and add this content: