Table of contents
- Types of Multi-Agent Architectures:
- Key Features:
- Understanding the Building Blocks:
- Flow of the Multi-Agent Architecture:
- Setting Up the Environment:
- The Supervisor: The Brain of the Operation
- Specialized Agents
- Quality Control: The Validator
- Building the Workflow Graph
- Creating the User Interface with Streamlit
- How to Use the System
- Benefits of This Architecture
- Conclusion
- GitHub:
A multi-agent architecture consists of multiple agents working together to solve a problem or accomplish a task.
Types of Multi-Agent Architectures:
Centralized: One central controller coordinates the activities of the agents.
Decentralized: No central controller; agents operate independently and communicate to solve tasks collaboratively.
Hybrid: A combination of centralized and decentralized approaches, where some coordination is handled centrally and the rest is left to independent agents.
Key Features:
Autonomy: Agents can operate independently based on their programming or environment.
Cooperation: Agents can work together or share information to reach a common goal.
Adaptability: Agents can learn from their environment or past actions and change their behavior as needed.
Understanding the Building Blocks:
Streamlit: An open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. It lets us build an interactive interface for our AI workflow.
LangGraph: A framework built on LangChain, designed specifically for creating multi-actor applications. It lets us build stateful graphs in which different agents or functions interact and share information.
Groq: A company that provides fast and efficient access to large language models through their API. In this example, we use their Llama 3 model.
LangChain Tools: Pre-built components that extend the capabilities of LLMs. Here, we use:
Tavily Search: A tool for performing web searches and retrieving information.
Riza's Code Interpreter (ExecPython): A tool that lets the LLM execute Python code, enabling calculations, data analysis, and more.
Pydantic: A Python library for data validation and settings management using Python type hints. It's used here to define structured output formats for the supervisor and validator agents.
ReAct Pattern: A common approach for building agents that combine reasoning ("Think") and acting ("Act") steps to solve tasks. LangGraph simplifies the implementation of this pattern.
Flow of the Multi-Agent Architecture:
A Supervisor orchestrates the workflow
An Enhancer refines user queries
A Researcher gathers information
A Coder handles technical tasks
A Validator ensures quality output
Setting Up the Environment:
First, we need to set up our environment with the necessary dependencies. The system requires several API keys:
import os
from dotenv import load_dotenv

# Load API keys from the .env file
load_dotenv()

groq_api_key = os.environ.get("Groq")
riza_api_key = os.environ.get("RIZA_API_KEY")
tavily_api_key = os.environ.get("TAVILY_API_KEY")
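With the keys loaded, the model and tools can be initialized. The snippet below is a minimal sketch assuming the langchain-groq and langchain-community packages; the exact model name ("llama3-70b-8192") is an assumption, and the Tavily and Riza tools read their keys from the environment variables set above.
from langchain_groq import ChatGroq
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.tools.riza.command import ExecPython

# Groq-hosted Llama 3 model shared by every agent in the workflow
llm = ChatGroq(model="llama3-70b-8192", groq_api_key=groq_api_key)

# Web search tool for the Researcher agent (uses TAVILY_API_KEY from the environment)
tool_tavily = TavilySearchResults(max_results=2)

# Riza code interpreter tool for the Coder agent (uses RIZA_API_KEY from the environment)
tool_code_interpreter = ExecPython()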
The Supervisor: The Brain of the Operation
The Supervisor is the central orchestrator of our workflow. It makes decisions about which agent should handle the current state of the task. Here's how it works:
from typing import Literal
from pydantic import BaseModel, Field

class Supervisor(BaseModel):
    next: Literal["enhancer", "researcher", "coder"] = Field(
        description="Specifies the next worker in the pipeline"
    )
    reason: str = Field(
        description="The reason for the decision"
    )
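The schema above only defines the routing decision; the supervisor node asks the LLM to fill it in. Below is a minimal sketch of what that node could look like, assuming Command-based routing from LangGraph and a hypothetical system_prompt describing the team; the actual prompt and wiring live in the linked repository.
from langgraph.graph import MessagesState
from langgraph.types import Command

def supervisor_node(state: MessagesState) -> Command[Literal["enhancer", "researcher", "coder"]]:
    # system_prompt (assumed) describes each worker so the LLM can route correctly
    messages = [("system", system_prompt)] + state["messages"]

    # Force the LLM to answer in the Supervisor schema (next worker + reason)
    decision = llm.with_structured_output(Supervisor).invoke(messages)

    # Hand control to the chosen worker and record the reasoning in the state
    return Command(
        update={"messages": [("assistant", decision.reason)]},
        goto=decision.next,
    )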
Specialized Agents
The Enhancer Agent
The Enhancer's role is to improve the quality of user queries. It takes vague or incomplete queries and transforms them into clear, actionable requests:
def enhancer_node(state: MessagesState) -> Command[Literal["supervisor"]]:
    enhancer_prompt = (
        "You are an advanced query enhancer. Your task is to:\n"
        "1. Clarify and refine user inputs.\n"
        "2. Identify any ambiguities in the query.\n"
        "3. Generate a more precise and actionable version of the original request.\n"
    )
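The prompt alone does not finish the node; it still has to invoke the model and hand the refined query back to the Supervisor. The remaining body might look roughly like this (the exact message handling in the repository may differ):
    # Prepend the role prompt and let the LLM rewrite the latest user query
    messages = [("system", enhancer_prompt)] + state["messages"]
    enhanced = llm.invoke(messages)

    # Return the refined query to the Supervisor, which decides the next worker
    return Command(
        update={"messages": [("assistant", enhanced.content)]},
        goto="supervisor",
    )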
The Researcher Agent
The Researcher uses the Tavily search tool to gather information:
from langgraph.prebuilt import create_react_agent

def research_node(state: MessagesState) -> Command[Literal["validator"]]:
    research_agent = create_react_agent(
        llm,
        tools=[tool_tavily],
        state_modifier="You are a researcher. Focus on gathering information and generating content."
    )
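Here too the snippet stops before the agent is actually run. A sketch of the rest of the node, with the Coder node following the same pattern using the code interpreter tool:
    # Run the ReAct agent on the conversation so far
    result = research_agent.invoke(state)

    # Forward the agent's final answer to the Validator for quality control
    return Command(
        update={"messages": [("assistant", result["messages"][-1].content)]},
        goto="validator",
    )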
The Coder Agent
The Coder handles technical tasks using the Riza code execution tool:
def code_node(state: MessagesState) -> Command[Literal["validator"]]:
    code_agent = create_react_agent(
        llm,
        tools=[tool_code_interpreter],
        state_modifier=(
            "You are a coder and analyst. Focus on mathematical calculations, "
            "analyzing, solving math questions, and executing code."
        )
    )
Quality Control: The Validator
The Validator ensures the quality of responses before they're returned to the user:
class Validator(BaseModel):
    next: Literal["supervisor", "FINISH"] = Field(
        description="Specifies the next worker in the pipeline"
    )
    reason: str = Field(
        description="The reason for the decision"
    )
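Like the Supervisor, the Validator schema is filled in by the LLM inside a node. A minimal sketch of that node, assuming a hypothetical validator_prompt that asks the model to compare the original question with the final answer, and LangGraph's END constant to terminate the run:
from langgraph.graph import END

def validator_node(state: MessagesState) -> Command[Literal["supervisor", "__end__"]]:
    # validator_prompt (assumed) asks the LLM to judge whether the latest
    # answer satisfies the original user query
    messages = [("system", validator_prompt)] + state["messages"]
    decision = llm.with_structured_output(Validator).invoke(messages)

    # Finish the run if the answer passes, otherwise send it back to the Supervisor
    goto = END if decision.next == "FINISH" else "supervisor"
    return Command(
        update={"messages": [("assistant", decision.reason)]},
        goto=goto,
    )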
Building the Workflow Graph
The workflow is structured using LangGraph's StateGraph:
builder = StateGraph(MessagesState)
builder.add_node("supervisor", supervisor_node)
builder.add_node("enhancer", enhancer_node)
builder.add_node("researcher", research_node)
builder.add_node("coder", code_node)
builder.add_node("validator", validator_node)
Creating the User Interface with Streamlit
The system includes a simple but effective Streamlit interface:
import streamlit as st

st.title("LangGraph Workflow")
user_query = st.text_input("Enter your query:")

if st.button("Run Workflow"):
    if user_query:
        st.session_state.workflow_output = []
        inputs = {"messages": [("user", user_query)]}
        for output in graph.stream(inputs):
            # Display output logic (see the sketch below)
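The display logic is elided in the snippet above. One way to fill in that loop, as a sketch that simply prints each node's latest messages as the stream progresses (the repository's formatting may be richer):
for output in graph.stream(inputs):
    for node_name, node_update in output.items():
        # Each streamed item maps the node that just ran to its state update
        st.markdown(f"**{node_name}**")
        for message in node_update.get("messages", []):
            # Messages may be plain (role, content) tuples or LangChain message objects
            text = message[1] if isinstance(message, tuple) else message.content
            st.write(text)
            st.session_state.workflow_output.append((node_name, text))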
How to Use the System
To use this workflow system:
- Create a .env file with your API keys:
Groq=your_groq_api_key
RIZA_API_KEY=your_riza_api_key
TAVILY_API_KEY=your_tavily_api_key
- Install the required packages:
streamlit
langchain
langchain-groq
langchain-community
langgraph
python-dotenv
tavily-python
langchain-experimental
- Run the Streamlit app:
streamlit run app.py
- Enter your query in the text input field and click "Run Workflow"
Benefits of This Architecture
This workflow system offers several benefits:
Specialized Handling: Each agent focuses on specific tasks, which improves the quality of the outputs.
Quality Control: The Validator makes sure that responses meet quality standards.
Flexibility: You can easily modify the workflow by changing the graph structure.
Transparency: The system gives clear feedback about its decision-making process.
Conclusion
This LangGraph workflow implementation shows how to build an advanced system that can manage complex queries using specialized agents. By combining LangGraph's structured workflows with Streamlit's user interface features, it creates a powerful and easy-to-use AI application.