Schedule Planner Agent — LangGraph
The Schedule Planner is a LangGraph agent that checks venue availability for specific dates and time slots. It's one of seven specialist agents in the Event Planning Team — a multi-agent system where each agent is built with a different framework and coordinates through Dapr pub/sub.
This agent demonstrates the `DaprWorkflowGraphRunner`, a LangGraph-specific runner that wraps a compiled `StateGraph` in a durable workflow, and the `DaprChatModel`, which routes LLM calls through a Dapr conversation component.
Event Planning Team
| Agent | Framework | Port | Pub/Sub Topics |
|---|---|---|---|
| Venue Scout | CrewAI | 8001 | venue.requests → venue.results |
| Catering Coordinator | OpenAI Agents | 8002 | catering.requests → catering.results |
| Entertainment Planner | Google ADK | 8003 | entertainment.requests → entertainment.results |
| Budget Analyst | Strands | 8004 | budget.requests → budget.results |
| Schedule Planner | LangGraph | 8005 | schedule.requests → schedule.results |
| Invitations Manager | Dapr Agents | 8006 | events.invitations.requests |
| Event Coordinator | Dapr Agents | 8007 | Orchestrator |
| Decoration Planner | Pydantic AI | 8008 | decorations.requests → decorations.results |
Prerequisites
- Python 3.11+
- Diagrid CLI installed and initialized (`diagrid dev init`)
- An OpenAI API key
Agent Code
The Schedule Planner builds a LangGraph `StateGraph` with an agent node and a tools node, using `DaprChatModel` for LLM calls and `DaprWorkflowGraphRunner` for durable execution.
```python
import logging
import os

logging.basicConfig(level=logging.DEBUG)

from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END, MessagesState

from diagrid.agent.langgraph import DaprWorkflowGraphRunner
from diagrid.agent.core.chat import DaprChatModel


@tool
def check_availability(venue: str, date: str) -> str:
    """Check venue availability for a specific date."""
    return f"{venue} is available on {date}. Time slots: 9AM-1PM, 2PM-6PM, 6PM-11PM."


tools = [check_availability]
tools_by_name = {t.name: t for t in tools}

model = DaprChatModel(component_name="llm-provider").bind_tools(tools)


def call_model(state: MessagesState) -> dict:
    response = model.invoke(state["messages"])
    return {"messages": [response]}


def call_tools(state: MessagesState) -> dict:
    last_message = state["messages"][-1]
    results = []
    for tc in last_message.tool_calls:
        result = tools_by_name[tc["name"]].invoke(tc["args"])
        results.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
    return {"messages": results}


def should_use_tools(state: MessagesState) -> str:
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "__end__"


graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", call_tools)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_use_tools)
graph.add_edge("tools", "agent")

runner = DaprWorkflowGraphRunner(
    graph=graph.compile(),
    name="schedule-planner",
    role="Schedule Planner",
    goal="Check venue date and time availability using the check_availability tool. Provide available time slots for a given venue and date.",
)

# State + PubSub: subscribe for incoming tasks, publish results
runner.serve(
    port=int(os.environ.get("APP_PORT", "8005")),
    input_mapper=lambda req: {"messages": [HumanMessage(content=req["task"])]},
    pubsub_name="agent-pubsub",
    subscribe_topic="schedule.requests",
    publish_topic="schedule.results",
)
```
What's happening
- `@tool` — LangChain tool decorator that the graph can call during execution
- `DaprChatModel` — Routes LLM calls through a Dapr conversation component instead of calling OpenAI directly. Swap LLM providers by changing YAML config.
- `StateGraph` — LangGraph state machine with an `agent` node (calls the LLM) and a `tools` node (executes tool calls), connected by conditional edges
- `DaprWorkflowGraphRunner` — LangGraph-specific runner that wraps the compiled graph in a durable workflow with crash recovery
- `input_mapper` — Transforms incoming pub/sub messages into the `MessagesState` format LangGraph expects
- `runner.serve()` — Starts an HTTP server that subscribes to `schedule.requests` and publishes results to `schedule.results`
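To make the `input_mapper` step concrete, here is a minimal sketch of what it does to an incoming pub/sub payload; a plain dict stands in for LangChain's `HumanMessage` so the example runs without dependencies:

```python
# Hypothetical stand-in for langchain_core's HumanMessage
def make_human_message(content: str) -> dict:
    return {"type": "human", "content": content}

# Mirrors the input_mapper lambda passed to runner.serve()
def input_mapper(req: dict) -> dict:
    return {"messages": [make_human_message(req["task"])]}

# A message arriving on the schedule.requests topic
payload = {"task": "Check availability for Grand Ballroom on March 15, 2026"}
state = input_mapper(payload)
print(state["messages"][0]["content"])
```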
Dapr Components
The agent uses shared Dapr components for pub/sub messaging, state persistence, and LLM access.
Pub/Sub (`agent-pubsub`)

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```
Routes messages between agents. Locally uses Redis; in production, swap for Kafka, RabbitMQ, or any Dapr pub/sub broker.
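As an example of that swap, a Kafka-backed version of the same component might look like the sketch below. The broker address and consumer group are placeholders, and the metadata names should be checked against the Dapr `pubsub.kafka` component reference; the agent code itself does not change because the component name stays `agent-pubsub`.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "kafka-broker:9092"
  - name: consumerGroup
    value: "event-planning-team"
  - name: authType
    value: "none"
```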
State Store (`agent-memory`)

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-memory
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "false"
```
Persists agent memory and conversation state across invocations.
LLM Provider

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm-provider
spec:
  type: conversation.openai
  version: v1
  metadata:
  - name: key
    value: "{{OPENAI_API_KEY}}"
  - name: model
    value: gpt-4.1-2025-04-14
```

Provides LLM access through Dapr's conversation API. Used by `DaprChatModel` to route LLM calls.
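Because the agent only references the component by name, switching providers is a YAML-only change. For instance, if the Dapr build in use ships an Anthropic conversation component, the swap might look like this (the component type, metadata names, and model string are assumptions to verify against the Dapr conversation component reference):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm-provider
spec:
  type: conversation.anthropic
  version: v1
  metadata:
  - name: key
    value: "{{ANTHROPIC_API_KEY}}"
  - name: model
    value: claude-sonnet-4-20250514
```

The Python code keeps calling `DaprChatModel(component_name="llm-provider")` unchanged.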
Run the Agent

Clone the quickstart:

```shell
git clone https://github.com/diagridio/catalyst-quickstarts.git
cd catalyst-quickstarts/agents/langgraph
```

Set your API key:

```shell
export OPENAI_API_KEY="your-key-here"
```

Install dependencies:

```shell
pip install -r requirements.txt
```

Start the agent:

```shell
diagrid dev run -f dev-python-langgraph.yaml
```

Test the agent

Send a schedule check request:

```shell
curl -X POST http://localhost:8888/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "Check availability for Grand Ballroom on March 15, 2026"}'
```
Run with the Full Team
To run all seven specialist agents together with the orchestrator, see the Event Planning Team overview.
Key Concepts
| Concept | Description |
|---|---|
| Durable Graph Execution | DaprWorkflowGraphRunner checkpoints every graph node — survives crashes mid-execution |
| Dapr Chat Model | DaprChatModel routes LLM calls through Dapr — swap providers by changing YAML config |
| Pub/Sub Decoupling | Agents communicate through topics, not direct calls. Add or remove agents without code changes. |
| Portable Infrastructure | Swap message brokers, state stores, and LLM providers by changing YAML — agent code stays the same |
Next Steps
- CLI Quickstart — Deploy a durable agent to Catalyst in minutes
- Session Management — Add persistent memory to your LangGraph agents
- Event Planning Team — See all agents working together