
Schedule Planner Agent — LangGraph

The Schedule Planner is a LangGraph agent that checks venue availability for specific dates and time slots. It's one of seven specialist agents in the Event Planning Team — a multi-agent system where each agent is built with a different framework and coordinates through Dapr pub/sub.

This agent demonstrates two building blocks: DaprWorkflowGraphRunner, a LangGraph-specific runner that wraps a compiled StateGraph in a durable workflow, and DaprChatModel, which routes LLM calls through a Dapr conversation component.


Event Planning Team

| Agent | Framework | Port | Pub/Sub Topics |
| --- | --- | --- | --- |
| Venue Scout | CrewAI | 8001 | `venue.requests` / `venue.results` |
| Catering Coordinator | OpenAI Agents | 8002 | `catering.requests` / `catering.results` |
| Entertainment Planner | Google ADK | 8003 | `entertainment.requests` / `entertainment.results` |
| Budget Analyst | Strands | 8004 | `budget.requests` / `budget.results` |
| Schedule Planner | LangGraph | 8005 | `schedule.requests` / `schedule.results` |
| Invitations Manager | Dapr Agents | 8006 | `events.invitations.requests` |
| Event Coordinator | Dapr Agents | 8007 | Orchestrator |
| Decoration Planner | Pydantic AI | 8008 | `decorations.requests` / `decorations.results` |

Prerequisites


Agent Code

The Schedule Planner builds a LangGraph StateGraph with an agent node and a tools node, using DaprChatModel for LLM calls and DaprWorkflowGraphRunner for durable execution.

agents/langgraph/main.py

```python
import logging
import os

logging.basicConfig(level=logging.DEBUG)

from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END, MessagesState
from diagrid.agent.langgraph import DaprWorkflowGraphRunner
from diagrid.agent.core.chat import DaprChatModel


@tool
def check_availability(venue: str, date: str) -> str:
    """Check venue availability for a specific date."""
    return f"{venue} is available on {date}. Time slots: 9AM-1PM, 2PM-6PM, 6PM-11PM."


tools = [check_availability]
tools_by_name = {t.name: t for t in tools}
model = DaprChatModel(component_name="llm-provider").bind_tools(tools)


def call_model(state: MessagesState) -> dict:
    response = model.invoke(state["messages"])
    return {"messages": [response]}


def call_tools(state: MessagesState) -> dict:
    last_message = state["messages"][-1]
    results = []
    for tc in last_message.tool_calls:
        result = tools_by_name[tc["name"]].invoke(tc["args"])
        results.append(ToolMessage(content=str(result), tool_call_id=tc["id"]))
    return {"messages": results}


def should_use_tools(state: MessagesState) -> str:
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "__end__"


graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", call_tools)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_use_tools)
graph.add_edge("tools", "agent")

runner = DaprWorkflowGraphRunner(
    graph=graph.compile(),
    name="schedule-planner",
    role="Schedule Planner",
    goal="Check venue date and time availability using the check_availability tool. Provide available time slots for a given venue and date.",
)

# State + PubSub: subscribe for incoming tasks, publish results
runner.serve(
    port=int(os.environ.get("APP_PORT", "8005")),
    input_mapper=lambda req: {"messages": [HumanMessage(content=req["task"])]},
    pubsub_name="agent-pubsub",
    subscribe_topic="schedule.requests",
    publish_topic="schedule.results",
)
```

What's happening

  1. @tool — LangChain tool decorator that the graph can call during execution
  2. DaprChatModel — Routes LLM calls through a Dapr conversation component instead of calling OpenAI directly. Swap LLM providers by changing YAML config.
  3. StateGraph — LangGraph state machine with an agent node (calls the LLM) and a tools node (executes tool calls), connected by conditional edges
  4. DaprWorkflowGraphRunner — LangGraph-specific runner that wraps the compiled graph in a durable workflow with crash recovery
  5. input_mapper — Transforms incoming pub/sub messages into the MessagesState format LangGraph expects
  6. runner.serve() — Starts an HTTP server that subscribes to schedule.requests and publishes results to schedule.results
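The conditional-edge routing in the graph can be sketched in plain Python. `FakeMessage` is a hypothetical stand-in for a LangChain `AIMessage`, used here only to show that the router inspects the last message for pending tool calls:

```python
from dataclasses import dataclass, field


# FakeMessage is a hypothetical stand-in for an AIMessage: the router only
# cares whether the last message carries tool calls.
@dataclass
class FakeMessage:
    tool_calls: list = field(default_factory=list)


def should_use_tools(state: dict) -> str:
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"  # route to the tools node to execute the calls
    return "__end__"    # no tool calls left: finish the graph


print(should_use_tools({"messages": [FakeMessage([{"name": "check_availability"}])]}))  # tools
print(should_use_tools({"messages": [FakeMessage()]}))  # __end__
```

Because `should_use_tools` returns node names, the graph loops agent → tools → agent until the model stops requesting tools.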

Dapr Components

The agent uses shared Dapr components for pub/sub messaging, state persistence, and LLM access.

Pub/Sub (agent-pubsub)

resources/agent-pubsub.yaml

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

Routes messages between agents. Locally uses Redis; in production, swap for Kafka, RabbitMQ, or any Dapr pub/sub broker.
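As a sketch, swapping in Kafka only changes this file; the broker address and options below are illustrative (see the Dapr `pubsub.kafka` component reference for the full metadata list):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-pubsub          # same component name, so agent code is unchanged
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "localhost:9092"   # illustrative broker address
  - name: authType
    value: "none"
```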

State Store (agent-memory)

resources/agent-memory.yaml

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: agent-memory
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore
    value: "false"
```

Persists agent memory and conversation state across invocations.

LLM Provider

resources/llm-provider.yaml

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm-provider
spec:
  type: conversation.openai
  version: v1
  metadata:
  - name: key
    value: "{{OPENAI_API_KEY}}"
  - name: model
    value: gpt-4.1-2025-04-14
```

Provides LLM access through Dapr's conversation API. Used by DaprChatModel to route LLM calls.
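Because only this component names the provider, switching models is a YAML change. A hypothetical swap to Anthropic (assuming a `conversation.anthropic` component is available in your Dapr version; the model id below is illustrative) might look like:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm-provider          # keep the name so DaprChatModel needs no change
spec:
  type: conversation.anthropic
  version: v1
  metadata:
  - name: key
    value: "{{ANTHROPIC_API_KEY}}"
  - name: model
    value: claude-sonnet-4-20250514   # illustrative model id
```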


Run the Agent

1. Clone the quickstart

```shell
git clone https://github.com/diagridio/catalyst-quickstarts.git
cd catalyst-quickstarts/agents/langgraph
```

2. Set your API key

```shell
export OPENAI_API_KEY="your-key-here"
```

3. Install dependencies

```shell
pip install -r requirements.txt
```

4. Start the agent

```shell
diagrid dev run -f dev-python-langgraph.yaml
```

5. Test the agent

Send a schedule check request:

```shell
curl -X POST http://localhost:8888/agent/run \
  -H "Content-Type: application/json" \
  -d '{"task": "Check availability for Grand Ballroom on March 15, 2026"}'
```

Run with the Full Team

To run all seven specialist agents together with the orchestrator, see the Event Planning Team overview.


Key Concepts

| Concept | Description |
| --- | --- |
| Durable Graph Execution | DaprWorkflowGraphRunner checkpoints every graph node, so execution survives crashes mid-run |
| Dapr Chat Model | DaprChatModel routes LLM calls through Dapr; swap providers by changing YAML config |
| Pub/Sub Decoupling | Agents communicate through topics, not direct calls. Add or remove agents without code changes. |
| Portable Infrastructure | Swap message brokers, state stores, and LLM providers by changing YAML; agent code stays the same |

Next Steps