AI Agents
Agent frameworks like CrewAI, LangGraph, and others give you tools, LLM orchestration, and prompt management — but none of them handle what happens when things go wrong in production. Some offer basic checkpointing, but you still need to detect failures at scale, build your own recovery mechanisms, and coordinate resumption across instances to avoid duplicate runs.
Catalyst adds the missing infrastructure: automatic failure detection, automatic recovery at scale, and multi-instance agent coordination. Your agent code stays the same — Catalyst handles the rest.
Pick your framework below to build a durable agent connected to Catalyst Cloud.
Catalyst Cloud is free and the fastest way to get started — no infrastructure to set up. For production or on-premises requirements, Diagrid also offers self-hosted enterprise deployments.
Prerequisites
- Diagrid Catalyst account
- Diagrid CLI
- Python 3.11+
- uv package manager
- An OpenAI API key (or Google AI API key for Google ADK)
1. Log in to Catalyst
diagrid login
Confirm your identity:
diagrid whoami
2. Clone and Navigate
git clone https://github.com/diagridio/catalyst-quickstarts.git
Navigate to the quickstart directory for your framework:
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
cd catalyst-quickstarts/agents/dapr-agents/durable-agent
cd catalyst-quickstarts/agents/crewai
cd catalyst-quickstarts/agents/langgraph
cd catalyst-quickstarts/agents/strands
cd catalyst-quickstarts/agents/openai-agents
cd catalyst-quickstarts/agents/adk
cd catalyst-quickstarts/agents/pydantic-ai
3. Explore the Code
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
Invitations Manager — a durable agent that sends event invitations to guests via email and physical mail. Dapr Agents is the native AI agent framework built on Dapr — durability, state, and pub/sub are built into the agent itself.
Open main.py. The agent uses Pydantic models for structured tool input and output:
from typing import List

from pydantic import BaseModel, Field

from dapr_agents import tool, DurableAgent
from dapr_agents.llm import DaprChatClient

# InvitationResult is a Pydantic model defined earlier in main.py

class InvitationSchema(BaseModel):
    guest_count: int = Field(description="Number of guests to invite")
    event_type: str = Field(description="Type of event")

@tool(args_model=InvitationSchema)
def send_invitations(guest_count: int, event_type: str) -> List[InvitationResult]:
    """Send event invitations to guests."""
    return [
        InvitationResult(sent=int(guest_count * 0.7), method="email"),
        InvitationResult(sent=int(guest_count * 0.3), method="physical mail"),
    ]
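The 70/30 email/mail split is what produces the expected output in step 7. A dependency-free sketch of that logic, using a plain dataclass as a stand-in for the quickstart's actual Pydantic InvitationResult model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InvitationResult:
    # Stand-in for the Pydantic model defined in the quickstart's main.py
    sent: int
    method: str

def send_invitations(guest_count: int, event_type: str) -> List[InvitationResult]:
    """Send invitations: roughly 70% by email, 30% by physical mail."""
    return [
        InvitationResult(sent=int(guest_count * 0.7), method="email"),
        InvitationResult(sent=int(guest_count * 0.3), method="physical mail"),
    ]

results = send_invitations(100, "corporate networking event")
# 100 guests -> 70 via email, 30 via physical mail
```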
The DurableAgent class brings everything together — memory, state, registry, and pub/sub are all configured at the agent level:
agent = DurableAgent(
    name="invitations-manager",
    role="Invitations Manager",
    goal="Send event invitations to guests using the send_invitations tool.",
    tools=[send_invitations],
    llm=DaprChatClient(component_name="llm-provider"),
    memory=AgentMemoryConfig(
        store=ConversationDaprStateMemory(store_name="agent-workflow")
    ),
    state=AgentStateConfig(
        store=StateStoreService(store_name="agent-memory"),
    ),
    registry=AgentRegistryConfig(
        store=StateStoreService(store_name="agent-registry"),
    ),
    pubsub=AgentPubSubConfig(
        pubsub_name="agent-pubsub",
        agent_topic="events.invitations.requests",
        broadcast_topic="agents.broadcast",
    ),
)

runner = AgentRunner()
runner.serve(agent, port=8006)
The LLM is configured via DaprChatClient(component_name="llm-provider") — a Dapr component in resources/llm-provider.yaml that references your OpenAI API key:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: llm-provider
spec:
  type: conversation.openai
  metadata:
  - name: key
    value: "{{OPENAI_API_KEY}}"
  - name: model
    value: gpt-4.1-2025-04-14
Unlike other frameworks, Dapr Agents has durability, state, pub/sub, and failure recovery built in natively. Automatic failure detection, crash recovery, and multi-instance coordination are all handled out of the box — no wrapper needed.
Venue Scout — a CrewAI agent that searches for event venues by city and guest capacity.
Open main.py. The first part is a standard CrewAI agent — a tool and an agent:
from crewai import Agent
from crewai.tools import tool

@tool("Search venues")
def search_venues(city: str, capacity: int) -> str:
    """Search for event venues in a city with given capacity."""
    return (
        f"Found 3 venues in {city} for {capacity} guests:\n"
        f"1. Grand Ballroom - $5,000/day, capacity {capacity+20}\n"
        f"2. Riverside Conference Center - $3,500/day, capacity {capacity+10}\n"
        f"3. Downtown Loft Space - $2,000/day, capacity {capacity}"
    )

agent = Agent(
    role="Venue Scout",
    goal="Search for event venues by city and guest capacity.",
    backstory="You are an expert venue finder with knowledge of event spaces.",
    tools=[search_venues],
    llm="gpt-4o-mini",
)
This is a standard CrewAI agent. The following lines are all you need to make it durable:
from diagrid.agent.crewai import DaprWorkflowAgentRunner
from diagrid.agent.core.state import DaprStateStore
runner = DaprWorkflowAgentRunner(
    name="venue-scout",
    agent=agent,
    state_store=DaprStateStore(store_name="agent-memory"),
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
CrewAI gives you multi-agent crews and tool orchestration but has no built-in durability. The DaprWorkflowAgentRunner wraps your existing agent — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
Schedule Planner — a LangGraph agent that checks venue availability for specific dates and returns available time slots.
Open main.py. The first part is a standard LangGraph agent — a tool, model, and state graph:
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, MessagesState
from langchain_openai import ChatOpenAI
@tool
def check_availability(venue: str, date: str) -> str:
    """Check venue availability for a specific date."""
    return f"{venue} is available on {date}. Time slots: 9AM-1PM, 2PM-6PM, 6PM-11PM."

tools = [check_availability]
model = ChatOpenAI(model="gpt-4.1-2025-04-14").bind_tools(tools)

graph = StateGraph(MessagesState)
graph.add_node("agent", call_model)
graph.add_node("tools", call_tools)
graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", should_use_tools)
graph.add_edge("tools", "agent")
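The helpers wired into the graph (call_model, call_tools, should_use_tools) are defined further down in main.py. The routing decision follows the standard tool-calling loop; here is a rough, dependency-free sketch of it, where the Message class is a hypothetical stand-in for LangChain's AIMessage:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Message:
    # Minimal stand-in for a LangChain AIMessage
    content: str
    tool_calls: List[dict] = field(default_factory=list)

def should_use_tools(state: dict) -> str:
    """Route to the 'tools' node if the last model reply requested a tool call,
    otherwise end the graph ("__end__" is LangGraph's END sentinel)."""
    last = state["messages"][-1]
    return "tools" if last.tool_calls else "__end__"

# The model asked for a tool -> route to the tools node
state = {"messages": [Message("", tool_calls=[{"name": "check_availability"}])]}
assert should_use_tools(state) == "tools"

# Plain text answer -> finish
state = {"messages": [Message("Grand Ballroom is available.")]}
assert should_use_tools(state) == "__end__"
```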
This is a standard LangGraph agent. The following lines are all you need to make it durable:
from diagrid.agent.langgraph import DaprWorkflowGraphRunner
runner = DaprWorkflowGraphRunner(
    graph=graph.compile(),
    name="schedule-planner",
    role="Schedule Planner",
    goal="Check venue date and time availability.",
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
LangGraph gives you graph-based orchestration with conditional routing but has no built-in durability. The DaprWorkflowGraphRunner wraps your compiled graph — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
Budget Analyst — a Strands agent that calculates event budgets with detailed breakdowns.
Open main.py. The first part is a standard Strands agent — a tool and an agent:
from strands import Agent, tool
from strands.models.openai import OpenAIModel
@tool
def calculate_budget(items: str) -> str:
    """Calculate total budget from a comma-separated list of cost items."""
    return (
        "Budget breakdown:\n"
        "- Venue: $3,500\n- Catering: $2,250\n- Entertainment: $1,500\n"
        "- Decorations: $800\n- Miscellaneous: $500\n"
        "Total: $8,550\nRecommended buffer (15%): $1,283\nGrand Total: $9,833"
    )

agent = Agent(
    model=OpenAIModel(model_id="gpt-4o-mini"),
    tools=[calculate_budget],
    system_prompt="You are a budget analyst specializing in event planning.",
)
This is a standard Strands agent. The following lines are all you need to make it durable:
from diagrid.agent.strands import DaprWorkflowAgentRunner, DaprStateSessionManager
from diagrid.agent.core.state import DaprStateStore
session_manager = DaprStateSessionManager(store_name="agent-memory")
agent = Agent(..., hooks=[session_manager])

runner = DaprWorkflowAgentRunner(
    name="budget-planner",
    agent=agent,
    state_store=DaprStateStore(store_name="agent-memory"),
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
Strands gives you a model-driven agent framework with tool use but has no built-in durability. The DaprWorkflowAgentRunner wraps your existing agent — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
Catering Coordinator — an OpenAI Agents agent that searches for catering options by cuisine type and guest count.
Open main.py. The first part is a standard OpenAI Agents agent — a tool and an agent:
from agents import Agent, function_tool
@function_tool
def search_catering(cuisine: str, guest_count: int) -> str:
    """Search for catering options by cuisine type and guest count."""
    return (
        f"Found catering options for {guest_count} guests ({cuisine}):\n"
        f"1. Elite Catering Co - ${guest_count * 45}/event, full service\n"
        f"2. Farm Fresh Events - ${guest_count * 35}/event, organic menu\n"
        f"3. Quick Bites Catering - ${guest_count * 25}/event, casual buffet"
    )

agent = Agent(
    name="catering-coordinator",
    model="gpt-4o-mini",
    instructions="You are a catering coordinator. When asked to find catering, use the search_catering tool.",
    tools=[search_catering],
)
This is a standard OpenAI Agents agent. The following lines are all you need to make it durable:
from diagrid.agent.openai_agents import DaprWorkflowAgentRunner
from diagrid.agent.core.state import DaprStateStore
runner = DaprWorkflowAgentRunner(
    name="catering-coordinator",
    agent=agent,
    state_store=DaprStateStore(store_name="agent-memory"),
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
The OpenAI Agents SDK gives you function tools and agent handoffs but has no built-in durability. The DaprWorkflowAgentRunner wraps your existing agent — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
Entertainment Planner — a Google ADK agent that finds entertainment options for events.
Open main.py. The first part is a standard Google ADK agent — a tool function and an LLM agent:
from google.adk.agents import LlmAgent
from google.adk.tools import FunctionTool
def find_entertainment(event_type: str) -> str:
    """Find entertainment options for an event type."""
    return (
        f"Entertainment options for {event_type}:\n"
        f"1. Live Jazz Band 'Blue Notes' - $2,500 for 3 hours\n"
        f"2. DJ & Sound System - $1,200 for 4 hours\n"
        f"3. Stand-up Comedian - $1,800 for 1 hour set"
    )

agent = LlmAgent(
    name="entertainment_planner",
    model="gemini-2.0-flash",
    instruction="You are an entertainment planner. Use the find_entertainment tool.",
    tools=[FunctionTool(find_entertainment)],
)
This is a standard Google ADK agent. The following lines are all you need to make it durable:
from diagrid.agent.adk import DaprWorkflowAgentRunner
from diagrid.agent.core.state import DaprStateStore
runner = DaprWorkflowAgentRunner(
    name="entertainment-planner",
    agent=agent,
    state_store=DaprStateStore(store_name="agent-memory"),
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
Google ADK gives you a comprehensive agent development kit with Gemini integration but has no built-in durability. The DaprWorkflowAgentRunner wraps your existing agent — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
Decoration Planner — a Pydantic AI agent that searches for decoration packages by theme and venue size.
Open main.py. The first part is a standard Pydantic AI agent — a tool and an agent:
from pydantic_ai import Agent
def search_decorations(theme: str, venue_size: str) -> str:
    """Search for decoration packages by theme and venue size."""
    return (
        f"Found decoration packages for '{theme}' theme ({venue_size} venue):\n"
        f"1. Elegant Events Decor - Premium {theme} package, full setup & teardown\n"
        f"2. Budget Blooms - Affordable {theme} florals and centerpieces\n"
        f"3. Grand Occasions - Luxury {theme} decor with lighting and draping"
    )

agent = Agent(
    "openai:gpt-4o-mini",
    system_prompt="You are a decoration planner. Use the search_decorations tool.",
    tools=[search_decorations],
)
This is a standard Pydantic AI agent. The following lines are all you need to make it durable:
from diagrid.agent.pydantic_ai import DaprWorkflowAgentRunner
from diagrid.agent.core.state import DaprStateStore
runner = DaprWorkflowAgentRunner(
    name="decoration-planner",
    agent=agent,
    state_store=DaprStateStore(store_name="agent-memory"),
)
runner.serve(port=8888, pubsub_name="agent-pubsub")
Pydantic AI gives you a type-safe agent framework with structured outputs but has no built-in durability. The DaprWorkflowAgentRunner wraps your existing agent — no code changes needed — and Catalyst adds automatic failure detection, crash recovery, and multi-instance coordination.
4. Configure API Key
This quickstart uses OpenAI as the LLM provider (or Google Gemini for ADK). Catalyst is LLM-agnostic — you're free to use any provider supported by your chosen framework.
- macOS/Linux
- Windows
export OPENAI_API_KEY="your-openai-api-key"
If you selected Google ADK, set the Google API key instead:
export GOOGLE_API_KEY="your-google-api-key"
$env:OPENAI_API_KEY="your-openai-api-key"
If you selected Google ADK, set the Google API key instead:
$env:GOOGLE_API_KEY="your-google-api-key"
5. Install Dependencies
uv venv
uv pip install -r requirements.txt
6. Run with Catalyst Cloud
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
diagrid dev run -f dev-python-durable-agent.yaml --project durable-agent-qs --approve
diagrid dev run -f dev-python-crewai.yaml --project crewai-agent-qs --approve
diagrid dev run -f dev-python-langgraph.yaml --project langgraph-agent-qs --approve
diagrid dev run -f dev-python-strands.yaml --project strands-agent-qs --approve
diagrid dev run -f dev-python-openai.yaml --project openai-agent-qs --approve
diagrid dev run -f dev-python-adk.yaml --project adk-agent-qs --approve
diagrid dev run -f dev-python-pydantic-ai.yaml --project pydantic-agent-qs --approve
diagrid dev run runs your code locally and connects it to the Catalyst Cloud workflow engine. Your agent code never leaves your machine — only workflow state is stored in Catalyst.
Wait for the log output indicating the runner is ready before proceeding.
7. Interact with the Agent
Open a new terminal and trigger the agent:
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
curl -X POST http://localhost:8006/run \
-H "Content-Type: application/json" \
-d '{"task": "Send invitations to 100 guests for a corporate networking event"}'
Expected output:
== APP == Invitations sent: 70 via email, 30 via physical mail
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Find venues in San Francisco for 200 guests"}'
Expected output:
== APP == Found 3 venues in San Francisco for 200 guests:
== APP == 1. Grand Ballroom - $5,000/day, capacity 220
== APP == 2. Riverside Conference Center - $3,500/day, capacity 210
== APP == 3. Downtown Loft Space - $2,000/day, capacity 200
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Check availability of Grand Ballroom on March 15"}'
Expected output:
== APP == Grand Ballroom is available on March 15. Time slots: 9AM-1PM, 2PM-6PM, 6PM-11PM.
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Calculate a budget for a corporate conference"}'
Expected output:
== APP == Budget breakdown:
== APP == - Venue: $3,500
== APP == - Catering: $2,250
== APP == - Entertainment: $1,500
== APP == - Decorations: $800
== APP == - Miscellaneous: $500
== APP == Total: $8,550
== APP == Recommended buffer (15%): $1,283
== APP == Grand Total: $9,833
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Find Italian catering for 150 guests"}'
Expected output:
== APP == Found catering options for 150 guests (Italian):
== APP == 1. Elite Catering Co - $6,750/event, full service
== APP == 2. Farm Fresh Events - $5,250/event, organic menu
== APP == 3. Quick Bites Catering - $3,750/event, casual buffet
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Find entertainment for a corporate gala"}'
Expected output:
== APP == Entertainment options for corporate gala:
== APP == 1. Live Jazz Band 'Blue Notes' - $2,500 for 3 hours
== APP == 2. DJ & Sound System - $1,200 for 4 hours
== APP == 3. Stand-up Comedian - $1,800 for 1 hour set
curl -X POST http://localhost:8888/run \
-H "Content-Type: application/json" \
-d '{"task": "Find decorations with an elegant theme for a large venue"}'
Expected output:
== APP == Found decoration packages for 'elegant' theme (large venue):
== APP == 1. Elegant Events Decor - Premium elegant package, full setup & teardown
== APP == 2. Budget Blooms - Affordable elegant florals and centerpieces
== APP == 3. Grand Occasions - Luxury elegant decor with lighting and draping
8. Crash Recovery
Stop the running application with Ctrl+C.
The quickstart repository includes a crash_test.py file that demonstrates crash recovery. It defines a 3-step pipeline where step 2 crashes with os._exit(1). After the crash, you comment out the crash line and restart — the workflow resumes from step 2 without re-executing step 1.
Remember: your code runs locally throughout this test. The Catalyst Cloud workflow engine — not your machine — tracks which steps completed and stores their results. That's what makes recovery possible even after a full process crash.
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
Crash recovery is built into Dapr Agents natively — the DurableAgent class automatically persists each tool execution as a workflow activity. If the process crashes, it resumes from the last saved state. See the Durable Agent Quickstart for a detailed walkthrough.
CrewAI does not natively offer crash recovery. With Catalyst, each completed tool call is persisted — so on restart the workflow resumes exactly where it left off.
crash_test.py defines 3 tools:
- step_one_search — searches for venues (completes successfully)
- step_two_compare — compares venues (crashes with os._exit(1))
- step_three_confirm — confirms the booking
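The shape of crash_test.py can be sketched without the CrewAI decorators. The SIMULATE_CRASH environment-variable guard below is an addition of this sketch; the actual quickstart file hard-codes the os._exit(1) call, which you comment out by hand before the second run:

```python
import os

def step_one_search(city: str) -> str:
    """Tool 1: completes successfully on every run."""
    return f"Found 3 candidate venues in {city}"

def step_two_compare(venues: str) -> str:
    """Tool 2: simulates a hard crash mid-pipeline."""
    if os.environ.get("SIMULATE_CRASH") == "1":
        # Stand-in for the quickstart's hard-coded crash line.
        # os._exit kills the process instantly: no exception handlers, no cleanup.
        os._exit(1)
    return "Grand Ballroom is the best option"

def step_three_confirm(venue: str) -> str:
    """Tool 3: only reached once the crash line is disabled."""
    return f"Booking confirmed: {venue}"

result = step_three_confirm(step_two_compare(step_one_search("Austin")))
```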
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project crewai-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"prompt": "Find a venue in Austin for a company gala"}'
You'll see tool 1 complete, then the process crashes at tool 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project crewai-agent-qs --approve
The workflow resumes from tool 2 — tool 1 is not re-executed:
== APP == >>> TOOL 2: Comparing venues...
== APP == >>> TOOL 3: Confirming booking...
LangGraph does not natively offer crash recovery. With Catalyst, each completed graph node is persisted — so on restart the pipeline resumes exactly where it left off.
crash_test.py defines a 3-node graph:
- check_venues — checks venue availability (completes successfully)
- compare_options — compares options (crashes with os._exit(1))
- confirm_booking — confirms the booking
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project langgraph-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"topic": "company gala on March 15"}'
You'll see step 1 complete, then the process crashes at step 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project langgraph-agent-qs --approve
The workflow resumes from step 2 — step 1 is not re-executed:
== APP == >>> STEP 2: Comparing venue options...
== APP == >>> STEP 2 COMPLETE: Grand Ballroom (6PM-11PM) is the best option for 200 guests
== APP == >>> STEP 3: Confirming booking...
== APP == >>> STEP 3 COMPLETE: Booking confirmed: Grand Ballroom, March 15, 6PM-11PM
Strands does not natively offer crash recovery. With Catalyst, each completed tool call is persisted — so on restart the workflow resumes exactly where it left off.
crash_test.py defines 3 tools:
- step_one_calculate — calculates initial budget (completes successfully)
- step_two_analyze — analyzes costs (crashes with os._exit(1))
- step_three_finalize — finalizes the budget report
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project strands-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"prompt": "Calculate a budget for a corporate retreat with venue, catering, and entertainment"}'
You'll see tool 1 complete, then the process crashes at tool 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project strands-agent-qs --approve
The workflow resumes from tool 2 — tool 1 is not re-executed:
== APP == >>> TOOL 2: Analyzing costs...
== APP == >>> TOOL 3: Finalizing budget...
The OpenAI Agents SDK does not natively offer crash recovery. With Catalyst, each completed tool call is persisted — so on restart the workflow resumes exactly where it left off.
crash_test.py defines 3 tools:
- step_one_search — searches for catering options (completes successfully)
- step_two_compare — compares options (crashes with os._exit(1))
- step_three_confirm — confirms the booking
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project openai-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"prompt": "Find catering for a corporate gala"}'
You'll see tool 1 complete, then the process crashes at tool 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project openai-agent-qs --approve
The workflow resumes from tool 2 — tool 1 is not re-executed:
== APP == >>> TOOL 2: Comparing options...
== APP == >>> TOOL 3: Confirming booking...
Google ADK does not natively offer crash recovery. With Catalyst, each completed tool call is persisted — so on restart the workflow resumes exactly where it left off.
crash_test.py defines 3 tools:
- step_one_find — finds entertainment options (completes successfully)
- step_two_compare — compares options (crashes with os._exit(1))
- step_three_confirm — confirms the booking
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project adk-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"prompt": "Find entertainment for a corporate holiday party"}'
You'll see tool 1 complete, then the process crashes at tool 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project adk-agent-qs --approve
The workflow resumes from tool 2 — tool 1 is not re-executed:
== APP == >>> TOOL 2: Comparing options...
== APP == >>> TOOL 3: Confirming booking...
Pydantic AI does not natively offer crash recovery. With Catalyst, each completed tool call is persisted — so on restart the workflow resumes exactly where it left off.
crash_test.py defines 3 tools:
- step_one_search — searches for decoration packages (completes successfully)
- step_two_compare — compares packages (crashes with os._exit(1))
- step_three_confirm — confirms the selection
First run — trigger and crash:
diagrid dev run -f dev-crash-test.yaml --project pydantic-agent-qs --approve
Wait for Runner started — ready to accept requests, then in a new terminal:
curl -X POST http://localhost:8001/run \
-H "Content-Type: application/json" \
-d '{"prompt": "Find decorations for a garden wedding theme"}'
You'll see tool 1 complete, then the process crashes at tool 2.
Fix and resume:
Open crash_test.py and comment out the crash line:
# os._exit(1) # 💥 comment out this line before the second run
Restart:
diagrid dev run -f dev-crash-test.yaml --project pydantic-agent-qs --approve
The workflow resumes from tool 2 — tool 1 is not re-executed:
== APP == >>> TOOL 2: Comparing packages...
== APP == >>> TOOL 3: Confirming selection...
You do not need to curl again — the existing workflow resumes automatically when your local process reconnects to Catalyst. Because workflow state is stored remotely in Catalyst (not in your process), the engine replays saved results instead of re-executing completed steps.
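The replay behavior can be illustrated with a toy durable executor. This is not Catalyst's actual engine, just a minimal sketch of the idea: each completed step's result is persisted (here a dict stands in for the remote workflow store), so a re-run returns saved results instead of re-executing:

```python
executions = []       # records which steps actually ran
completed: dict = {}  # stand-in for workflow state stored remotely in Catalyst

def durable_step(name, fn):
    """Execute fn once; on replay, return the persisted result instead of re-running."""
    if name in completed:
        return completed[name]  # replay: saved result, no re-execution
    executions.append(name)
    completed[name] = fn()      # persist the result before moving on
    return completed[name]

def pipeline():
    a = durable_step("step_one", lambda: "venues found")
    b = durable_step("step_two", lambda: f"best of: {a}")
    return durable_step("step_three", lambda: f"confirmed: {b}")

pipeline()  # first run: all three steps execute
pipeline()  # simulated restart: every step replays from saved state
# executions == ["step_one", "step_two", "step_three"] -- nothing ran twice
```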
9. View in the Catalyst Web Console
Open the Catalyst Cloud web console and navigate to the Workflows section. Select the workflow instance to inspect the full execution trace, including tool calls and state persistence.
10. Clean Up
Stop the running application with Ctrl+C, then delete the Catalyst project:
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
diagrid project delete durable-agent-qs
diagrid project delete crewai-agent-qs
diagrid project delete langgraph-agent-qs
diagrid project delete strands-agent-qs
diagrid project delete openai-agent-qs
diagrid project delete adk-agent-qs
diagrid project delete pydantic-agent-qs
Summary
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
In this quickstart, you:
- Built a Dapr Agents durable agent with structured tool schemas and Dapr-native LLM configuration
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard CrewAI agent with DaprWorkflowAgentRunner for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard LangGraph graph with DaprWorkflowGraphRunner for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard Strands agent with DaprWorkflowAgentRunner and DaprStateSessionManager for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard OpenAI Agents agent with DaprWorkflowAgentRunner for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard Google ADK agent with DaprWorkflowAgentRunner for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
In this quickstart, you:
- Wrapped a standard Pydantic AI agent with DaprWorkflowAgentRunner for durable execution
- Ran it locally connected to Catalyst Cloud for state persistence and crash recovery
- Triggered the agent via REST API and inspected execution in the Catalyst console
Next Steps
- Dapr Agents
- CrewAI
- LangGraph
- Strands
- OpenAI Agents
- Google ADK
- Pydantic AI
- Explore the Durable Agent Quickstart for a richer example with parallel tool execution
- Try the Multi-Agent Quickstart to orchestrate multiple cooperating agents
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with CrewAI + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with LangGraph + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with Strands + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with OpenAI Agents + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with Google ADK + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes
- Learn how to build durable workflows with Pydantic AI + Dapr
- Set up access policies to secure your agent
- Learn how to deploy AI agents to Kubernetes