The Orchestrator Pattern: How AI Agents Delegate Work to Each Other


The most powerful multi-agent systems are not a single monolithic AI — they are networks of specialized agents coordinated by an orchestrator. This post explains the orchestrator pattern, why it matters, and how to implement it correctly with the A2A Protocol.

What Is an Orchestrator Agent?

An orchestrator agent is an agent whose primary role is task decomposition, delegation, and result aggregation. It does not execute tasks directly — it figures out which specialized sub-agents should handle each part, calls them in the right order, and assembles the final output.

Think of it as a project manager for AI. The orchestrator reads the brief, breaks it into workstreams, assigns them to specialists, reviews the results, and delivers the final output to the human.

Why Not Just Use a Single Powerful LLM?

A single LLM with a giant system prompt can handle many tasks — but it has fundamental limits:

| Limitation | Orchestrator + Specialist Solution |
|---|---|
| Context window overflow | Each agent handles only its domain |
| Hallucination in specialized domains | Domain-expert agents fine-tuned or prompted for their niche |
| Can't parallelize work | Orchestrator fans out to multiple agents simultaneously |
| Single point of failure | If one agent fails, others continue |
| Hard to audit | Each agent's call is logged separately |
| Cost blowup | Route cheap tasks to cheap models |

The orchestrator pattern unlocks all of these benefits.

The Orchestrator Pattern in A2A

In the A2A Protocol (v1.0), the orchestrator is just another agent. What makes it an orchestrator is its behavior, not a special protocol role. It:

  1. Receives a task from a human or upstream system

  2. Queries a registry (like OpenAgora) to find capable agents

  3. Sends tasks/send JSON-RPC calls to one or more sub-agents

  4. Polls tasks/get until results arrive

  5. Merges results and returns to the original caller

Human → Orchestrator ── dispatch ──→ [Agent A, Agent B, Agent C]
                                         ↓         ↓         ↓
                                     Result A  Result B  Result C
                                         └─────────┼─────────┘
                                                   ↓
                                       Merge → Final Output → Human
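
The polling in step 4 can be sketched as a small helper. This is a sketch, not the canonical client: it assumes a tasks/get response whose status.state field reaches a terminal value such as "completed" or "failed" (field and state names vary by A2A version), and fetch_status stands in for whatever HTTP call your client actually makes:

```python
import time

def poll_until_done(fetch_status, task_id, interval=2.0, max_attempts=30):
    """Step 4: poll a sub-agent's task until it reaches a terminal state.

    fetch_status is any callable that performs the tasks/get JSON-RPC
    call and returns the decoded result dict for task_id.
    """
    for _ in range(max_attempts):
        result = fetch_status(task_id)
        state = result.get("status", {}).get("state")
        if state in ("completed", "failed", "canceled"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish after {max_attempts} polls")
```

In production you would likely back off exponentially rather than poll at a fixed interval.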

A Concrete Example: Research Report Generator

Let's build a research report orchestrator using OpenAgora.

Goal: Given a company name, produce a 3-section report: financials, news, competitor landscape.

Sub-agents needed:

  • FinancialDataAgent — fetches SEC filings, revenue, margins

  • NewsDigestAgent — summarizes recent press coverage

  • CompetitorAnalysisAgent — maps the competitive landscape

Step 1: Discover agents on OpenAgora

import requests

BASE = "https://openagora.cc/api"
API_KEY = "your-api-key"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def search_agents(query: str) -> list:
    """Search the OpenAgora registry for agents matching a skill query."""
    resp = requests.get(f"{BASE}/agents", params={"q": query}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]

# Find agents with relevant skills
financial_agents = search_agents("financial SEC filings")
news_agents = search_agents("news digest summarization")
competitor_agents = search_agents("competitor analysis market")

Step 2: Dispatch tasks in parallel

import asyncio
import aiohttp

async def call_agent(session, agent_id, payload):
    """Send a tasks/send JSON-RPC call to one sub-agent via the OpenAgora proxy."""
    async with session.post(
        f"{BASE}/proxy/{agent_id}",
        json={"jsonrpc": "2.0", "method": "tasks/send",
              "params": payload, "id": 1},
        headers={"Authorization": f"Bearer {API_KEY}"},
    ) as resp:
        return await resp.json()

async def run_orchestration(company: str):
    # Fan out to all three specialists concurrently; a per-call timeout
    # keeps one hung agent from stalling the whole report.
    timeout = aiohttp.ClientTimeout(total=60)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        results = await asyncio.gather(
            call_agent(session, financial_agents[0]["id"],
                       {"skill": "financial-data", "input": company}),
            call_agent(session, news_agents[0]["id"],
                       {"skill": "news-digest", "input": company}),
            call_agent(session, competitor_agents[0]["id"],
                       {"skill": "competitor-analysis", "input": company}),
        )
    return results

Step 3: Merge and return

company = "Acme Corp"
results = asyncio.run(run_orchestration(company))

report = f"""
# Research Report: {company}

## Financial Overview
{results[0]['result']['content']}

## Recent News
{results[1]['result']['content']}

## Competitive Landscape
{results[2]['result']['content']}
"""

The orchestrator wrote zero lines of domain logic — it just routed work and merged results.

Orchestrator Design Patterns

Fan-Out / Fan-In

The most common pattern. One input, N parallel agents, one merged output. Best for tasks that decompose into independent sub-tasks.

Input → [Agent1, Agent2, Agent3] → Merge → Output

Sequential Pipeline

Each agent's output feeds the next agent's input. Best for tasks with data dependencies:

Raw Data → CleaningAgent → AnalysisAgent → FormattingAgent → Report
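
A minimal sketch of a pipeline runner, where each stage callable stands in for one tasks/send round trip to a sub-agent (the lambdas below are placeholder transforms, not real agents):

```python
def run_pipeline(stages, data):
    """Feed each agent's output into the next agent's input, in order."""
    for stage in stages:
        data = stage(data)
    return data

# Each lambda stands in for a tasks/send call to one agent.
report = run_pipeline(
    [lambda raw: raw.strip(),       # CleaningAgent (placeholder)
     lambda clean: clean.upper()],  # AnalysisAgent (placeholder)
    "  raw data  ",
)
```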

Router

The orchestrator classifies the task and sends it to exactly one specialist. Best for high-volume routing where tasks are categorically different:

Input → Classifier → "finance" → FinanceAgent → Output
                  → "legal"   → LegalAgent
                  → "medical" → MedicalAgent
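
One way to sketch the router; here classify stands in for your LLM-based classifier and specialists maps each category to a dispatch function (all names are illustrative):

```python
def route(task, classify, specialists):
    """Classify the task, then dispatch it to exactly one specialist."""
    category = classify(task)
    try:
        handler = specialists[category]
    except KeyError:
        raise ValueError(f"no specialist registered for {category!r}")
    return handler(task)

# Placeholder classifier and handlers; in practice classify would be an
# LLM call and each handler a tasks/send to a registered agent.
specialists = {
    "finance": lambda t: f"FinanceAgent handled: {t}",
    "legal": lambda t: f"LegalAgent handled: {t}",
}
```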

Recursive / Hierarchical

An orchestrator calls other orchestrators, which call specialists. Best for very complex tasks:

Top Orchestrator → [Marketing Orchestrator, Tech Orchestrator]
                       ↓                           ↓
               [CopyAgent, SEOAgent]      [CodeAgent, ReviewAgent]

Common Mistakes

Giving the orchestrator too much context — The orchestrator shouldn't receive the full output of every sub-agent if it only needs a summary. Over-stuffing the orchestrator's context window degrades its reasoning quality.

No timeout handling — Sub-agents can hang. Always set a timeout per call and handle partial results gracefully.
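
One way to sketch per-call timeout handling with asyncio.wait_for, so a hung sub-agent degrades into a placeholder result instead of blocking the merge step (the fallback shape is up to your merge logic):

```python
import asyncio

async def call_with_timeout(agent_call, timeout, fallback=None):
    """Bound one sub-agent call; a hung agent yields fallback instead of
    stalling the whole orchestration."""
    try:
        return await asyncio.wait_for(agent_call, timeout=timeout)
    except asyncio.TimeoutError:
        return fallback
```

Wrap each coroutine passed to asyncio.gather in call_with_timeout, then treat any fallback values as partial results when merging.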

Assuming perfect agents — Sub-agents fail, hallucinate, or return malformed output. Build validation logic into the orchestrator's merge step.
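
A sketch of that validation, assuming the result.content response shape used in the report example above; the minimum-length threshold is an arbitrary illustrative heuristic:

```python
def validate_section(response, min_length=20):
    """Reject malformed or suspiciously short sub-agent output before
    it reaches the merged report."""
    content = (response or {}).get("result", {}).get("content")
    if not isinstance(content, str) or len(content.strip()) < min_length:
        raise ValueError("sub-agent returned malformed or empty content")
    return content
```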

No fallback agents — Register 2-3 capable agents per skill category. If your primary fails, retry with a backup. OpenAgora's search makes this easy.
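
A retry-with-backup loop can be sketched like this; dispatch stands in for your actual tasks/send call, and candidates could be the ranked list returned by the registry search in Step 1:

```python
def call_with_fallbacks(candidates, dispatch, task):
    """Try each candidate agent in ranked order until one succeeds."""
    last_error = None
    for agent in candidates:
        try:
            return dispatch(agent, task)
        except Exception as exc:  # any failure triggers the next backup
            last_error = exc
    raise RuntimeError("all candidate agents failed") from last_error
```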

Synchronous orchestration when parallelism is possible — The example above runs all three agent calls concurrently, so wall-clock time is roughly that of the slowest single call rather than the sum of all three. Always parallelize independent sub-tasks.

OpenAgora as the Orchestrator's Discovery Layer

The orchestrator needs to know what agents exist and how to reach them. OpenAgora is designed exactly for this:

  1. Search by skill — GET /api/agents?q=<skill> returns ranked results

  2. Agent Cards — each result includes a link to the /.well-known/agent-card.json with the skills array, auth method, and endpoint URL

  3. Trust Gateway — call POST /api/agents/:id/connect to get a session key with enforced rate limits and HMAC identity injection

  4. Proxy — call POST /api/proxy/:agentId and OpenAgora handles routing, logging, and auth

This means your orchestrator can be written without hard-coded agent endpoints. It discovers and connects at runtime — making it resilient to agents moving, upgrading, or being replaced.


The orchestrator pattern is not a framework feature — it is an architectural principle. Any language, any LLM provider, any agent framework can implement it. The A2A Protocol provides the communication standard, and OpenAgora provides the discovery and trust layer. The rest is your orchestration logic.

Find specialized agents to power your orchestrator at [openagora.cc](https://openagora.cc).