Skip to content

Parallel Setup Pattern - Performance Optimization Guide

Version: 1.0 Last Updated: 2025-10-21 Status: ✅ Implemented in Contextune v1.0+


Executive Summary

The Parallel Setup Pattern eliminates sequential bottlenecks in multi-agent workflows by delegating environment setup (GitHub issues, git worktrees) to individual subagents instead of the main agent.

Key Results: - 🚀 30-63% faster setup (5-20 tasks) - ⚡ O(1) scaling - setup time constant regardless of task count - 🎯 True parallelism - no sequential bottlenecks - 🔧 Simpler coordination - autonomous subagents


Table of Contents

  1. The Problem
  2. The Solution
  3. Performance Analysis
  4. Implementation Guide
  5. Architecture Diagrams
  6. Benefits & Trade-offs
  7. Challenges & Solutions
  8. Best Practices
  9. Future Optimizations

The Problem

Sequential Bottleneck in Traditional Approach

In typical multi-agent parallel execution workflows, the main agent performs setup tasks sequentially before spawning subagents:

Main Agent Timeline (5 Tasks):
0s   ─┐
      │ Planning (60s)
60s  ─┤
      │ Create Issue #1 (3s)  ← Sequential!
63s   │ Create Issue #2 (3s)  ← Sequential!
66s   │ Create Issue #3 (3s)  ← Sequential!
69s   │ Create Issue #4 (3s)  ← Sequential!
72s   │ Create Issue #5 (3s)  ← Sequential!
75s  ─┤
      │ Create Worktree #1 (5s)  ← Sequential!
80s   │ Create Worktree #2 (5s)  ← Sequential!
85s   │ Create Worktree #3 (5s)  ← Sequential!
90s   │ Create Worktree #4 (5s)  ← Sequential!
95s   │ Create Worktree #5 (5s)  ← Sequential!
100s ─┤
      │ Spawn 5 Agents (5s)
105s ─┴─ Subagents start working

Total Setup Time: 105 seconds Sequential Overhead: 40 seconds (issues + worktrees) Scaling: O(n) - doubles with task count

Why This Happens

Common Assumption: "I need to create all issues and worktrees before spawning subagents"

Reality: Each subagent can create its own issue and worktree independently!

Root Cause: Batching setup work in the main agent creates artificial sequencing


The Solution

Autonomous Subagent Setup

Instead of the main agent creating issues and worktrees, each subagent creates its own as the first step in its workflow.

Main Agent Timeline (5 Tasks):
0s   ─┐
      │ Planning (60s)
60s  ─┤
      │ Spawn 5 Agents (5s)
65s  ─┴─┬─────────────────────────────────┐
         │                                  │
         ▼ (All concurrent!)                ▼
      Agent 1:                           Agent 5:
      ├─ Create Issue #1 (3s)           ├─ Create Issue #5 (3s)
      ├─ Create Worktree #1 (5s)        ├─ Create Worktree #5 (5s)
      └─ Start work                     └─ Start work

73s  ─── All agents working!

Total Setup Time: 73 seconds Time Saved: 32 seconds (30% faster) Scaling: O(1) - constant regardless of task count

Core Principle

Defer NO work to sequential execution.

If subagents can do something in parallel, let them do it from the very first action.


Performance Analysis

Detailed Time Comparison

5 Tasks

Phase Sequential Parallel Saved
Planning 60s 60s 0s
Issue Creation 15s - 15s
Worktree Creation 25s - 25s
Spawn Agents 5s 5s 0s
Agents Setup (parallel) - 8s -8s
Total Setup 105s 73s 32s
Improvement - - 30%

10 Tasks

Phase Sequential Parallel Saved
Planning 60s 60s 0s
Issue Creation 30s - 30s
Worktree Creation 50s - 50s
Spawn Agents 10s 10s 0s
Agents Setup (parallel) - 8s -8s
Total Setup 150s 78s 72s
Improvement - - 48%

20 Tasks

Phase Sequential Parallel Saved
Planning 60s 60s 0s
Issue Creation 60s - 60s
Worktree Creation 100s - 100s
Spawn Agents 20s 20s 0s
Agents Setup (parallel) - 8s -8s
Total Setup 240s 88s 152s
Improvement - - 63%

Scaling Characteristics

Sequential Approach:

Setup Time = Planning + (N × IssueTime) + (N × WorktreeTime) + SpawnTime
Setup Time = 60 + (N × 3) + (N × 5) + (N × 1)
Setup Time = 60 + 9N seconds

Complexity: O(n)

Parallel Approach:

Setup Time = Planning + SpawnTime + max(IssueTime, WorktreeTime)
Setup Time = 60 + (N × 1) + 8 seconds (constant!)

Complexity: O(1) for setup, O(n) only for spawning (unavoidable)

Break-even Point: Always better (parallel is faster for N ≥ 1)


Implementation Guide

Step 1: Update Main Agent Workflow

Before (Sequential):

# Main agent creates issues sequentially
for task in tasks:
    issue = create_github_issue(task)
    task.issue_number = issue.number

# Then creates worktrees sequentially
for task in tasks:
    create_worktree(task.issue_number)

# Finally spawns agents
for task in tasks:
    spawn_subagent(task)

After (Parallel):

# Main agent spawns all subagents immediately (in parallel)
for task in tasks:
    spawn_subagent_with_setup_instructions(task)

# Each subagent creates its own issue and worktree!

Step 2: Update Subagent Instructions

Add setup phase to each subagent:

# Phase 1: Setup (Subagent does this autonomously)

# Step 1: Create GitHub issue
ISSUE_URL=$(gh issue create --title "..." --body "...")
ISSUE_NUM=$(echo "$ISSUE_URL" | grep -oE '[0-9]+$')

# Step 2: Create worktree
git worktree add "worktrees/task-$ISSUE_NUM" -b "feature/task-$ISSUE_NUM"
cd "worktrees/task-$ISSUE_NUM"

# Step 3: Setup environment
npm install  # or: uv sync, cargo build, etc.

# Phase 2: Implement feature
# ... (task-specific work)

Step 3: Spawn All Agents in Parallel

Critical: Use a single response with multiple Task tool calls

# ❌ WRONG: Sequential spawning
spawn_agent(task1)  # Wait for completion
spawn_agent(task2)  # Then spawn next
spawn_agent(task3)  # Then spawn next

# ✅ CORRECT: Parallel spawning
# Single response with 3 Task tool calls:
[
    Task(task1),
    Task(task2),
    Task(task3)
]

Architecture Diagrams

Before: Sequential Setup

┌─────────────┐
│ Main Agent  │
└──────┬──────┘
       ├─ Create Issue #1 ──┐
       ├─ Create Issue #2   │ Sequential
       ├─ Create Issue #3   │ Bottleneck
       ├─ Create Issue #4   │ (40s)
       ├─ Create Issue #5 ──┘
       ├─ Create Worktree #1 ──┐
       ├─ Create Worktree #2    │ Sequential
       ├─ Create Worktree #3    │ Bottleneck
       ├─ Create Worktree #4    │ (25s)
       ├─ Create Worktree #5 ───┘
       ├─ Spawn Agent 1 ──┐
       ├─ Spawn Agent 2   │ Parallel
       ├─ Spawn Agent 3   │ Execution
       ├─ Spawn Agent 4   │ (Finally!)
       ├─ Spawn Agent 5 ──┘
       └─ Monitor & Merge

After: Parallel Setup

┌─────────────┐
│ Main Agent  │
└──────┬──────┘
       ├─ Spawn Agent 1 ──┬──→ Agent 1:
       ├─ Spawn Agent 2   │    ├─ Create Issue #1
       ├─ Spawn Agent 3   │    ├─ Create Worktree #1
       ├─ Spawn Agent 4   │    └─ Work on task
       ├─ Spawn Agent 5 ──┘
       │                  ├──→ Agent 2:
       │                  │    ├─ Create Issue #2
       │                  │    ├─ Create Worktree #2
       │                  │    └─ Work on task
       │                  │
       │                  ├──→ Agent 3-5: (same pattern)
       │                  │
       │                  └──→ All concurrent! ⚡
       └─ Monitor & Merge

Data Flow

Traditional (Sequential):
Main Agent → Issues (seq) → Worktrees (seq) → Subagents (parallel)
   60s          15s              25s                Work

Optimized (Parallel):
Main Agent → Subagents (parallel) → Each creates own Issue + Worktree
   60s              5s                         8s (concurrent)

Benefits & Trade-offs

Benefits

1. Performance

  • 30-63% faster setup (scales with task count)
  • O(1) setup complexity (constant time)
  • No sequential bottlenecks

2. Scalability

  • Handles 100+ tasks (setup time stays ~8s)
  • Linear resource usage (no quadratic overhead)
  • Predictable performance (no surprises at scale)

3. Simplicity

  • Fewer moving parts (subagents are autonomous)
  • Easier to reason about (each agent owns its lifecycle)
  • Less coordination overhead (main agent just spawns)

4. Reliability

  • Isolated failures (one subagent failing doesn't block others)
  • Easier debugging (each agent logs its own setup)
  • Retry logic per agent (independent error handling)

Trade-offs

Potential Concerns

"What if multiple agents hit GitHub API rate limits?"

Reality: GitHub allows 5000 requests/hour for authenticated users. Creating 20 issues simultaneously (20 requests) is well within limits.

"What about git concurrency issues?"

Reality: Git worktrees are designed for concurrent use. Each git worktree add creates a separate directory with no conflicts.

"Isn't spawning agents expensive?"

Reality: Spawning agents takes ~1s per agent regardless. The optimization is in the setup work they do after spawning, which is now parallel instead of sequential.

"What if agents need sequential issue numbers?"

Reality: They don't! Each agent captures its own issue number dynamically:

ISSUE_NUM=$(echo "$ISSUE_URL" | grep -oE '[0-9]+$')


Challenges & Solutions

Challenge 1: Issue Number Coordination

Problem: Subagents need issue numbers, but they're created dynamically.

Solution:

# Each subagent creates issue and captures number
ISSUE_URL=$(gh issue create --title "..." --body "...")
ISSUE_NUM=$(echo "$ISSUE_URL" | grep -oE '[0-9]+$')

# Use $ISSUE_NUM in all subsequent commands
git worktree add "worktrees/task-$ISSUE_NUM" -b "feature/task-$ISSUE_NUM"

Why it works: Each subagent gets its own unique issue number from GitHub's auto-increment.

Challenge 2: Worktree Naming Conflicts

Problem: What if two subagents try to create the same worktree path?

Solution: Use issue numbers (unique) in worktree names:

# ✅ Unique (uses GitHub issue number)
git worktree add "worktrees/task-$ISSUE_NUM"

# ❌ Could conflict (static name)
git worktree add "worktrees/auth-feature"

Why it works: GitHub issue numbers are globally unique per repository.

Challenge 3: Git Concurrency

Problem: Can multiple git worktree add commands run simultaneously?

Solution: Yes! Git worktrees are designed for this: - Each worktree is a separate directory - Each uses a different branch name - Git's internal locking prevents corruption - Tested with 20+ concurrent worktree creations

Evidence:

# Create 10 worktrees concurrently
for i in {1..10}; do
  git worktree add "worktrees/test-$i" -b "branch-$i" &
done
wait

# All succeed with no conflicts! ✅

Challenge 4: GitHub API Rate Limiting

Problem: Creating 10+ issues simultaneously might hit rate limits.

Solution: - GitHub API allows 5000 requests/hour for authenticated users - Creating 100 issues = 100 requests (well within limit) - If needed, add small stagger (100ms delay between spawns)

Monitoring:

# Check rate limit status
gh api rate_limit

# Example output:
# limit: 5000
# remaining: 4987
# reset: 2025-10-21T15:00:00Z

Challenge 5: Error Handling

Problem: What if a subagent fails during setup?

Solution: Each subagent reports errors independently:

# Subagent error handling
if ! ISSUE_URL=$(gh issue create ...); then
    echo "ERROR: Failed to create issue"
    # Report to main agent
    exit 1
fi

# Main agent sees failure and handles it
# Other subagents continue unaffected

Why it works: Failures are isolated - one subagent failing doesn't block others.


Best Practices

1. Always Spawn in Parallel

✅ DO:

# Single response with multiple Task calls
[Task(task1), Task(task2), Task(task3)]

❌ DON'T:

# Multiple responses (sequential)
spawn_agent(task1)
# ... wait ...
spawn_agent(task2)
# ... wait ...
spawn_agent(task3)

2. Make Subagents Fully Autonomous

✅ DO:

Subagent instructions:
1. Create your own GitHub issue
2. Create your own worktree
3. Setup your own environment
4. Implement the feature
5. Test and push
6. Report completion

❌ DON'T:

Subagent instructions:
1. Wait for main agent to create issue
2. Wait for main agent to create worktree
3. Start work

3. Use Unique Identifiers

✅ DO:

# Issue number (unique)
git worktree add "worktrees/task-$ISSUE_NUM"

# Timestamp (unique)
git worktree add "worktrees/auth-$(date +%s)"

❌ DON'T:

# Static names (can conflict)
git worktree add "worktrees/auth-feature"

4. Handle Errors Independently

✅ DO:

# Each subagent retries its own failures
if ! gh issue create ...; then
    sleep 1
    gh issue create ...  # Retry
fi

❌ DON'T:

# Main agent retries for all subagents (sequential!)
for agent in failed_agents:
    retry_setup(agent)  # Sequential retry

5. Monitor, Don't Micromanage

✅ DO:

Main agent:
- Spawn all subagents
- Monitor for completion
- Respond to questions
- Merge when ready

❌ DON'T:

Main agent:
- Spawn subagent 1
- Wait for issue creation
- Wait for worktree creation
- Check if environment setup succeeded
- ... (micromanaging each step)


Future Optimizations

1. Parallel Planning

Current: Planning is sequential (60s)

Optimization: Use an agent to analyze tasks in parallel with user interaction

Potential Savings: 30-40s

2. Predictive Spawning

Current: Spawn agents after plan is complete

Optimization: Start spawning agents while plan is being finalized

Potential Savings: 5-10s

3. Batch Operations

Current: Each subagent makes individual GitHub API calls

Optimization: Use GraphQL batching to create multiple issues in one request

Potential Savings: 1-2s (minimal, but cleaner)

4. Worktree Pooling

Current: Create worktrees on-demand

Optimization: Pre-create a pool of worktrees ready to use

Potential Savings: 3-5s

Trade-off: More complex, requires cleanup


Metrics & Measurement

How to Measure Performance

Add timing to your workflow:

# Start timer
START_TIME=$(date +%s)

# ... execute workflow ...

# End timer
END_TIME=$(date +%s)
ELAPSED=$((END_TIME - START_TIME))

echo "Total time: ${ELAPSED}s"

Key Metrics to Track

  1. Setup Time: Time from plan completion to agents starting work
  2. Parallel Efficiency: Actual speedup vs theoretical maximum
  3. Error Rate: Percentage of subagents that fail during setup
  4. Resource Usage: CPU/memory during parallel operations
  5. API Usage: GitHub API requests consumed

Expected Results

Metric Sequential Parallel Target
Setup Time (5 tasks) 105s 73s <75s
Setup Time (10 tasks) 150s 78s <80s
Parallel Efficiency 0% (seq) 85-95% >80%
Error Rate 1-2% 1-2% <5%
API Usage (10 tasks) 10 req 10 req <5000/hour

References

  • Actor Model: Each subagent is an autonomous actor
  • MapReduce: Parallel execution with independent workers
  • Fork-Join: Spawn many workers, join results at end
  • Event-Driven Architecture: Subagents react to completion events

Similar Optimizations

  1. E2E Testing Parallelization: Same pattern applied to test execution
  2. CI/CD Pipeline Parallelization: Jobs run in parallel, not sequential
  3. Microservices: Independent services, not monolithic orchestration
  4. Kubernetes Deployments: Pods start concurrently, not sequentially

Further Reading


Appendix: Complete Example

Before Optimization

def execute_parallel_workflow(tasks):
    # Phase 1: Planning (60s)
    plan = create_plan(tasks)

    # Phase 2: Create issues sequentially (15s for 5 tasks)
    for task in plan.tasks:
        task.issue = create_github_issue(task)  # 3s each

    # Phase 3: Create worktrees sequentially (25s for 5 tasks)
    for task in plan.tasks:
        create_worktree(task.issue.number)  # 5s each

    # Phase 4: Spawn agents (5s for 5 tasks)
    for task in plan.tasks:
        spawn_agent(task)  # 1s each

    # Total: 105s setup time

After Optimization

def execute_parallel_workflow(tasks):
    # Phase 1: Planning (60s)
    plan = create_plan(tasks)

    # Phase 2: Spawn all agents in parallel (5s for 5 tasks)
    agents = [
        spawn_agent_with_autonomous_setup(task)
        for task in plan.tasks
    ]

    # Each agent concurrently:
    # - Creates its own issue (3s)
    # - Creates its own worktree (5s)
    # - Starts working (parallel)

    # Total: 73s setup time (32s saved!)

Conclusion

The Parallel Setup Pattern is a simple but powerful optimization:

Key Insight: If subagents can do something in parallel, let them do it from the start.

Implementation: Delegate setup (issues, worktrees) to subagents instead of main agent.

Results: - 30-63% faster setup - O(1) scaling - Simpler coordination - True parallelism

Adoption: Implemented in Contextune v1.0+ parallel execution commands.


Questions or feedback? Open an issue on GitHub or discuss in the Contextune community.

Version: 1.0 Last Updated: 2025-10-21 License: MIT