41 repositories. 277 specialized agents. Masterplan-driven development with mandatory security review at every step.
A meta-repository orchestrating 41 repos without containing them in its git history. Each repo maintains its own history, agents, and stack-specific conventions.
workspace/
├── CLAUDE.md              # Global AI instructions
├── .claude/agents/        # 5 workspace orchestrators
├── catalog/repos.yml      # Registry of all 41 repos
├── masterplans/YYYY/MM/   # Feature design documents
├── mockups/YYYY/MM/       # Visual mockups
├── docs/security/         # 7 security reference docs
└── repos/                 # 41 repos (gitignored)
    ├── frontend-monorepo/ # Next.js 14 apps
    ├── main-api/          # FastAPI backend
    ├── auth-service/      # JWT authentication
    ├── ops-platform/      # Internal tools
    ├── ai-agent/          # AI agent system
    └── ... 36 more
Contains only orchestration code — instructions, agents, docs, and scripts. The 41 product repos live inside repos/ but are gitignored.
Every repo has its own CLAUDE.md with stack-specific conventions. Workspace-level rules are inherited by all repos automatically.
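One way to picture that inheritance: workspace rules load first, then the repo's own conventions layer on top. A minimal sketch, assuming a simple concatenation model (the function name and merge order are illustrative, not the actual mechanism):

```python
from pathlib import Path

def load_instructions(repo: Path, workspace: Path) -> str:
    """Merge workspace-level rules with a repo's own CLAUDE.md.

    Workspace rules come first so repo-specific conventions
    can refine them. Missing files are simply skipped.
    """
    parts = []
    for root in (workspace, repo):
        claude_md = root / "CLAUDE.md"
        if claude_md.exists():
            parts.append(claude_md.read_text())
    return "\n\n".join(parts)
```

In this model a repo never has to restate global rules; it only adds the stack-specific ones.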
Claude navigates between repos, understands connections (frontend → API → workers → auth), and makes coordinated changes in a single session.
frontend-monorepo (frontend)
├── calls → main-api (main API)
│   ├── uses → auth-service (authentication)
│   ├── triggers → api-worker (async jobs)
│   ├── calls → geo-api (geo services)
│   └── calls → email-service (transactional)
├── calls → websocket-service (real-time)
└── calls → storefront (e-commerce)
277 specialized AI agents in three tiers. Workspace orchestrators coordinate cross-repo work, repo agents handle domain tasks, built-in agents are fallback only.
Cross-repo agents coordinating complex multi-repository tasks
Developers, reviewers, architects, debuggers — specialized per repo
Explore, Plan, general-purpose — only used as fallback
Every feature follows a structured path: planning with parallel research, user approval, then coordinated implementation with mandatory security reviews at every step.
The architect launches parallel sub-agents for code research, industry best practices, and visual mockups.
masterplan-architect
├── codebase-researcher   # parallel, per repo
├── best-practices        # web research
└── mockup-builder        # after text done
        ▼
masterplan.md + mockup.html
After approval, the executor discovers each repo's agents and coordinates dev/review cycles. Every edit triggers incremental security review.
═══ USER APPROVAL ═══
        ▼
masterplan-executor
├── frontend-monorepo/
│   ├── dev-agent ↔ review-agent
│   └── commit → merge
└── main-api/
    ├── dev-agent ↔ review-agent
    └── commit → merge
Our AI agent isn't just a coding tool — it's a long-lived daemon with Slack presence, email access, scheduled tasks, and human-in-the-loop control.
Receives channel messages and DMs. Thread-aware — each thread gets its own persistent session with full context.
OAuth2 Gmail access per user. Polls for new emails, creates threaded conversations, drafts and sends responses.
HTTP API for programmatic access, built-in web chat interface, and Agent-to-Agent protocol for inter-service communication.
Cron-style job scheduling. The agent wakes itself every 5 minutes to check for pending tasks, emails, and approvals.
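Conceptually, each heartbeat tick runs whatever is due and pushes its next-run time forward. A minimal sketch under assumed semantics (task names and the flat 5-minute reschedule are illustrative):

```python
HEARTBEAT_SECONDS = 300  # the agent wakes every 5 minutes

def heartbeat(now: float, tasks: dict[str, float]) -> list[str]:
    """One heartbeat tick: run tasks whose next-run time has
    passed, then reschedule them one interval into the future."""
    ran = [name for name, next_run in tasks.items() if next_run <= now]
    for name in ran:
        tasks[name] = now + HEARTBEAT_SECONDS
    return ran
```

A real scheduler would also parse cron expressions per task; the point here is just the wake-check-reschedule loop.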
Creates pending interactions for decisions that need human input. Pauses autonomously and resumes when the team responds.
Up to 25 concurrent Claude CLI subprocesses. Each worker is OS-isolated with session persistence and native MCP tool support.
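The 25-worker cap maps naturally onto a semaphore-guarded pool. In this sketch a dummy job stands in for spawning the actual OS-isolated CLI subprocess (which would use something like `asyncio.create_subprocess_exec`); the gauge only exists to make the concurrency limit observable:

```python
import asyncio

MAX_WORKERS = 25  # hard cap on concurrent Claude CLI subprocesses

async def run_job(job, semaphore, gauge):
    async with semaphore:
        # Here the real system would spawn an isolated subprocess;
        # we just yield control to simulate work.
        gauge["active"] += 1
        gauge["peak"] = max(gauge["peak"], gauge["active"])
        await asyncio.sleep(0)
        gauge["active"] -= 1

async def run_all(jobs) -> int:
    semaphore = asyncio.Semaphore(MAX_WORKERS)
    gauge = {"active": 0, "peak": 0}
    await asyncio.gather(*(run_job(j, semaphore, gauge) for j in jobs))
    return gauge["peak"]
```

Even with hundreds of queued jobs, no more than `MAX_WORKERS` ever run at once.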
Gateway   (:8000) → Message routing, auth, channel adapters
Workers   (:8001) → Claude CLI subprocess pool (25 concurrent)
Scheduler (:8002) → Cron jobs, heartbeat loop, pending interactions
Admin UI  (:3000) → Real-time stats, conversation browser, controls

# Channels supported:
Slack    → HMAC-SHA256 verified, thread-aware sessions
Email    → Gmail OAuth2, per-user consent, thread binding
HTTP     → Bearer token auth, streaming responses
A2A      → Agent-to-Agent protocol, registry discovery
Web Chat → Built-in SPA with SSE streaming
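The "HMAC-SHA256 verified" step on the Slack channel follows Slack's documented v0 signing scheme: sign `v0:{timestamp}:{body}` with the app's signing secret and constant-time-compare against the `X-Slack-Signature` header. A self-contained sketch:

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: bytes, signature: str,
                           tolerance: int = 300) -> bool:
    """Verify a Slack request per Slack's v0 signing scheme."""
    if abs(time.time() - int(timestamp)) > tolerance:
        return False  # stale timestamp → possible replay attack
    basestring = b"v0:" + timestamp.encode() + b":" + body
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

`hmac.compare_digest` avoids timing side channels, and the timestamp check bounds the replay window.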
The AI remembers across sessions. A three-tier storage system captures decisions, patterns, and full conversation transcripts for any team member to recall.
FTS5 full-text search over patterns, decisions, gotchas, and solutions. Auto-tagged by repo and developer. BM25-ranked recall.
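The FTS5 + BM25 combination is standard SQLite. A minimal sketch with a hypothetical schema (the real Brain's columns and tagging pipeline will differ); FTS5's built-in `rank` column is its BM25 score, so ordering by it yields BM25-ranked recall:

```python
import sqlite3

def build_brain(entries):
    """Tiny in-memory stand-in for the knowledge index."""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE VIRTUAL TABLE brain USING fts5(kind, repo, developer, content)")
    db.executemany("INSERT INTO brain VALUES (?, ?, ?, ?)", entries)
    return db

def recall(db, query, limit=5):
    # ORDER BY rank = ascending BM25 score, best matches first.
    rows = db.execute(
        "SELECT kind, repo, content FROM brain WHERE brain MATCH ? "
        "ORDER BY rank LIMIT ?", (query, limit))
    return rows.fetchall()
```

Because every column is indexed, a query matches patterns, gotchas, and solutions alike, and the stored `repo`/`developer` columns provide the auto-tagging described above.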
Structured metadata per coding session: repos touched, files changed, commits, decisions, blockers, and frustration scoring (0-10).
Full JSONL conversation transcripts archived to S3. Every message, tool call, and result preserved for context restoration.
# During development
devlog_start(working_directory)   # Begin session
        ▼
devlog_update(session_id,
    repos: ["frontend-monorepo"],
    commits: [{hash: "abc12", message: "feat: add filter"}],
    decisions: "Chose Redis for caching",
    learnings: ["Service X requires header Y"],  # auto → Brain
    frustration: 2                               # AI-scored from tone
)
        ▼
# Transcript uploaded to S3 after each update
# AI generates summary + frustration score from transcript

# Later, any team member can:
knowledge_recall("notification patterns")  # Search Brain
devlog_get(session_id)                     # Resume with full context
devlog_summary(days=7)                     # Team activity overview
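The transcript-archival step above amounts to serializing each message, tool call, and result as one JSON object per line. A stdlib-only sketch; the actual S3 upload (e.g. a boto3 `put_object`) and the key layout are out of scope and assumed:

```python
import json
from pathlib import Path

def archive_transcript(session_id: str, messages, out_dir: Path) -> Path:
    """Write a session transcript as JSONL, ready for upload.

    One JSON object per line keeps the file append-only and
    streamable, which is what makes context restoration cheap.
    """
    path = out_dir / f"{session_id}.jsonl"
    with path.open("w") as f:
        for msg in messages:
            f.write(json.dumps(msg) + "\n")
    return path
```

Restoring context later is the inverse: stream the file line by line and `json.loads` each record.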
AI-augmented daily workflows: task checklists delivered to Slack, standup prompts, sprint goal tracking, and a personal AI coach for every team member.
2-week sprints with auto-generation. 3 personal macro objectives per sprint. Kanban board with drag-and-drop. Progress tracked per objective.
Personal productivity coaching via Slack DM. Daily check-ins, OKR tracking, sprint goal recommendations. Configurable work hours and language.
Quarterly objectives with key results. Cascade view linking company goals → team objectives → sprint goals. Health scoring and check-in history.
Model Context Protocol connects Claude to external services as native tool calls. Three MCP servers expose 51+ tools for knowledge, project management, and cloud services.
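Under the hood, MCP is JSON-RPC 2.0: invoking a tool is a `tools/call` request naming the tool and its arguments. A sketch of what one such request looks like on the wire (the `knowledge_recall` tool name comes from the Brain example above; the transport framing is omitted):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP `tools/call` request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. searching the Brain from any MCP-capable client:
req = mcp_tool_call(1, "knowledge_recall",
                    {"query": "notification patterns"})
```

Because every tool is exposed this way, Claude sees the 51+ tools across the three servers as ordinary native tool calls.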
Every agent is a markdown file with YAML frontmatter. Here are two real examples from our production repos (anonymized).
---
model: sonnet
description: |
  Implements new API endpoints following established patterns.
---

# API Feature Developer

You are a senior backend developer specializing in this FastAPI service.

## Architecture

src/
├── routes/    # HTTP endpoints
├── services/  # Business logic
├── models/    # Pydantic schemas
└── core/      # Auth, DB, errors

## Conventions

- All endpoints use Pydantic models
- Auth via Depends(get_current_user)
- Services injected via FastAPI DI
- Errors use typed exception classes

## Quality Checklist

- [ ] Pydantic models for req/res
- [ ] Auth middleware on protected routes
- [ ] Input validation on all user data
- [ ] No hardcoded secrets
- [ ] Error responses don't leak internals
- [ ] Tests cover happy + error paths
- [ ] All paths relative to repo root
---
model: sonnet
description: |
  Reviews code for security, quality, and pattern compliance.
---

# API Code Reviewer

Primary focus: security, then correctness, then style.

## Security Scan (BLOCKING)

SQL Injection → string concat in queries
Auth Bypass   → missing auth dependency
IDOR          → no ownership check
Secrets       → keys/tokens in code
Error Leak    → stack traces exposed

## Output Format

### Security Issues (BLOCKING)
- [CRITICAL] file.py:42 SQL injection in user query
  Fix: use parameterized query

### Quality Issues (BLOCKING)
- [HIGH] file.py:88 Unhandled exception in API call
  Fix: add try/except with logging

### Pattern Issues (WARNING)
- [LOW] file.py:15 Function naming doesn't match repo style

### Passed Checks
- Auth enforcement verified
- Input validation present
- No hardcoded secrets found
Every change passes automated security review. Seven reference documents cover OWASP Top 10 across the full stack, including agentic AI security.
Every PR checked for OWASP Top 10 — injection, XSS, auth bypass, SSRF
API keys, passwords, tokens never in source code or config files
User input never concatenated into SQL — prepared statements and ORMs
Every endpoint validates with Pydantic, Zod, Joi, or DRF serializers
The AI never executes commands embedded in external content and never skips human confirmations
Python/FastAPI, Next.js, Django, Node.js, AWS, agentic AI guidelines