The Lab's showpiece. Describe an AI use case in plain English, and a single structured-output LLM call returns a complete reference architecture: model approach, data layer, integration, infrastructure, governance posture (with EU AI Act risk tier), KPIs, and a generated Mermaid diagram. AI designing AI — the same reasoning I apply on real engagements, distilled into a versioned prompt.
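The returned spec could be typed roughly as follows. This is a hypothetical TypeScript sketch: the field names, the `RiskTier` union, and the example values are illustrative assumptions, not the actual arch.v1 schema.

```typescript
// EU AI Act risk tiers (the four tiers defined in the Act)
type RiskTier = "minimal" | "limited" | "high" | "unacceptable";

// Assumed shape of the structured output: one field per section card, plus the diagram
interface ArchSpec {
  modelApproach: string;
  dataLayer: string;
  integration: string;
  infrastructure: string;
  governance: { euAiActRiskTier: RiskTier; controls: string[] };
  kpis: { name: string; target: string }[];
  mermaid: string; // Mermaid source, rendered client-side
}

// Illustrative instance (made-up values, not real model output):
const example: ArchSpec = {
  modelApproach: "RAG over product docs with a hosted LLM",
  dataLayer: "Vector store, nightly sync from the docs repo",
  integration: "REST endpoint called from the support console",
  infrastructure: "Serverless functions, single region",
  governance: { euAiActRiskTier: "limited", controls: ["human review of drafts"] },
  kpis: [{ name: "deflection rate", target: ">= 30%" }],
  mermaid: "graph TD; user-->api; api-->llm;",
};
```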
Browser
└─→ POST /api/lab/chat
      - system: structured-output prompt with the full schema below,
        versioned (arch.v1) and exposed on the page
      - user: your use-case description
      - temperature: 0.4 (low, for stable structure)
      - max_tokens: 2048 (an architecture spec runs ~800-1500 tokens)
   ← single non-streamed response, parsed as JSON
   ← rendered as six section cards + a Mermaid diagram
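The flow above can be sketched as a single fetch plus a structural check on the parsed JSON. A minimal sketch, assuming the request body shape and section keys shown here (the real endpoint is `/api/lab/chat`; everything else is a hypothetical illustration, with the system prompt, temperature, and token limit attached server-side):

```typescript
// Section keys the structured output is expected to contain (assumed names)
const SECTIONS = [
  "modelApproach", "dataLayer", "integration",
  "infrastructure", "governance", "kpis",
] as const;

// Parse the non-streamed response and fail loudly if a section is missing
function assertArchSpec(raw: string): Record<string, unknown> {
  const spec = JSON.parse(raw) as Record<string, unknown>;
  for (const key of [...SECTIONS, "mermaid"]) {
    if (!(key in spec)) throw new Error(`structured output missing section: ${key}`);
  }
  return spec;
}

// One call, no orchestration: the server adds the arch.v1 system prompt,
// temperature 0.4, and max_tokens 2048 before hitting the model.
async function generateArchitecture(useCase: string) {
  const res = await fetch("/api/lab/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: useCase }), // assumed body shape
  });
  return assertArchSpec(await res.text());
}
```

Validating the six sections client-side keeps a malformed model response from rendering half-empty cards.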
One LLM call. Mermaid.js renders the generated diagram client-side from a CDN script (lazy-loaded on first generate). No backend orchestration, no agent loop — this is the simplest architecture in the lab and the most opinionated, because the system prompt encodes every architectural decision rule it applies.
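The lazy CDN load can be done with a memoized dynamic import, so repeated generates reuse the first fetch. A sketch under assumptions: the CDN URL, the fixed diagram id, and the `once` helper are illustrative; `mermaid.initialize`/`mermaid.render` are the Mermaid v10 API.

```typescript
// Memoize an async loader: the underlying promise is created at most once
function once<T>(load: () => Promise<T>): () => Promise<T> {
  let p: Promise<T> | null = null;
  return () => (p ??= load());
}

// Hypothetical CDN build; fetched only on the first generate
const MERMAID_CDN = "https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs";
const loadMermaid = once(() => import(MERMAID_CDN));

// Render the generated Mermaid source into a container element
async function renderDiagram(source: string, target: { innerHTML: string }) {
  const mermaid = (await loadMermaid()).default;
  mermaid.initialize({ startOnLoad: false });
  const { svg } = await mermaid.render("lab-diagram", source);
  target.innerHTML = svg;
}
```

Memoizing the import (rather than checking for a global) also collapses concurrent first calls into one network request.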
Honest caveat: the same model both designs and grades. Production-grade architecture work uses a human architect, a knowledge base of past engagements, and several review passes. This demo is one architect's opinions distilled into a prompt — useful as a starting point, not a substitute for that review.