Sprint Simulator

A Jira-shaped Agile simulator. Pick a preset story or write your own, then walk it through the four AI-assisted stages of the SDLC: refinement (acceptance criteria, points, edge cases), development (technical approach, tasks, pseudo-code, API endpoints), testing (unit / integration / e2e cases with realistic pass/fail simulation), and deployment (an animated CI/CD log). Close the sprint to get an AI-generated retrospective with velocity, themes, and prioritized action items. The whole Agile-with-AI stack in one demo.

Architecture — the four AI calls behind the simulator
Per-story stages → independent LLM calls

[1] story-refine.v1   (temp 0.2, max_tokens 1500)
    raw user prose → {title, user_story, description,
    acceptance_criteria[], story_points, complexity,
    edge_cases[], dependencies[], NFRs[], estimated_hours}

[2] story-develop.v1  (temp 0.3, max_tokens 2000)
    refined story → {approach, tech_stack, tasks[],
    components[], pseudocode, data_model_changes[],
    api_endpoints[], risks[]}

[3] story-test.v1     (temp 0.4, max_tokens 2200)
    refined story + dev plan → {test_strategy, tests[]
    with pass/fail simulation, summary{coverage_pct,
    blocking_issues}, ready_to_deploy, remediation[]}

[4] sprint-retro.v1   (temp 0.4, max_tokens 1800)
    completed sprint → {sprint_summary, velocity,
    planned_vs_delivered_pct, themes[], what_went_well[],
    what_to_improve[], action_items[], next_sprint_rec}
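
A minimal TypeScript sketch of what one of these structured-output contracts could look like. The names below (`RefinedStory`, `isRefinedStory`) are illustrative, not the demo's actual code; the field names come from the `story-refine.v1` shape above:

```typescript
// Shape of the story-refine.v1 response (fields from the diagram above).
interface RefinedStory {
  title: string;
  user_story: string;
  description: string;
  acceptance_criteria: string[];
  story_points: number;
  complexity: "low" | "medium" | "high";
  edge_cases: string[];
  dependencies: string[];
  nfrs: string[];
  estimated_hours: number;
}

// Runtime guard: even with structured outputs, a response should be
// validated before use. This spot-checks a few required fields.
function isRefinedStory(value: unknown): value is RefinedStory {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.title === "string" &&
    typeof v.story_points === "number" &&
    Array.isArray(v.acceptance_criteria)
  );
}
```

Each of the four calls would get its own interface and guard along these lines, which is what makes the per-stage schemas independently versionable (`story-refine.v1`, `story-develop.v1`, and so on).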

Each stage has its own structured-output schema. The "deployment" stage is an animated log, not a real CI pipeline, and the test results are AI-simulated: the system prompt explicitly asks for a realistic 10–20% failure rate so the demo doesn't always green-light. Failed stories can be re-developed and re-tested, mirroring real Agile rework loops.
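That rework loop can be sketched as a bounded retry over the develop → test stages. This is an illustrative model, not the demo's implementation; `runTest` is injected so the simulated pass/fail outcome can come from the LLM (or, here, from any function):

```typescript
// Shape of the relevant story-test.v1 output fields.
interface TestRun {
  ready_to_deploy: boolean;
  remediation: string[];
}

// Re-develop and re-test a story until it passes or attempts run out,
// mirroring the Agile rework loop described above.
function reworkLoop(
  runTest: (attempt: number) => TestRun,
  maxAttempts = 3,
): { deployed: boolean; attempts: number } {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = runTest(attempt);
    if (result.ready_to_deploy) return { deployed: true, attempts: attempt };
    // In the demo, remediation[] would feed back into the next
    // story-develop.v1 call as added context.
  }
  return { deployed: false, attempts: maxAttempts };
}
```

Bounding the attempts matters in a real integration too: each retry is a paid LLM call, so an unbounded loop on a story the model keeps "failing" would burn tokens indefinitely.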

Why this matters for engineering leaders

Every Agile team I've worked with spends most of the sprint on the same mechanical work: refining vague stories, breaking them into tasks, writing test cases, drafting deploy runbooks, summarizing the sprint at close. Each of those is now a tractable AI assist — not "AI replaces the engineer" but "AI fills in the structured artifacts so the engineer focuses on the decisions that actually require judgment."

This demo collapses the whole loop into one screen so an engineering leader can see, in five minutes, where AI fits into the current SDLC: in front of every stage, never replacing the human review between stages. The stories you walk through here are the same kinds of stories your team is shipping this sprint. Sub in your own, and the simulator generates the SAME structured artifacts — refined story, tech plan, test cases, retro — that any process tool would expect.

Honest caveat: this is a SIMULATION. No code is built. No tests actually execute. The pass/fail outcomes are the model's best guess at what would realistically happen. The point is the artifact shape, the integration pattern, and the human-in-the-loop flow — not that AI literally writes and ships your code.
