The structure people agree to in real time but rarely write down: that is what a meeting produces, and what this demo captures. Paste a meeting transcript (sales discovery, project status, vendor evaluation) and one structured-output call extracts attendees, decisions, action items with owner and priority, open questions, risks, and the topics actually discussed. Sales reps use this to populate the CRM after every call. Project managers use it to populate the status report. Same structure, two business outcomes.
Browser
└─→ POST /api/lab/chat
- system: extraction prompt with the schema below
(versioned: meeting.v1)
- user: the transcript, verbatim
- temperature: 0.2 (low — extraction should be stable)
- max_tokens: 2400
← single response, parsed as JSON: {
meeting_type, summary, sentiment, duration_minutes,
attendees[], decisions[], action_items[],
open_questions[], risks[], topics[], deal_signal
}
← rendered as a two-column structured grid with
owner badges, priority chips, and topic time-bars
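The meeting.v1 response shape above can be pinned down as typed records on the client side. A minimal sketch follows; the field names mirror the JSON keys in the diagram, but the concrete types (priority as a string, per-decision context, per-risk severity) are illustrative assumptions, not the demo's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed concrete types for the meeting.v1 fields.
# Names mirror the JSON keys above; types are illustrative.

@dataclass
class Decision:
    decision: str
    context: str            # "decisions get context"

@dataclass
class ActionItem:
    description: str
    owner: str              # named owner from the transcript
    priority: str           # e.g. "high" / "medium" / "low"

@dataclass
class Risk:
    description: str
    severity: str           # "risks get severity"

@dataclass
class MeetingExtraction:
    meeting_type: str
    summary: str
    sentiment: str
    duration_minutes: int
    attendees: list[str]
    decisions: list[Decision]
    action_items: list[ActionItem]
    open_questions: list[str]
    risks: list[Risk]
    topics: list[str]
    deal_signal: Optional[str] = None  # only for sales-shaped meetings

def parse_meeting(payload: dict) -> MeetingExtraction:
    """Build a typed record from the model's JSON, leaving
    deal_signal as None when the model omitted the field."""
    return MeetingExtraction(
        meeting_type=payload["meeting_type"],
        summary=payload["summary"],
        sentiment=payload["sentiment"],
        duration_minutes=payload["duration_minutes"],
        attendees=payload["attendees"],
        decisions=[Decision(**d) for d in payload["decisions"]],
        action_items=[ActionItem(**a) for a in payload["action_items"]],
        open_questions=payload["open_questions"],
        risks=[Risk(**r) for r in payload["risks"]],
        topics=payload["topics"],
        deal_signal=payload.get("deal_signal"),
    )
```

Parsing into dataclasses rather than passing the raw dict around means a missing or misspelled key fails loudly at the boundary, before anything reaches the CRM.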
One LLM call. The model reads the transcript end-to-end and
emits the same structure a meeting recorder would write up
after the fact — but in seconds. Action items are tied to
named owners, decisions get context, risks get severity.
deal_signal only populates for sales-shaped
meetings; the model decides whether the field applies.
The most expensive thing in B2B sales is not the deal you lose. It's the deal you forget to follow up on. Reps walk out of discovery calls with a head full of context and no time to write it down; they get to the CRM update three days later, the next step ambiguous, the buyer's concerns half-remembered.
This demo collapses that. Drop a transcript in (Zoom, Gong, Otter all export plain text), get back a populated structure: who was in the room, what was decided, what's owed and to whom, what's still open, what was flagged as a risk. The CRM update writes itself; the rep just confirms it.
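"The CRM update writes itself" can be made concrete as a mapping from the extracted structure to a draft activity record that the rep confirms. A sketch, assuming hypothetical CRM field names (activity_type, next_steps, needs_review); nothing here is a real CRM API.

```python
def draft_crm_update(extraction: dict) -> dict:
    """Turn a meeting.v1 extraction into a draft CRM activity.
    CRM-side field names are hypothetical; the rep reviews the
    draft before anything is saved."""
    next_steps = [
        f"{item['description']} (owner: {item['owner']}, "
        f"priority: {item['priority']})"
        for item in extraction.get("action_items", [])
    ]
    return {
        "activity_type": extraction.get("meeting_type", "meeting"),
        "subject": extraction.get("summary", "")[:120],
        "contacts": extraction.get("attendees", []),
        "next_steps": next_steps,
        "open_questions": extraction.get("open_questions", []),
        "deal_signal": extraction.get("deal_signal"),  # absent outside sales calls
        "needs_review": True,  # draft only: the rep confirms before save
    }
```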
Same shape works for project status meetings — decisions, action items, open questions, and risks are the project status report. And for vendor evaluation calls, where you need an audit trail of what each vendor committed to.
Honest caveat: extracted action items are only as good as how clearly they were stated. If two people half-agree to maybe think about something, the model has to guess whether that's an action item or a parking-lot item. Production deployments should let the user edit before saving — and the rep stays accountable for the final CRM record.
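One way to make that edit-before-saving step concrete: gate each extracted action item on having a named owner and unhedged wording, and route everything else to a review queue instead of prefilling the CRM. The phrase list and classification below are an assumed heuristic for illustration, not part of the demo.

```python
# Assumed heuristic: hedged language suggests a parking-lot
# item that half-agreed its way into the action list.
HEDGED_PHRASES = ("maybe", "might", "think about", "consider", "possibly")

def triage_action_item(item: dict) -> str:
    """Classify an extracted action item as 'commit' (safe to
    prefill the CRM) or 'review' (ambiguous; the user edits it
    before saving). Illustrative only."""
    text = item.get("description", "").lower()
    owner = item.get("owner", "").strip()
    if not owner or owner.lower() in ("unknown", "unassigned"):
        return "review"  # no named owner: not yet a real task
    if any(phrase in text for phrase in HEDGED_PHRASES):
        return "review"  # half-agreed language stays editable
    return "commit"
```

Either way the rep sees every item; the triage only decides which ones arrive pre-checked and which arrive flagged for a closer look.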
Extract a transcript to see telemetry.