Enterprise knowledge isn't flat text — it's relational. Pick any source from the lab corpus and one structured-output call extracts the entities (concepts, frameworks, principles, processes, roles, metrics) and the relations that bind them. A custom Canvas force simulation lays it all out. Drag the nodes, hover for details, click to highlight a neighborhood. This is the bridge from RAG's flat retrieval to graph-augmented retrieval.
Browser
 ├─ fetch /lab/assets/sources/.txt
 │    (the same corpus sources used by the other Knowledge demos)
 └─→ POST /api/lab/chat
      - system: graph-extraction prompt with the schema below (versioned: graph.v1)
      - user: the source text
      - temperature: 0.2
      - max_tokens: 2048
      ← single response, parsed as JSON: { entities[], relations[] }
      ← orphan relations dropped (refs to non-existent entity ids)

Force simulation (pure JS, no library):
 - repulsion ∝ 1/r² between every node pair
 - spring force toward target distance for each relation
 - centering force toward canvas center
 - damping 0.85 per frame
 - 60fps via requestAnimationFrame
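The orphan-dropping step after parsing can be sketched like this. The `entities`/`relations` field names and `source`/`target` id fields follow the response shape above; the full `graph.v1` schema is whatever the extraction prompt specifies, so treat this as an assumed shape, not the demo's actual code.

```javascript
// Sketch: drop relations whose endpoints don't resolve to a known entity id.
// Assumes the parsed response looks like { entities: [{id, ...}], relations: [{source, target, ...}] }.
function pruneOrphanRelations(graph) {
  const ids = new Set(graph.entities.map((e) => e.id));
  return {
    entities: graph.entities,
    // keep only relations where both endpoints exist
    relations: graph.relations.filter(
      (r) => ids.has(r.source) && ids.has(r.target)
    ),
  };
}
```

Filtering rather than failing keeps the graph usable even when the model hallucinates an id in one relation.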
One LLM call. The graph layout is a ~30-line custom force simulation — no D3, no vis-network, no other library. The simulation runs entirely in the browser; the Canvas re-renders on every frame with edges drawn first, then nodes, then labels.
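One simulation step with those four forces can be sketched as below. The constants (`REPULSION`, `TARGET_DIST`, spring and centering strengths) are illustrative placeholders, not the demo's tuned values; only the damping factor of 0.85 comes from the description above.

```javascript
// Sketch of one force-simulation tick over nodes {x, y, vx, vy}
// and relations {source, target} (indices into the nodes array).
const REPULSION = 500;   // assumed strength of the 1/r² repulsion
const TARGET_DIST = 80;  // assumed spring rest length, in pixels
const SPRING_K = 0.05;   // assumed spring stiffness
const CENTER_K = 0.01;   // assumed centering strength

function step(nodes, relations, center = { x: 0, y: 0 }, damping = 0.85) {
  // repulsion ∝ 1/r² between every node pair
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const a = nodes[i], b = nodes[j];
      const dx = b.x - a.x, dy = b.y - a.y;
      const r2 = Math.max(dx * dx + dy * dy, 0.01); // avoid divide-by-zero
      const r = Math.sqrt(r2);
      const f = REPULSION / r2;
      a.vx -= (dx / r) * f; a.vy -= (dy / r) * f;
      b.vx += (dx / r) * f; b.vy += (dy / r) * f;
    }
  }
  // spring force toward the target distance for each relation
  for (const { source, target } of relations) {
    const a = nodes[source], b = nodes[target];
    const dx = b.x - a.x, dy = b.y - a.y;
    const r = Math.sqrt(dx * dx + dy * dy) || 0.01;
    const stretch = (r - TARGET_DIST) * SPRING_K; // positive pulls together
    a.vx += (dx / r) * stretch; a.vy += (dy / r) * stretch;
    b.vx -= (dx / r) * stretch; b.vy -= (dy / r) * stretch;
  }
  // centering force, damping, then Euler integration
  for (const n of nodes) {
    n.vx += (center.x - n.x) * CENTER_K;
    n.vy += (center.y - n.y) * CENTER_K;
    n.vx *= damping; n.vy *= damping;
    n.x += n.vx; n.y += n.vy;
  }
}
```

In the browser this would run once per `requestAnimationFrame` callback, with the Canvas redraw reading the updated positions.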
RAG Explorer answers "what does the corpus say about a question?" Knowledge Graph Builder answers "what is the structure of the corpus itself?" Both are useful; production RAG systems often graduate into graph-augmented retrieval, where the graph adds traversal-based relevance ranking and constraint-aware reasoning on top of dense retrieval.
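As a toy illustration of that traversal step (not the demo's code, and with hypothetical helper names): dense retrieval picks seed entities, then one hop over the extracted relations pulls in their graph neighborhood as additional candidates.

```javascript
// Sketch: expand dense-retrieval seed entity ids by one hop over the
// extracted relations. Field names assume the {source, target} relation
// shape used elsewhere on this page.
function expandOneHop(seedIds, relations) {
  const hits = new Set(seedIds);
  for (const { source, target } of relations) {
    if (seedIds.includes(source)) hits.add(target);
    if (seedIds.includes(target)) hits.add(source);
  }
  return [...hits];
}
```

A real system would weight the expanded nodes (hop distance, relation type) rather than treating all neighbors equally.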
Honest caveat: production knowledge graphs are usually built incrementally, with named ontology classes and curated relation types, often by humans + extraction models in a feedback loop. This demo extracts a graph from a single document in one shot — useful for exploration, not for shipping.