An evidence-first failure catalog for GraphRAG and graph-AI claims that sound strong but collapse under inspection.
Common failure modes in graph extraction and GraphRAG systems. Each category is meant to support evidence-backed critique, not vibes.
One highlighted critique entry, rotated by calendar day (UTC).
Curated entries with source-backed claims, categories, and takeaways.
What better graph systems usually do instead, and how this catalog keeps critique disciplined.
Parse documents into logical units (sections, tables, lists) before asking an LLM to assert relations. Text locality matters — fake locality produces fake edges.
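The structure-first parse described above can be sketched in a few lines. This is a minimal, hypothetical illustration: it splits a markdown-style document on headings so that an extractor only ever sees text that is genuinely co-located, rather than arbitrary fixed-size windows.

```python
import re

def split_into_sections(doc: str) -> list[dict]:
    """Split a markdown-ish document into (heading, body) units so an
    extractor only asserts relations over text that actually co-occurs."""
    sections: list[dict] = []
    current = {"heading": None, "body": []}
    for line in doc.splitlines():
        if re.match(r"^#{1,6}\s", line):
            # a new heading closes the previous logical unit
            if current["body"] or current["heading"]:
                sections.append(current)
            current = {"heading": line.lstrip("# ").strip(), "body": []}
        elif line.strip():
            current["body"].append(line.strip())
    if current["body"] or current["heading"]:
        sections.append(current)
    return sections
```

Real pipelines would also keep tables and lists as atomic units, but the principle is the same: the unit boundary, not the token budget, decides what the LLM may relate.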
Canonical IDs, blocking, pairwise scoring, and human review for long tails. Lowercasing strings is not a merge strategy.
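A toy sketch of that blocking-plus-scoring pipeline, under assumed thresholds (0.85 auto-merge, 0.70 review band) chosen for illustration only. Note the contrast with lowercasing: pairs are scored, and the uncertain middle goes to human review instead of being silently merged.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def blocking_key(name: str) -> str:
    """Crude blocking: first alphanumeric token, lowercased.
    Only names sharing a block are ever compared pairwise."""
    tokens = [t for t in name.lower().split() if t.isalnum()]
    return tokens[0] if tokens else ""

def candidate_pairs(names: list[str]) -> list[tuple[str, str]]:
    blocks = defaultdict(list)
    for n in names:
        blocks[blocking_key(n)].append(n)
    pairs = []
    for group in blocks.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                pairs.append((group[i], group[j]))
    return pairs

def propose_merges(names, auto_threshold=0.85, review_band=0.70):
    """Score candidate pairs; auto-merge the confident ones,
    route the ambiguous middle to human review, drop the rest."""
    auto, review = [], []
    for a, b in candidate_pairs(names):
        s = SequenceMatcher(None, a.lower(), b.lower()).ratio()
        if s >= auto_threshold:
            auto.append((a, b, s))
        elif s >= review_band:
            review.append((a, b, s))
    return auto, review
```

In production the scorer would be a trained model over richer features (aliases, types, co-occurrence), but the shape — block, score, triage — stays.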
Version your ontology, constrain predicates, and test extraction against gold sets. If the model drifts types every deploy, you don't have a graph.
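The predicate constraint above can be enforced mechanically. This is a hypothetical two-predicate schema for illustration; the point is that a versioned, closed predicate set lets you reject drifting extractions at write time instead of discovering them in the graph later.

```python
ONTOLOGY_VERSION = "2024-06-01"  # hypothetical version tag

# Closed predicate set: predicate -> (required subject type, required object type)
SCHEMA = {
    "works_for": ("Person", "Organization"),
    "located_in": ("Organization", "Place"),
}

def validate_triple(subj_type: str, predicate: str, obj_type: str):
    """Reject triples whose predicate is unknown or whose argument
    types violate the schema for this ontology version."""
    if predicate not in SCHEMA:
        return False, f"unknown predicate {predicate!r} in ontology {ONTOLOGY_VERSION}"
    expected = SCHEMA[predicate]
    if (subj_type, obj_type) != expected:
        return False, f"{predicate!r} expects {expected}, got {(subj_type, obj_type)}"
    return True, "ok"
```

The same schema doubles as a gold-set harness: run extraction on a fixed corpus per deploy and fail the build when type assignments or predicates shift.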
Edge-level precision/recall, constraint violations, and retrieval quality beat "the graph looks big." Big graphs full of slop are just high-entropy compost.
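Edge-level scoring is just set arithmetic over (subject, predicate, object) triples against a gold set, a sketch:

```python
def edge_prf(predicted: set, gold: set) -> tuple[float, float, float]:
    """Precision, recall, and F1 over edges, treated as exact-match
    (subject, predicate, object) triples against a gold annotation."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

Exact match is a deliberately strict baseline; relaxations (type-only match, fuzzy entity match) should be reported separately, not silently substituted.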
Store source spans, document IDs, and extraction version. When someone asks why an edge exists, the answer should not be vibes.
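A minimal shape for provenance-carrying edges, assuming character-offset spans (field names are illustrative, not a standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A graph edge that can answer 'why does this exist?'"""
    subject: str
    predicate: str
    obj: str
    doc_id: str                 # source document identifier
    span: tuple[int, int]       # (start_char, end_char) of the supporting text
    extractor_version: str      # which pipeline version asserted this edge
```

With this in place, auditing an edge is a lookup: fetch `doc_id`, slice `span`, and check whether the text actually supports the claim under that extractor version.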
Rules and dictionaries for stable relations; LLMs for fuzzy bits — with calibration and abstention when confidence is low.
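The rules-first, LLM-with-abstention split above can be sketched as a dispatcher. The rule, relation name, and confidence threshold here are hypothetical; `llm_scorer` stands in for a calibrated model call returning a guess and a confidence.

```python
import re

def rule_extract(text: str):
    """Deterministic rule for a stable relation, e.g. 'X is headquartered in Y'."""
    m = re.search(r"(\w[\w ]*?) is headquartered in (\w[\w ]*)", text)
    if m:
        return (m.group(1).strip(), "headquartered_in", m.group(2).strip())
    return None

def hybrid_extract(text: str, llm_scorer, min_confidence: float = 0.8):
    """Try rules first; fall back to a calibrated LLM; abstain when
    the LLM's confidence is below threshold rather than guessing."""
    triple = rule_extract(text)
    if triple:
        return triple, "rule"
    guess, confidence = llm_scorer(text)
    if confidence >= min_confidence:
        return guess, "llm"
    return None, "abstain"
```

The abstain branch is the part most pipelines skip: a fuzzy extractor that never declines is just a slop generator with extra steps.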