Blindly Trusting Retrieved Documents
Traditional RAG has a fundamental flaw: it trusts whatever documents the retriever returns, no matter their quality. If retrieval surfaces wrong or irrelevant documents, the generator answers from them anyway, and a wrong answer is all but guaranteed. There is no mechanism to catch or correct this failure.
Evaluate, Then Decide What to Do
CRAG adds a Retrieval Evaluator that judges whether retrieved documents are actually relevant before using them. Based on that judgment, it takes one of three actions: Correct (keep the documents and refine them), Incorrect (discard them and fall back to web search), or Ambiguous (combine refined documents with web results).
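The evaluator's decision logic can be sketched as below. In CRAG the grader is a fine-tuned model or an LLM judge; the keyword-overlap score here is only a runnable stand-in, and the threshold values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    action: str   # "correct", "incorrect", or "ambiguous"
    score: float

def evaluate_retrieval(question: str, document: str,
                       upper: float = 0.5, lower: float = 0.2) -> Evaluation:
    """Map a relevance score to one of CRAG's three actions.

    Toy scorer: fraction of (non-trivial) question terms that appear in
    the document. A real evaluator would call a model here.
    """
    terms = {t.strip("?.,!") for t in question.lower().split()}
    terms = {t for t in terms if len(t) > 3}
    if not terms:
        return Evaluation("ambiguous", 0.0)
    hits = sum(1 for t in terms if t in document.lower())
    score = hits / len(terms)
    if score >= upper:
        return Evaluation("correct", score)      # use (refined) documents
    if score <= lower:
        return Evaluation("incorrect", score)    # discard, go to web search
    return Evaluation("ambiguous", score)        # use both sources
```

Only the thresholded decision matters downstream: the graph branches on `action`, not on the raw score.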
Cleaning What Is Retrieved
Even when documents pass the relevance check, they may contain mixed content. Knowledge Refinement cleans this up in three steps: split each document into small knowledge strips, grade each strip for relevance to the question, and recompose only the relevant strips into the final context.
Fallback to Live Web Search
When retrieved documents are judged insufficient, CRAG falls back to live web search using a service such as Tavily. Before searching, the user question is rewritten into a keyword-style query optimized for search engines, which yields better results than issuing the raw question verbatim.
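The rewrite-then-search step might look like the sketch below. The stopword-stripping rewriter is only a runnable placeholder for the LLM rewrite, and `search_fn` is an assumed injection point for whatever client you use (for example, a Tavily search call).

```python
from typing import Callable, Optional

def rewrite_for_search(question: str,
                       rewrite_llm: Optional[Callable[[str], str]] = None) -> str:
    """Turn a conversational question into a keyword-style web query.

    CRAG uses an LLM for this; the stopword filter below is a stand-in
    used only when no model is supplied.
    """
    if rewrite_llm is not None:
        return rewrite_llm(question)
    stopwords = {"what", "who", "when", "where", "why", "how",
                 "is", "are", "was", "were", "did", "does", "do",
                 "the", "a", "an", "of", "in", "on", "to", "please"}
    words = [w.strip("?.,!") for w in question.lower().split()]
    return " ".join(w for w in words if w and w not in stopwords)

def web_search_fallback(question: str,
                        search_fn: Callable[[str], list[str]]) -> list[str]:
    """Rewrite the question, then delegate to the injected search client."""
    query = rewrite_for_search(question)
    return search_fn(query)
```

Keeping the search client behind a callable makes the fallback trivially testable and lets you swap providers without touching the graph.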
The Graph Architecture
Corrective RAG transforms traditional RAG from a trust-everything pipeline into a self-correcting one. Wired together as a graph, the retrieval evaluator judges relevance, knowledge refinement cleans the context, and web search provides a reliable fallback. The result is a system that never blindly trusts bad retrieval and is far less likely to produce a hallucinated answer.
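The overall control flow can be expressed as a small state machine: each node reads and updates a shared state and names the next node, with a conditional edge after grading. The node names and plain-dict state are illustrative; a real build would typically use a graph framework such as LangGraph, and the grader and generator here are runnable stubs.

```python
def retrieve(state):
    state["documents"] = state["retriever"](state["question"])
    return "grade"

def grade(state):
    # Stub grader: keep documents sharing any term with the question.
    q_terms = set(state["question"].lower().split())
    state["documents"] = [d for d in state["documents"]
                          if q_terms & set(d.lower().split())]
    # Conditional edge: fall back to web search if nothing survived.
    return "generate" if state["documents"] else "web_search"

def web_search(state):
    state["documents"] = state["search"](state["question"])
    return "generate"

def generate(state):
    state["answer"] = " ".join(state["documents"])  # stand-in for the LLM
    return None  # terminal node

NODES = {"retrieve": retrieve, "grade": grade,
         "web_search": web_search, "generate": generate}

def run_graph(state, entry="retrieve"):
    node = entry
    while node is not None:
        node = NODES[node](state)
    return state["answer"]
```

The key property is visible in the wiring: generation is only ever reached through `grade`, so bad retrieval is either filtered or replaced before an answer is produced.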