# 📊 Advanced RAG Retrieval Techniques: Interview-Ready Comparison Table (03 Jan, 2026)
| Technique | Core Idea (1-liner) | Problem It Solves | When It Acts in RAG | Strengths | Limitations | Best Use Cases |
|---|---|---|---|---|---|---|
| HyDE (Hypothetical Document Embeddings) | Embed a hypothetical answer instead of the query | Poor / vague query embeddings | Before retrieval | Dramatically improves recall for vague queries; aligns with document style | Depends on LLM quality; extra cost | Research RAG, medical/legal QA, exploratory questions |
| Window Search | Retrieve neighboring chunks around a hit | Loss of context due to chunking | After retrieval (context expansion) | Preserves narrative flow; improves coherence | Increases token usage | PDFs, books, policies, manuals |
| Self-Query Retriever | LLM converts natural language into a semantic query + filters | Hidden user constraints (time, level, type) | Before retrieval (query planning) | Powerful structured + unstructured search; enterprise-friendly | Needs clean metadata; schema-dependent | Product search, course catalogs, enterprise docs |
| Contextual Compression Retrieval | Shrinks documents to only query-relevant parts | Token waste & noisy context | After retrieval, before generation | Saves tokens; reduces hallucination; improves precision | Risk of over-compression; added compute | Long docs, cost-sensitive RAG, strict token limits |
| RAG Fusion (Multi-Query Retrieval) | Generate multiple query variants and merge results | Narrow recall due to wording bias | During retrieval (multi-pass) | High recall; uncovers blind spots | Higher latency & cost; needs reranking | Open-domain QA, research, knowledge-heavy RAG |
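To make the first row concrete, here is a minimal, self-contained sketch of the HyDE flow: instead of embedding the raw query, we embed a hypothetical answer and retrieve the documents closest to it. The function names (`generate_hypothetical_answer`, `embed`, `hyde_retrieve`) and the bag-of-words "embedding" are illustrative assumptions, not any library's API; a real system would call an LLM for the draft answer and a dense embedding model for vectors.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" (assumption); a real system would
    # use a dense embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_hypothetical_answer(query: str) -> str:
    # Stand-in for an LLM call that drafts a plausible (possibly wrong)
    # answer; HyDE retrieves against this draft, not the query.
    return "hyde embeds a hypothetical answer document instead of the query"

def hyde_retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Core HyDE step: embed the hypothetical answer, not the query itself,
    # then rank documents by similarity to that embedding.
    hypo_vec = embed(generate_hypothetical_answer(query))
    ranked = sorted(corpus, key=lambda d: cosine(embed(d), hypo_vec), reverse=True)
    return ranked[:k]

corpus = [
    "hyde embeds a hypothetical answer rather than the raw query",
    "window search expands hits with neighboring chunks",
    "rag fusion merges results from multiple query variants",
]
print(hyde_retrieve("what does hyde do?", corpus, k=1))
```

The key design point is that the hypothetical answer is written in the same register as the documents, so its embedding lands nearer the relevant passages than a terse query would, even when the draft answer itself contains factual errors.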