# Query Expansion Guide

## What Is Query Expansion?
Query expansion automatically adds related terms to your search to find more relevant results.
Example:
- **You search:** "authentication"
- **System expands to:** "authentication login user verification credentials security"
- **Result:** 2-3x more relevant matches!
## How It Works

```mermaid
graph LR
    A[User Query] --> B[LLM Expands]
    B --> C[Enhanced Search]
    C --> D[Better Results]
    style A fill:#e1f5fe
    style D fill:#e8f5e8
```
1. Your query goes to a small, fast LLM (like qwen3:1.7b)
2. The LLM adds related terms that people might use when writing about the topic (see the sketch below)
3. Both semantic and keyword search use the expanded query
4. You get much better results without changing anything
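For the curious, here is a minimal, illustrative sketch of that expansion step using Ollama's standard HTTP API. The function name, prompt wording, and fallback behavior are assumptions for illustration, not the project's exact implementation:

```python
# Illustrative sketch of query expansion via Ollama's /api/generate endpoint.
# Assumes Ollama is reachable on localhost:11434; not the project's actual code.
import requests

def expand_query(query: str, model: str = "qwen3:1.7b", max_terms: int = 8) -> str:
    prompt = (
        f"List up to {max_terms} search terms closely related to: {query}\n"
        "Return only the terms, separated by spaces."
    )
    try:
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False,
                  "options": {"temperature": 0.1}},
            timeout=5,
        )
        resp.raise_for_status()
        extra_terms = resp.json().get("response", "").strip()
        # Both semantic and keyword search receive the combined string.
        return f"{query} {extra_terms}" if extra_terms else query
    except requests.RequestException:
        # Graceful degradation: fall back to the original query if Ollama is unavailable.
        return query
```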
## When Is It Enabled?
- ❌ CLI commands: Disabled by default (for speed)
- ✅ TUI interface: Auto-enabled (when you have time to explore)
- ⚙️ Configurable: Can be enabled/disabled in config.yaml
## Configuration
Edit `config.yaml`:

```yaml
# Search behavior settings
search:
  expand_queries: false           # Enable automatic query expansion

# LLM expansion settings
llm:
  max_expansion_terms: 8          # How many terms to add
  expansion_model: auto           # Which model to use
  ollama_host: localhost:11434    # Ollama server
```
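As a hypothetical example, these settings could be read with PyYAML (`pip install pyyaml`); the key names mirror the snippet above, but the project's actual config loader may differ:

```python
# Hypothetical example of loading the settings above with PyYAML.
import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

expand = cfg.get("search", {}).get("expand_queries", False)
llm_cfg = cfg.get("llm", {})
print(f"Query expansion enabled: {expand}")
print(f"Model: {llm_cfg.get('expansion_model', 'auto')}, "
      f"max terms: {llm_cfg.get('max_expansion_terms', 8)}")
```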
## Performance
- Speed: ~100ms on most systems (depends on your hardware)
- Caching: Repeated queries are instant
- Model Selection: Automatically uses the fastest available model
## Examples

**Code search:**
`"error handling"` → `"error handling exception try catch fault tolerance recovery"`

**Documentation search:**
`"installation"` → `"installation setup install deploy configuration getting started"`

**Any content:**
`"budget planning"` → `"budget planning financial forecast cost analysis spending plan"`
## Troubleshooting

**Query expansion not working?**
1. Check if Ollama is running: `curl http://localhost:11434/api/tags` (a Python version of this check is shown after this list)
2. Verify you have a model installed: `ollama list`
3. Check logs with the `--verbose` flag
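If you prefer Python to curl, this small check using the `requests` library does the same thing as step 1:

```python
# Quick check that Ollama is reachable and lists the installed models.
import requests

try:
    resp = requests.get("http://localhost:11434/api/tags", timeout=3)
    resp.raise_for_status()
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is running. Installed models:", ", ".join(models) or "none")
except requests.RequestException as e:
    print("Ollama is not reachable:", e)
    print("Start it with 'ollama serve' and pull a model, e.g. 'ollama pull qwen3:1.7b'.")
```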
**Too slow?**
- Disable it in `config.yaml`: `expand_queries: false`
- Or use a faster model: `expansion_model: "qwen3:0.6b"`
**Poor expansions?**
- Try a different model: `expansion_model: "qwen3:1.7b"`
- Reduce the number of terms: `max_expansion_terms: 5`
## Technical Details

The `QueryExpander` class (a simplified sketch is shown after this list):
- Uses temperature 0.1 for consistent results
- Limits expansions to prevent very long queries
- Handles model selection automatically
- Includes smart caching to avoid repeated calls
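Here is a simplified, illustrative sketch of what such a class could look like; the method names, prompt, and model-selection logic are assumptions and may differ from the project's actual implementation:

```python
# Simplified sketch of a QueryExpander-style class (illustrative only).
import requests

class QueryExpander:
    def __init__(self, model: str = "auto", max_terms: int = 8,
                 host: str = "localhost:11434"):
        self.model = model
        self.max_terms = max_terms
        self.host = host
        self._cache: dict[str, str] = {}  # one LLM call per unique query

    def expand(self, query: str) -> str:
        if query in self._cache:
            return self._cache[query]
        prompt = (f"Give up to {self.max_terms} related search terms for: {query}. "
                  "Terms only, space-separated.")
        resp = requests.post(
            f"http://{self.host}/api/generate",
            json={"model": self._pick_model(), "prompt": prompt, "stream": False,
                  "options": {"temperature": 0.1}},  # low temperature for consistency
            timeout=10,
        )
        terms = resp.json().get("response", "").split()[: self.max_terms]  # cap length
        expanded = f"{query} {' '.join(terms)}".strip()
        self._cache[query] = expanded
        return expanded

    def _pick_model(self) -> str:
        # "auto" would normally inspect the installed models and pick a fast one;
        # this sketch simply falls back to a known small model.
        return "qwen3:1.7b" if self.model == "auto" else self.model
```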
Perfect for beginners because it "just works": enable it when you want better results, disable it when you want maximum speed.