CRITICAL FIX for beginners: User config model changes now work correctly
Issues Fixed:
- rag-mini.py synthesis mode ignored config completely (used hardcoded models)
- LLMSynthesizer fallback ignored config preferences
- Users changing model in config saw no effect in synthesis mode
Changes:
- rag-mini.py now loads config and passes synthesis_model to LLMSynthesizer
- LLMSynthesizer _select_best_model() respects config model_rankings for fallback
- All modes (synthesis and explore) now properly use config settings
Tested: Model config changes now work correctly in both synthesis and explore modes
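In outline, the fix wires the loaded config through to the synthesizer. A minimal sketch, assuming these import paths and an LLMSynthesizer constructor that accepts these arguments; only synthesis_model and model_rankings are named in the changes above:

```python
# Sketch only: load_config() and the import paths are assumptions;
# synthesis_model and model_rankings are the config fields named above.
from mini_rag.config import load_config
from mini_rag.synthesizer import LLMSynthesizer

config = load_config()  # previously skipped entirely in synthesis mode
synthesizer = LLMSynthesizer(
    model=config.synthesis_model,          # was hardcoded before this fix
    model_rankings=config.model_rankings,  # consulted by _select_best_model()
)
```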
✨ Features:
- GitHub releases integration with version checking
- TUI update notifications with user-friendly interface
- CLI update commands (check-update, update)
- Discreet notifications that don't interrupt workflow
- Legacy user detection for older versions
- Safe update process with backup and rollback
- Progress bars and user confirmation
- Configurable update preferences
🔧 Technical:
- UpdateChecker class with GitHub API integration (sketched below)
- UpdateConfig for user preferences
- Graceful fallbacks when network unavailable
- Auto-restart after successful updates
- Works with both TUI and CLI interfaces
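A minimal sketch of the version check, using GitHub's public releases endpoint; the repository path, class interface, and version comparison are illustrative, not the actual implementation:

```python
# Minimal sketch of UpdateChecker, assuming GitHub's public releases API.
# The repo path is a placeholder; the real class also handles preferences,
# backup, and rollback, which are omitted here.
import json
import urllib.request
import urllib.error

class UpdateChecker:
    API_URL = "https://api.github.com/repos/OWNER/REPO/releases/latest"  # placeholder

    def __init__(self, current_version: str):
        self.current_version = current_version

    def latest_version(self) -> str | None:
        try:
            with urllib.request.urlopen(self.API_URL, timeout=5) as resp:
                return json.load(resp).get("tag_name")
        except (urllib.error.URLError, TimeoutError):
            return None  # graceful fallback when the network is unavailable

    def update_available(self) -> bool:
        latest = self.latest_version()
        return latest is not None and latest.lstrip("v") != self.current_version
```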
🎯 User Experience:
- TUI: Shows update banner on startup if available
- CLI: Discreet one-line notice for regular commands
- Commands: 'rag-mini check-update' and 'rag-mini update'
- Non-intrusive design respects user workflow
This provides seamless updates for the critical improvements
we've been implementing while giving users full control.
This comprehensive update enhances user experience with several key improvements:
## Enhanced Streaming & Thinking Display
- Implement real-time streaming with gray thinking tokens that collapse after completion
- Fix thinking token redisplay bug with proper content filtering
- Add clear "AI Response:" headers to separate thinking from responses
- Enable streaming by default for better user engagement
- Keep thinking visible for exploration, collapse only for suggested questions
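A simplified sketch of the display logic described above, assuming Qwen3-style `<think>`/`</think>` markers that arrive as whole tokens; the real collapse behavior rewrites the terminal, which is omitted here:

```python
# Illustrative sketch: thinking tokens stream in gray, the answer follows a
# clear "AI Response:" header, and thinking is filtered out of the returned
# text (fixing the redisplay bug). Marker handling is simplified.
GRAY, RESET = "\033[90m", "\033[0m"

def render_stream(tokens):
    in_thinking = False
    answer = []
    for tok in tokens:
        if tok == "<think>":
            in_thinking = True
            print(GRAY, end="")
        elif tok == "</think>":
            in_thinking = False
            print(RESET + "\nAI Response:")  # header separating thinking from answer
        elif in_thinking:
            print(tok, end="", flush=True)   # thinking shown live in gray
        else:
            answer.append(tok)
            print(tok, end="", flush=True)
    return "".join(answer)  # thinking excluded, so it is never redisplayed
```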
## Natural Conversation Responses
- Convert clunky JSON exploration responses to natural, conversational format
- Improve exploration prompts for friendly, colleague-style interactions
- Update summary generation with better context handling
- Eliminate double response display issues
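As a hypothetical before/after, the prompt shift looks roughly like this; the exact wording in the codebase may differ:

```python
# Hypothetical prompt change; both strings are illustrative.
OLD_PROMPT = 'Answer strictly in JSON: {"answer": "...", "sources": ["..."]}'
NEW_PROMPT = (
    "You're a friendly colleague helping someone explore these documents. "
    "Answer in natural, conversational prose and mention files by name."
)
```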
## Model Reference Updates
- Remove all llama3.2 references in favor of qwen3 models
- Fix non-existent qwen3:3b references, replacing them with valid model names
- Update model rankings to prioritize working qwen models across all components
- Ensure consistent model recommendations in docs and examples
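One plausible shape for the unified ranking, as the kind of Python default the components could share; only the qwen3 tags are named in this log, and the ordering comments draw on notes elsewhere in it:

```python
# Sketch of a unified ranking; order and comments are assumptions.
MODEL_RANKINGS = [
    "qwen3:4b",    # strongest of the three
    "qwen3:1.7b",  # lightweight, still very good
    "qwen3:0.6b",  # surprisingly capable with RAG context
]
# "qwen3:3b" is not a published tag and was removed everywhere.
```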
## Cross-Platform Icon Integration
- Add desktop icon setup to Linux installer with .desktop entry
- Add Windows shortcuts for desktop and Start Menu integration
- Improve installer user experience with visual branding
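A rough sketch of the Linux installer step, assuming standard XDG paths; the entry fields, names, and icon location are placeholders:

```python
# Sketch of writing a .desktop entry; all names here are illustrative.
from pathlib import Path

DESKTOP_ENTRY = """[Desktop Entry]
Type=Application
Name=Mini RAG
Exec=rag-mini
Icon={icon_path}
Terminal=true
Categories=Utility;
"""

def install_desktop_entry(icon_path: str) -> None:
    target = Path.home() / ".local/share/applications/rag-mini.desktop"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(DESKTOP_ENTRY.format(icon_path=icon_path))
```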
## Configuration & Navigation Fixes
- Fix "0" option in configuration menu to properly go back
- Improve configuration menu user-friendliness
- Update troubleshooting guides with correct model suggestions
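A simplified sketch of the menu fix: "0" now returns from the loop instead of falling through to the invalid-choice branch. The menu structure is illustrative:

```python
# Simplified menu loop; dispatch is stubbed out with a print.
def config_menu(options):
    while True:
        for i, name in enumerate(options, 1):
            print(f"{i}. {name}")
        print("0. Back")
        choice = input("> ").strip()
        if choice == "0":  # previously unhandled, so "0" appeared to do nothing
            return
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            print(f"-> {options[int(choice) - 1]}")  # dispatch to the setting
        else:
            print("Invalid choice, try again.")
```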
These changes significantly improve the beginner experience while maintaining
technical accuracy and system reliability.
Major fixes:
- Fix model selection to prioritize qwen3:1.7b instead of qwen3:4b for testing
- Correct context length from 80,000 to 32,000 tokens (proper Qwen3 limit)
- Implement content-preserving safeguards instead of dropping responses
- Fix all test imports from claude_rag to mini_rag module naming
- Add virtual environment warnings to all test entry points
- Fix TUI crash on EOF input with proper error handling (see the sketch after this list)
- Remove warmup delays that were causing startup lag and unwanted model calls
- Fix command mappings between bash wrapper and Python script
- Update documentation to reflect qwen3:1.7b as primary recommendation
- Improve TUI box alignment and formatting
- Make language generic for any documents, not just codebases
- Show actual folder names in user feedback instead of generic terms
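The EOF fix referenced above, in outline: input() raises EOFError when stdin closes (Ctrl-D or the end of piped input), which previously crashed the TUI. A minimal sketch:

```python
# Minimal EOF-safe prompt; the function name and calling convention are
# illustrative.
def prompt_user(message: str) -> str | None:
    try:
        return input(message)
    except (EOFError, KeyboardInterrupt):
        print("\nExiting.")
        return None  # callers treat None as a clean quit
```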
Technical improvements:
- Unified model rankings across all components
- Better error handling for missing dependencies
- Comprehensive testing and validation of all fixes
- All tests now pass and system is deployment-ready
All major crashes and deployment issues resolved.
- Changed primary model recommendation from qwen3:1.7b to qwen3:4b
- Added Q8 quantization info in technical docs for production users
- Fixed method name error: get_embedding_info() → get_status()
- Updated all error messages and test files with new recommendations
- Maintained beginner-friendly options (1.7b still very good, 0.6b surprisingly good)
- Added explanation of why small models work well with RAG context
- Comprehensive testing completed - system ready for clean release
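An illustrative call-site fix for the renamed method; the embedder class name and import path are assumptions:

```python
# Hypothetical call site; only the method rename comes from this log.
from mini_rag.embeddings import CodeEmbedder  # assumed import path

embedder = CodeEmbedder()
status = embedder.get_status()  # was incorrectly embedder.get_embedding_info()
print(status)
```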
Complete rebrand to eliminate any Claude/Anthropic references:
Directory Changes:
- claude_rag/ → mini_rag/ (preserving git history)
Content Changes:
- Replaced 930+ Claude references across 40+ files
- Updated all imports: from claude_rag → from mini_rag (migration sketch below)
- Updated all file paths: .claude-rag → .mini-rag
- Updated documentation and comments
- Updated configuration files and examples
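A rough sketch of the mechanical text rename described above; the directory move itself used git mv to preserve history, which this sketch does not attempt, and docs/configs were handled similarly:

```python
# Text-level replacements only; run from the repo root.
from pathlib import Path

REPLACEMENTS = {
    "from claude_rag": "from mini_rag",
    "import claude_rag": "import mini_rag",
    ".claude-rag": ".mini-rag",
}

for path in Path(".").rglob("*.py"):
    text = path.read_text()
    for old, new in REPLACEMENTS.items():
        text = text.replace(old, new)
    path.write_text(text)
```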
Testing Changes:
- All tests updated to use mini_rag imports
- Integration tests verify new module structure
This ensures complete independence from Claude/Anthropic
branding while maintaining all functionality and git history.
- Add user confirmation before stopping models for optimal mode switching
- Clean separation: synthesis mode never uses thinking, exploration always does
- Add intelligent restart detection based on response quality heuristics
- Include helpful guidance messages suggesting exploration mode for deep analysis
- Default synthesis mode to no-thinking for consistent fast responses
- Handle graceful fallbacks when model stop fails or user declines
- Provide clear explanations for why model restart improves thinking quality
- Create separate explore mode with thinking enabled for debugging/learning
- Add lazy loading with LLM warmup using 'testing, just say "hi" <no_think>' (see the sketch after this list)
- Implement context-aware conversation memory across questions
- Add interactive CLI with help, summary, and session management
- Enable Qwen3 thinking mode toggle for experimentation
- Support multi-turn conversations for better debugging workflow
- Clean separation between fast synthesis and deep exploration modes
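A sketch of the lazy warmup call, assuming a local Ollama server and its standard /api/generate endpoint; the prompt string is taken verbatim from the change above, and the default model tag follows the recommendations elsewhere in this log:

```python
# Warmup sketch against a local Ollama instance; function name is illustrative.
import json
import urllib.request

def warm_up(model: str = "qwen3:1.7b") -> None:
    """Lazy warmup: a tiny request loads the model without thinking output."""
    payload = json.dumps({
        "model": model,
        "prompt": 'testing, just say "hi" <no_think>',  # verbatim warmup prompt
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30).read()  # reply discarded; model is hot
```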
🔧 Integration Updates
- Added --synthesize flag to main rag-mini CLI
- Updated README with synthesis examples and 10 result default
- Enhanced demo script with 8 complete results (previously cut off at 5)
- Updated rag-tui default from 5 to 10 results
- Updated rag-mini-enhanced script defaults
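A sketch of the CLI wiring, assuming argparse; the flag name and the 10-result default come from the changes above, everything else is a placeholder:

```python
# Illustrative argument parsing for the rag-mini entry point.
import argparse

parser = argparse.ArgumentParser(prog="rag-mini")
parser.add_argument("query", help="search query")
parser.add_argument("-n", "--num-results", type=int, default=10)  # was 5
parser.add_argument("--synthesize", action="store_true",
                    help="run AI analysis over the search results")
args = parser.parse_args()
# args.synthesize gates the optional LLMSynthesizer pass over the results.
```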
📈 User Experience Improvements
- All components now consistently default to 10 results
- Demo shows complete 8-result workflow with multi-line previews
- Documentation reflects new AI analysis capabilities
- Seamless integration preserves existing workflows
Users get more comprehensive results by default and can optionally
add intelligent AI analysis with a simple --synthesize flag!