Configuration Guide¶
BMLibrarian uses a comprehensive configuration system that manages agent settings, database connections, API parameters, and user preferences.
Configuration Architecture¶
Configuration Hierarchy¶
BMLibrarian follows a hierarchical configuration approach:
- Default Configuration - Built-in defaults for all settings
- User Configuration File - Primary configuration at ~/.bmlibrarian/config.json
- Legacy Configuration - Fallback configuration in the current directory
- Command Line Arguments - Runtime overrides for specific operations
- Environment Variables - System-level configuration for sensitive data
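The precedence above can be sketched in Python. The file names, keys, and merge logic here are illustrative assumptions, not BMLibrarian's actual loader:

```python
import copy
import json
import os
from pathlib import Path

# Built-in defaults (lowest precedence); keys are illustrative.
DEFAULTS = {"general": {"ollama_base_url": "http://localhost:11434"}}

def load_config(cli_overrides=None):
    """Merge configuration sources; later sources override earlier ones."""
    config = copy.deepcopy(DEFAULTS)
    # Legacy file in the current directory first, then the primary user
    # file, so the user file wins when both exist.
    for path in (Path("config.json"), Path.home() / ".bmlibrarian" / "config.json"):
        if path.exists():
            config.update(json.loads(path.read_text()))
    # Environment variables override file settings for sensitive values.
    url = os.environ.get("OLLAMA_BASE_URL")
    if url:
        config["general"]["ollama_base_url"] = url
    # Command-line arguments are the final, highest-precedence override.
    config.update(cli_overrides or {})
    return config
```

Note the `update` calls here are shallow merges; a real loader would merge nested sections key by key.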
Configuration File Locations¶
Primary Location (OS-agnostic): ~/.bmlibrarian/config.json
Platform-Specific Paths:
| Platform | Path |
|---|---|
| Windows | C:\Users\[username]\.bmlibrarian\config.json |
| macOS | /Users/[username]/.bmlibrarian/config.json |
| Linux | /home/[username]/.bmlibrarian/config.json |
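All three rows of the table resolve to the same expression with `pathlib`, which is handy when scripting against the config file (a small sketch, not part of BMLibrarian's API):

```python
from pathlib import Path

# Path.home() resolves the per-user home directory on Windows, macOS,
# and Linux, yielding each platform-specific path in the table above.
CONFIG_PATH = Path.home() / ".bmlibrarian" / "config.json"
print(CONFIG_PATH)
```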
Initial Setup¶
Quick Setup¶
# 1. Install dependencies
uv sync
# 2. Copy environment template
cp .env.example .env
# 3. Edit environment variables
nano .env
# 4. Launch configuration GUI (easiest method)
uv run python bmlibrarian_config_gui.py
Environment Variables¶
Create a .env file in the project root:
# Database Configuration
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=knowledgebase
POSTGRES_USER=your_username
POSTGRES_PASSWORD=your_password
# Optional: PDF file storage
PDF_BASE_DIR=~/knowledgebase/pdf
# Optional: Ollama server (if not localhost:11434)
OLLAMA_BASE_URL=http://localhost:11434
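As an illustration of how these variables could be consumed, the sketch below builds a connection-parameter dict from `os.environ`; whether BMLibrarian reads them exactly this way is an assumption:

```python
import os

# Fall back to the defaults shown above when a variable is unset.
db_params = {
    "host": os.environ.get("POSTGRES_HOST", "localhost"),
    "port": int(os.environ.get("POSTGRES_PORT", "5432")),
    "database": os.environ.get("POSTGRES_DB", "knowledgebase"),
    "user": os.environ.get("POSTGRES_USER", ""),
    "password": os.environ.get("POSTGRES_PASSWORD", ""),
}
```

Note that `os.environ` does not parse `.env` files by itself; a loader such as python-dotenv (or a shell `export`) must place the values into the environment first.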
Configuration Structure¶
Complete Configuration Schema¶
{
"general": {
"ollama_base_url": "http://localhost:11434",
"database_params": {
"host": "localhost",
"port": 5432,
"database": "knowledgebase",
"user": "username",
"password": "password"
},
"cli_defaults": {
"max_results": 100,
"score_threshold": 2.5,
"max_citations": 30,
"timeout": 120,
"show_progress": true,
"auto_mode": false,
"comprehensive_counterfactual": false
}
},
"agents": {
"query_agent": {
"model": "gpt-oss:20b",
"temperature": 0.1,
"top_p": 0.9,
"max_tokens": 2048
},
"scoring_agent": {
"model": "medgemma4B_it_q8:latest",
"temperature": 0.0,
"top_p": 0.8,
"max_tokens": 512
},
"citation_agent": {
"model": "gpt-oss:20b",
"temperature": 0.2,
"top_p": 0.9,
"max_tokens": 2048
},
"reporting_agent": {
"model": "gpt-oss:20b",
"temperature": 0.3,
"top_p": 0.9,
"max_tokens": 8192
}
}
}
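Reading one agent's block out of this schema is a plain JSON lookup; a minimal sketch (the helper name is hypothetical):

```python
import json
from pathlib import Path

def agent_settings(name, config_path=None):
    """Return the parameter block for one agent from config.json."""
    path = Path(config_path or Path.home() / ".bmlibrarian" / "config.json")
    config = json.loads(path.read_text())
    return config["agents"][name]

# agent_settings("scoring_agent") -> {"model": "...", "temperature": 0.0, ...}
```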
Agent Configuration¶
Each agent supports common parameters plus agent-specific settings.
Common Parameters¶
| Parameter | Range | Description |
|---|---|---|
| model | - | Ollama model name |
| temperature | 0.0-2.0 | Response randomness |
| top_p | 0.0-1.0 | Nucleus sampling parameter |
| max_tokens | - | Maximum response length |
Temperature Guidelines¶
| Task Type | Temperature | Examples |
|---|---|---|
| Deterministic | 0.0-0.2 | Query generation, scoring |
| Balanced | 0.2-0.5 | Report writing, editing |
| Creative | 0.5-0.8 | Counterfactual analysis |
Model Configuration¶
Recommended Models¶
High-Quality Models (complex reasoning):
- gpt-oss:20b - Best overall performance
- llama3.1:70b - Excellent reasoning
- mixtral:8x7b - Good balance
Fast Models (quick processing):
- medgemma4B_it_q8:latest - Medical domain optimized
- llama3.1:8b - General purpose, fast
- mistral:7b - Efficient processing
Model Installation¶
# Install recommended models
ollama pull gpt-oss:20b
ollama pull medgemma4B_it_q8:latest
ollama pull llama3.1:8b
# List installed models
ollama list
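To check which of the recommended models are actually present, Ollama's HTTP API lists installed models at `GET /api/tags`; a small sketch (the helper names are illustrative):

```python
import json
from urllib.request import urlopen

def parse_tags(payload):
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in payload["models"]]

def installed_models(base_url="http://localhost:11434"):
    """Query a running Ollama server for its installed models."""
    with urlopen(f"{base_url}/api/tags") as resp:
        return parse_tags(json.load(resp))

# missing = {"gpt-oss:20b", "llama3.1:8b"} - set(installed_models())
```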
Model Assignment Examples¶
High-Performance Setup:
{
"query_agent": {"model": "gpt-oss:20b"},
"scoring_agent": {"model": "medgemma4B_it_q8:latest"},
"citation_agent": {"model": "llama3.1:8b"},
"reporting_agent": {"model": "gpt-oss:20b"}
}
Fast Setup (testing/development):
{
"query_agent": {"model": "medgemma4B_it_q8:latest"},
"scoring_agent": {"model": "medgemma4B_it_q8:latest"},
"citation_agent": {"model": "medgemma4B_it_q8:latest"},
"reporting_agent": {"model": "llama3.1:8b"}
}
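A preset like the ones above only needs to touch each agent's `model` field; the merge helper below (hypothetical, not a BMLibrarian function) keeps the other tuned parameters intact:

```python
def apply_preset(config, preset):
    """Merge a model-assignment preset into a config dict in place,
    preserving temperature, top_p, and other existing settings."""
    agents = config.setdefault("agents", {})
    for agent, settings in preset.items():
        agents.setdefault(agent, {}).update(settings)
    return config

fast_preset = {
    "query_agent": {"model": "medgemma4B_it_q8:latest"},
    "reporting_agent": {"model": "llama3.1:8b"},
}
```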
Configuration Methods¶
1. Configuration GUI (Recommended)¶
Features:
- Tabbed interface for each agent
- Model selection with live refresh
- Connection testing
- Parameter validation
2. Manual File Editing¶
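You can open ~/.bmlibrarian/config.json in any text editor; when scripting edits, writing through a temporary file avoids corrupting the configuration if the process dies mid-write. A sketch (the helper is illustrative, not part of BMLibrarian):

```python
import json
from pathlib import Path

def set_option(path, section, key, value):
    """Change one setting in config.json via an atomic replace."""
    path = Path(path)
    config = json.loads(path.read_text())
    config.setdefault(section, {})[key] = value
    tmp = path.parent / (path.name + ".tmp")
    tmp.write_text(json.dumps(config, indent=2))
    tmp.replace(path)  # atomic on POSIX filesystems

# set_option(Path.home() / ".bmlibrarian" / "config.json",
#            "general", "ollama_base_url", "http://localhost:11434")
```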
3. Command Line Arguments¶
# Override specific settings
uv run python bmlibrarian_cli.py --max-results 50 --score-threshold 3.0
Backup and Migration¶
Backup Configuration¶
# Create backup
cp ~/.bmlibrarian/config.json ~/.bmlibrarian/config_backup_$(date +%Y%m%d).json
# Restore backup
cp ~/.bmlibrarian/config_backup_20240315.json ~/.bmlibrarian/config.json
Reset to Defaults¶
rm ~/.bmlibrarian/gui_config.json
rm ~/.bmlibrarian/config.json
# Defaults will be recreated on next launch
Security Best Practices¶
- Use environment variables for passwords
- Avoid committing credentials to version control
- Set secure file permissions: chmod 600 ~/.bmlibrarian/config.json
- Use HTTPS for remote Ollama servers
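The permissions rule above can be audited from Python with a quick check (an illustrative snippet, not shipped with BMLibrarian; permission bits behave differently on Windows):

```python
import stat
from pathlib import Path

def check_config_security(path):
    """Return False if the file is readable or writable by group/others."""
    mode = Path(path).stat().st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        print(f"warning: {path} is accessible to other users; run chmod 600 on it")
        return False
    return True
```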