Workflow Guide

BMLibrarian uses an advanced enum-based workflow system to guide you through comprehensive medical literature research. This flexible system supports iterative refinement, allowing agents to request additional evidence and enabling you to enhance your research quality through multiple passes.

Workflow Overview

The BMLibrarian research workflow consists of 12 main steps:

graph TD
    A[Research Question] --> B[Query Generation]
    B --> C[Document Search]
    C --> D[Results Review]
    D --> E[Document Scoring]
    E --> F[Citation Extraction]
    F --> G[Report Generation]
    G --> H{Counterfactual?}
    H -->|Yes| I[Contradictory Search]
    I --> J[Comprehensive Report]
    H -->|No| K[Export]
    J --> K
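Since the guide describes this as an enum-based workflow, the twelve steps above can be sketched as a Python enum. This is an illustrative sketch only; the class and member names here are assumptions, not BMLibrarian's actual API.

```python
from enum import Enum, auto

# Illustrative sketch of an enum-based workflow: one member per step
# in the diagram above. Names are hypothetical, not BMLibrarian's own.
class WorkflowStep(Enum):
    RESEARCH_QUESTION = auto()
    QUERY_GENERATION = auto()
    DOCUMENT_SEARCH = auto()
    RESULTS_REVIEW = auto()
    DOCUMENT_SCORING = auto()
    CITATION_EXTRACTION = auto()
    REPORT_GENERATION = auto()
    COUNTERFACTUAL_ANALYSIS = auto()
    CONTRADICTORY_SEARCH = auto()
    COMPREHENSIVE_REPORT = auto()
    REPORT_REVISION = auto()
    EXPORT = auto()

print(len(WorkflowStep))  # 12 steps, matching the workflow overview
```

An enum like this lets the workflow engine track its current position and lets agents request a jump back to an earlier step for iterative refinement.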

Workflow Steps

1. Research Question Collection

What it does: Collects your medical research question

Example:

"What are the cardiovascular benefits of exercise?"

2. Query Generation & Editing

What it does: Converts your question to a database search query

User interaction: Review and optionally edit the generated PostgreSQL query

Example output:

exercise & (cardiovascular | cardiac | heart) & (health | wellness | function)
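The operators in this query follow PostgreSQL full-text `tsquery` syntax: `&` means AND, `|` means OR, and parentheses group synonyms. A minimal sketch of how such a query string could be assembled (the helper function and term lists are illustrative, not BMLibrarian's implementation):

```python
# Sketch: building a tsquery-style string from a required term plus
# OR-groups of synonyms (& = AND, | = OR). Illustrative only.
def build_tsquery(required, synonym_groups):
    """Join a required term with parenthesized OR-groups of synonyms."""
    parts = [required]
    for group in synonym_groups:
        parts.append("(" + " | ".join(group) + ")")
    return " & ".join(parts)

query = build_tsquery(
    "exercise",
    [["cardiovascular", "cardiac", "heart"],
     ["health", "wellness", "function"]],
)
print(query)
# exercise & (cardiovascular | cardiac | heart) & (health | wellness | function)
```

Because the query is plain text, you can edit it directly at this step, e.g. adding another OR-group to broaden the search.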

3. Document Search

What it does: Searches the medical literature database

Typical results: 10-100 relevant medical papers

4. Search Results Review

What it does: Displays found documents for review

Options:

  • Proceed with current results
  • Refine search query
  • Adjust search parameters

5. Document Relevance Scoring

What it does: AI scores each document (1-5) for relevance

Scoring criteria:

Score  Meaning
  5    Highly relevant, directly addresses question
  4    Very relevant, substantial related content
  3    Moderately relevant, some useful information
  2    Somewhat relevant, limited useful content
  1    Minimally relevant, tangential information
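Downstream steps keep only documents at or above a relevance cutoff. A minimal sketch of that filter, assuming a score field on each document (the field names and threshold are illustrative, not the actual schema):

```python
# Sketch: filtering scored documents by the 1-5 relevance scale above.
# Field names and the example records are hypothetical.
scored_docs = [
    {"title": "HIIT and cardiac output", "score": 5},
    {"title": "Exercise and bone density", "score": 2},
    {"title": "Aerobic training outcomes", "score": 4},
]

THRESHOLD = 3  # keep "moderately relevant" (3) or higher
relevant = [d for d in scored_docs if d["score"] >= THRESHOLD]
print([d["title"] for d in relevant])
```

Lowering the threshold admits more documents at the cost of relevance; raising it tightens the evidence base.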

6. Citation Extraction

What it does: Extracts specific passages that answer your question

Output: Specific quotes with document references and relevance scores
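One way to picture a single extracted citation is as a record carrying the quote, a pointer back to its source document, and the relevance score. This shape is an assumption for illustration, not BMLibrarian's actual output format:

```python
from dataclasses import dataclass

# Sketch: a hypothetical citation record; field names and the example
# values are illustrative, not the real BMLibrarian schema.
@dataclass
class Citation:
    passage: str      # verbatim quote that answers the research question
    document_id: str  # reference back to the source paper
    relevance: int    # 1-5 score carried over from document scoring

c = Citation(
    passage="Regular aerobic exercise reduced systolic blood pressure.",
    document_id="doc-042",  # placeholder identifier
    relevance=5,
)
print(c.document_id, c.relevance)
```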

7. Report Generation

What it does: Synthesizes citations into a medical publication-style report

Report includes:

  • Executive summary
  • Detailed findings with citations
  • Evidence strength assessment
  • Limitations and caveats

8. Counterfactual Analysis (Optional)

What it does: Identifies claims and generates research questions for finding contradictory evidence

Generates:

  • Main claims identified in the report
  • Research questions to find contradictory evidence
  • Confidence level recommendations

9. Contradictory Evidence Search (Optional)

What it does: Searches for studies that might contradict the report findings

Purpose: Provides balanced perspective and identifies potential study limitations

10. Comprehensive Report Editing

What it does: Creates a balanced final report integrating all evidence

Enhanced report includes:

  • Balanced presentation of findings
  • Integration of contradictory evidence
  • Evidence quality tables
  • Confidence assessments

11. Report Review & Revision (Optional)

What it does: Allows iterative improvement of the final report

12. Report Export

What it does: Saves the final report to a markdown file
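The export step amounts to writing the assembled report text to a `.md` file. A minimal sketch, assuming the report is held as a markdown string (the filename and content here are illustrative):

```python
from pathlib import Path

# Sketch: saving a finished report to a markdown file, as step 12 does.
# The report text and filename are placeholders.
report = "# Example Report\n\n## Executive Summary\n\nFindings go here.\n"
out = Path("Example_Report.md")
out.write_text(report, encoding="utf-8")
print(f"Saved to: {out}")
```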

Usage Modes

Interactive Mode (Default)

Full human-in-the-loop workflow with:

  • Manual review at each step
  • Ability to refine and iterate
  • Quality control checkpoints

uv run python bmlibrarian_cli_refactored.py

Auto Mode

Automated execution with minimal interaction:

uv run python bmlibrarian_cli_refactored.py --auto "Your research question"

Quick Mode

Reduced scope for rapid testing:

  • 20 documents maximum
  • 2-minute timeout
  • Lower quality thresholds

uv run python bmlibrarian_cli_refactored.py --quick
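The three modes differ mainly in scope limits and how much they ask of you. A sketch of those differences as a configuration table; the quick-mode values come from the list above, while the entries for the other two modes are assumptions (`None` meaning no hard limit):

```python
# Sketch: per-mode scope limits. Quick-mode numbers are from the docs;
# interactive/auto values are illustrative assumptions, not confirmed.
MODES = {
    "interactive": {"max_documents": None, "timeout_s": None, "automated": False},
    "auto":        {"max_documents": None, "timeout_s": None, "automated": True},
    "quick":       {"max_documents": 20,   "timeout_s": 120,  "automated": True},
}

print(MODES["quick"])
```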

Best Practices

Research Question Formulation

Be Specific

"Effects of high-intensity interval training on Type 2 diabetes" is better than "Exercise and diabetes"

  • Use medical terminology
  • Define scope (population, intervention, outcome)
  • Consider timeframe

Search Strategy

  1. Start broad: Begin with general terms, then refine
  2. Review results: Check if search captured intended studies
  3. Iterate: Don't hesitate to refine your query multiple times
  4. Use synonyms: Try alternative medical terminology
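The "use synonyms" step above can be mechanized by expanding a single term into a tsquery OR-group. A hedged sketch; the synonym map and helper are illustrative, not part of BMLibrarian:

```python
# Sketch: expanding one search term into a tsquery OR-group of synonyms.
# The synonym table is a hypothetical example, not BMLibrarian's.
SYNONYMS = {
    "heart": ["cardiac", "cardiovascular", "coronary"],
    "diabetes": ["T2DM", "hyperglycemia"],
}

def expand(term):
    """Return the term OR-grouped with its known synonyms, if any."""
    alts = SYNONYMS.get(term, [])
    return "(" + " | ".join([term] + alts) + ")" if alts else term

print(expand("heart"))  # (heart | cardiac | cardiovascular | coronary)
print(expand("stroke"))  # stroke (no synonyms registered, returned as-is)
```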

Quality Control

  • Review scores for consistency
  • Verify extracted passages answer your question
  • Check that cited studies are real and accessible
  • Assess evidence strength

Example Session

# Start interactive workflow
uv run python bmlibrarian_cli_refactored.py

# 1. Enter research question
"What are the effects of Mediterranean diet on cardiovascular disease prevention?"

# 2. Review and approve generated query
# AI generates: mediterranean & diet & (cardiovascular | cardiac | heart) & (disease | prevention)

# 3. Review search results
# Found 45 relevant studies

# 4. Review document scores
# 15 documents scored 3+ for relevance

# 5. Review extracted citations
# 23 high-quality citations extracted

# 6. Review generated report
# Comprehensive report with evidence synthesis

# 7. Perform counterfactual analysis
# Identified 3 potential contradictory research questions

# 8. Search contradictory evidence
# Found 2 studies with conflicting results

# 9. Review comprehensive report
# Balanced analysis including contradictory evidence

# 10. Export final report
# Saved to: Mediterranean_Diet_Cardiovascular_Prevention_Report.md

Troubleshooting

No Documents Found

  1. Check research question for typos
  2. Try more general terms
  3. Use alternative medical terminology
  4. Verify the topic is covered in the database

Low-Quality Citations

  1. Raise the citation relevance threshold
  2. Increase the document score threshold
  3. Refine your research question
  4. Request additional citations

Auto Mode Failures

  1. Switch to interactive mode for complex queries
  2. Verify your research question is clear
  3. Use quick mode for initial exploration