Improved Biomni support

Timothy Kassis
2025-10-22 08:38:06 -07:00
parent 71a3c3750f
commit 77822efeed
9 changed files with 2512 additions and 3393 deletions

# Biomni API Reference
Comprehensive API documentation for the Biomni biomedical AI agent framework.

## A1 Agent Class

The A1 class is the primary interface for executing biomedical research tasks.

### Initialization
```python
from biomni.agent import A1
agent = A1(
    path: str,              # Path to data lake directory
    llm: str,               # LLM model identifier
    verbose: bool = True,   # Enable verbose logging
    mcp_config: str = None  # Path to MCP server configuration
)
```
**Parameters:**

- **`path`** (str, required) - Directory path for the Biomni data lake (~11GB). Data is downloaded automatically on first use if not present.
- **`llm`** (str, required) - LLM model identifier. Options include:
  - `'claude-sonnet-4-20250514'` - Recommended for balanced performance
  - `'claude-opus-4-20250514'` - Maximum capability
  - `'gpt-4'`, `'gpt-4-turbo'` - OpenAI models
  - `'gemini-2.0-flash-exp'` - Google Gemini
  - `'llama-3.3-70b-versatile'` - Via Groq
  - Custom model endpoints via provider configuration
- **`verbose`** (bool, optional, default=True) - Enable detailed logging of agent reasoning, tool use, and code execution.
- **`mcp_config`** (str, optional) - Path to an MCP (Model Context Protocol) server configuration file for external tool integration.

**Returns:** An A1 agent instance ready for task execution.
**Example:**
```python
# Basic initialization
agent = A1(path='./biomni_data', llm='claude-sonnet-4-20250514')

# With MCP integration
agent = A1(
    path='./biomni_data',
    llm='claude-sonnet-4-20250514',
    mcp_config='./.biomni/mcp_config.json'
)
```
### Core Methods
#### `go(query: str) -> str`
Execute a biomedical research task autonomously.
```python
result = agent.go("Analyze this scRNA-seq dataset and identify cell types")
```
**Parameters:**

- **`query`** (str) - Natural language description of the biomedical task to execute. Be specific about the data location and format, the desired analysis or output, any required methods or parameters, and the expected results format.

**Returns:**

- **`str`** - Final answer or analysis result from the agent
**Behavior:**

1. Decomposes the query into executable sub-tasks
2. Retrieves relevant knowledge from the integrated databases
3. Generates and executes Python code for analysis
4. Iterates on results until task completion
5. Returns the final synthesized answer

**Notes:**

- Executes code with system privileges - use in sandboxed environments
- Long-running tasks may require timeout adjustments
- Intermediate results are displayed during execution
**Example:**
```python
result = agent.go("""
Identify genes associated with Alzheimer's disease from GWAS data.
Perform pathway enrichment analysis on top hits.
""")
print(result)
```
#### `save_conversation_history(output_path: str, format: str = 'pdf')`

Save the complete conversation history, including the task, reasoning, code, and results, as a formatted report.

```python
agent.save_conversation_history(
    output_path: str,
    format: str = 'pdf'
)
```
**Parameters:**

- **`output_path`** (str) - File path for the saved report
- **`format`** (str, optional, default='pdf') - Output format: `'pdf'`, `'html'`, or `'markdown'`
**Requirements:**
- For PDF: Install one of: WeasyPrint, markdown2pdf, or Pandoc
```bash
pip install weasyprint # Recommended
# or
pip install markdown2pdf
# or install Pandoc system-wide
```
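A small helper (hypothetical, not part of the Biomni API) can check which of these backends is available before exporting:

```python
import importlib.util
import shutil

def detect_pdf_backend():
    """Return the first available PDF backend from the options above, or None."""
    for module in ("weasyprint", "markdown2pdf"):
        # find_spec returns None when the package is not installed
        if importlib.util.find_spec(module) is not None:
            return module
    # Pandoc is a system binary rather than a Python package
    if shutil.which("pandoc"):
        return "pandoc"
    return None
```

Call this before `save_conversation_history` to fail fast with a clear install hint instead of a mid-export error.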
**Example:**
```python
agent.save_conversation_history('reports/alzheimers_gwas_analysis.pdf')
```
**Report Contents:**
- Task description and parameters
- Retrieved biomedical knowledge
- Generated code with execution traces
- Results, visualizations, and outputs
- Timestamps and execution metadata
#### `reset()`

Reset agent state and clear conversation history.

```python
agent.reset()
```

Use this when starting a new independent task to clear previous context.

**Example:**

```python
# Task 1
agent.go("Analyze dataset A")
agent.save_conversation_history("task1.pdf")

# Reset for fresh context
agent.reset()

# Task 2 - independent of Task 1
agent.go("Analyze dataset B")
```
## Configuration

### default_config

Global configuration parameters accessible via `biomni.config.default_config`.

```python
from biomni.config import default_config

# LLM Configuration
default_config.llm = "claude-sonnet-4-20250514"
default_config.llm_temperature = 0.7

# Execution Parameters
default_config.timeout_seconds = 1200  # 20 minutes
default_config.max_iterations = 50     # Max reasoning loops
default_config.max_tokens = 4096       # Max tokens per LLM call

# Code Execution
default_config.enable_code_execution = True
default_config.sandbox_mode = False    # Enable for restricted execution

# Data and Caching
default_config.data_path = "/path/to/biomni/data"
default_config.data_cache_dir = "./biomni_cache"
default_config.enable_caching = True
```

**Supported Models:**

- **Anthropic:** `claude-sonnet-4-20250514` (recommended), `claude-opus-4-20250514`, `claude-3-5-sonnet-20241022`, `claude-3-opus-20240229`
- **OpenAI:** `gpt-4o`, `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`
- **Azure OpenAI:** `azure/gpt-4`, `azure/<deployment-name>`
- **Google Gemini:** `gemini/gemini-pro`, `gemini/gemini-1.5-pro`
- **Groq:** `groq/llama-3.1-70b-versatile`, `groq/mixtral-8x7b-32768`
- **Ollama (local):** `ollama/llama3`, `ollama/mistral`, `ollama/<model-name>`
- **AWS Bedrock:** `bedrock/anthropic.claude-v2`, `bedrock/anthropic.claude-3-sonnet`
- **Custom/Biomni-R0:** `openai/biomni-r0` (requires a local SGLang deployment)

**Key Parameters:**

- **`timeout_seconds`** (int, default=1200) - Maximum time for task execution. Recommended values: 300-600s for simple tasks (QC, basic analysis), 600-1200s for medium tasks (differential expression, clustering), 1200-3600s for complex pipelines and ML models, 3600s+ for very complex tasks.
- **`max_iterations`** (int, default=50) - Maximum agent reasoning loops. Prevents infinite loops.
- **`max_tokens`** (int, default=4096) - Maximum tokens per LLM call.
- **`enable_code_execution`** (bool, default=True) - Allow the agent to execute generated code. Disable for code generation only.
- **`sandbox_mode`** (bool, default=False) - Enable sandboxed code execution (requires additional setup).
- **`data_path`** (str) - Path to the biomedical knowledge base. Initial download is ~11GB, ~15GB extracted; allow 5-10GB of additional working space.
- **`api_base`** (str) - Custom API endpoint for OpenAI-compatible providers, e.g. `http://localhost:30000/v1` for a local Biomni-R0 deployment.
- **`max_retries`** (int, default=3) - Number of retry attempts for failed operations.

Call `default_config.reset()` to restore all values to system defaults.
## BiomniEval1 Evaluation Framework
Framework for benchmarking agent performance on biomedical tasks.
### Initialization
```python
from biomni.eval import BiomniEval1

evaluator = BiomniEval1(
    dataset_path: str = None,  # Path to evaluation dataset
    metrics: list = None       # Evaluation metrics to compute
)
```

**Example:**

```python
evaluator = BiomniEval1()
```

### Methods

#### `evaluate(task_type: str, instance_id: str, answer: str) -> float`

Evaluate an agent-generated answer against ground truth.
```python
score = evaluator.evaluate(
    task_type: str,    # Task category
    instance_id: str,  # Specific task instance
    answer: str        # Agent-generated answer
)
```
**Parameters:**

- **`task_type`** (str) - Task category: `'crispr_design'`, `'scrna_analysis'`, `'gwas_interpretation'`, `'drug_admet'`, `'clinical_diagnosis'`
- **`instance_id`** (str) - Unique identifier for the task instance from the dataset
- **`answer`** (str) - The agent's answer to evaluate

**Returns:**

- **`float`** - Evaluation score (0.0 to 1.0)
**Example:**

```python
# Generate answer
result = agent.go("Design CRISPR screen for autophagy genes")

# Evaluate
score = evaluator.evaluate(
    task_type='crispr_design',
    instance_id='autophagy_001',
    answer=result
)
print(f"Score: {score:.2f}")
```
#### `load_dataset() -> dict`
Load the Biomni-Eval1 benchmark dataset.
```python
dataset = evaluator.load_dataset()
```

**Returns:**

- **`dict`** - Dictionary with task instances organized by task type

**Example:**

```python
dataset = evaluator.load_dataset()
for task_type, instances in dataset.items():
    print(f"{task_type}: {len(instances)} instances")
```
#### `run_benchmark(agent: A1, task_types: list = None) -> dict`
Run the full benchmark evaluation on an agent.

```python
results = evaluator.run_benchmark(
    agent: A1,               # Agent instance to evaluate
    task_types: list = None  # Specific task types, or None for all
)
```

**Returns:**

- **`dict`** - Results with scores, timing, and detailed metrics per task

**Example:**

```python
results = evaluator.run_benchmark(
    agent=agent,
    task_types=['crispr_design', 'scrna_analysis']
)
print(f"Overall accuracy: {results['mean_score']:.2f}")
print(f"Average time: {results['mean_time']:.1f}s")
```
## Data Lake API

Access the integrated biomedical databases programmatically.

### Gene Database Queries

```python
from biomni.data import GeneDB

gene_db = GeneDB(path='./biomni_data')

# Query gene information
gene_info = gene_db.get_gene('BRCA1')
# Returns: {'symbol': 'BRCA1', 'name': '...', 'function': '...', ...}

# Search genes by pathway
pathway_genes = gene_db.search_by_pathway('DNA repair')
# Returns: List of gene symbols in the pathway

# Get gene interactions
interactions = gene_db.get_interactions('TP53')
# Returns: List of interacting genes with interaction types
```
### Protein Structure Access

```python
from biomni.data import ProteinDB

protein_db = ProteinDB(path='./biomni_data')

# Get AlphaFold structure
structure = protein_db.get_structure('P38398')  # BRCA1 UniProt ID
# Returns: Path to PDB file or structure object

# Search the PDB database
pdb_entries = protein_db.search_pdb('kinase', resolution_max=2.5)
# Returns: List of PDB IDs matching criteria
```
### Clinical Data Access

```python
from biomni.data import ClinicalDB

clinical_db = ClinicalDB(path='./biomni_data')

# Query ClinVar variants
variant_info = clinical_db.get_variant('rs429358')  # APOE4 variant
# Returns: {'significance': '...', 'disease': '...', 'frequency': ...}

# Search OMIM for a disease
disease_info = clinical_db.search_omim('Alzheimer')
# Returns: List of OMIM entries with gene associations
```
### Literature Search

```python
from biomni.data import LiteratureDB

lit_db = LiteratureDB(path='./biomni_data')

# Search PubMed abstracts
papers = lit_db.search('CRISPR screening cancer', max_results=10)
# Returns: List of paper dictionaries with titles, abstracts, PMIDs

# Get citations for a paper
citations = lit_db.get_citations('PMID:12345678')
# Returns: List of citing papers
```
## MCP Server Integration

Extend Biomni with external tools via the Model Context Protocol (MCP).

### Configuration Format

Create `.biomni/mcp_config.json`:
```json
{
  "servers": {
    "fda-drugs": {
      "command": "python",
      "args": ["-m", "mcp_server_fda"],
      "env": {
        "FDA_API_KEY": "${FDA_API_KEY}"
      }
    },
    "web-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    }
  }
}
```
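The `${VAR}` references in each server's `env` block are placeholders resolved from the process environment. As an illustration of how such a file could be loaded (this loader is hypothetical, not part of the Biomni API), `os.path.expandvars` performs exactly this expansion:

```python
import json
import os

def load_mcp_config(path):
    """Load an MCP config file, expanding ${VAR} references in each server's env map."""
    with open(path) as f:
        config = json.load(f)
    for server in config.get("servers", {}).values():
        env = server.get("env", {})
        # os.path.expandvars leaves unset variables as literal "${VAR}" text
        server["env"] = {key: os.path.expandvars(value) for key, value in env.items()}
    return config
```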
### Using MCP Tools in Tasks
```python
# Initialize with MCP config
agent = A1(
    path='./data',
    llm='claude-sonnet-4-20250514',
    mcp_config='./.biomni/mcp_config.json'
)
```
```python
# The agent can now use MCP tools automatically
result = agent.go("""
Search for FDA-approved drugs targeting EGFR.
Get their approval dates and indications.
""")
# The agent uses the fda-drugs MCP server automatically
```
## Error Handling

Common exceptions and handling strategies:

```python
from biomni.exceptions import (
    BiomniException,
    LLMError,
    CodeExecutionError,
    DataNotFoundError,
    TimeoutError
)

try:
    result = agent.go("Complex biomedical task")
except TimeoutError:
    # Task exceeded timeout_seconds
    print("Task timed out. Consider increasing the timeout.")
    default_config.timeout_seconds = 3600
except CodeExecutionError as e:
    # Generated code failed to execute
    print(f"Code execution error: {e}")
    # Review the generated code in the conversation history
except DataNotFoundError:
    # Required data not in the data lake
    print("Data not found. Ensure the data lake is downloaded.")
except LLMError as e:
    # LLM API error
    print(f"LLM error: {e}")
    # Check API keys and rate limits
```
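For transient failures such as timeouts or rate-limited LLM calls, retrying with exponential backoff is a common pattern. A generic sketch (not part of the Biomni API; pass whichever of the exceptions imported above should be retried):

```python
import time

def run_with_retries(task_fn, max_attempts=3, base_delay=1.0, retryable=(Exception,)):
    """Call task_fn, retrying on the given exceptions with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task_fn()
        except retryable as exc:
            if attempt == max_attempts:
                raise  # out of attempts; propagate the last error
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
```

Usage might look like `run_with_retries(lambda: agent.go("..."), retryable=(TimeoutError, LLMError))`.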
## Best Practices
### Efficient API Usage

1. **Reuse agent instances** for related tasks to maintain context
2. **Set appropriate timeouts** based on task complexity
3. **Use caching** to avoid redundant data downloads
4. **Monitor iterations** to detect reasoning loops early

### Production Deployment

```python
from biomni.agent import A1
from biomni.config import default_config
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)

# Production settings
default_config.timeout_seconds = 3600
default_config.max_iterations = 100
default_config.sandbox_mode = True  # Enable sandboxing

# Initialize with error handling
try:
    agent = A1(path='/data/biomni', llm='claude-sonnet-4-20250514')
    result = agent.go(task_query)
    agent.save_conversation_history(f'reports/{task_id}.pdf')
except Exception as e:
    logging.error(f"Task {task_id} failed: {e}")
    # Handle failure appropriately
```
### Memory Management

For large-scale analyses, clear agent state between chunks:

```python
chunk_results = []
for chunk in dataset_chunks:
    agent.reset()  # Clear memory between chunks
    result = agent.go(f"Analyze chunk: {chunk}")
    chunk_results.append(result)

# Combine results (combine_results is user-defined aggregation)
final_result = combine_results(chunk_results)
```
### Reproducibility
Ensure reproducible analyses by:
1. **Fixing random seeds:**
```python
agent.go("Set random seed to 42 for all analyses, then perform clustering...")
```
2. **Logging configuration:**
```python
import json
from datetime import datetime

config_log = {
    'llm': default_config.llm,
    'timeout': default_config.timeout_seconds,
    'data_path': default_config.data_path,
    'timestamp': datetime.now().isoformat()
}
with open('config_log.json', 'w') as f:
    json.dump(config_log, f, indent=2)
```
3. **Saving execution traces:**
```python
# Always save detailed reports
agent.save_conversation_history('./reports/full_analysis.pdf')
```
## Performance Optimization
### Model Selection Strategy
Choose models based on task characteristics:
```python
# For exploratory, simple tasks
default_config.llm = "gpt-3.5-turbo" # Fast, cost-effective
# For standard biomedical analyses
default_config.llm = "claude-sonnet-4-20250514" # Recommended
# For complex reasoning and hypothesis generation
default_config.llm = "claude-opus-4-20250514" # Highest quality
# For specialized biological reasoning
default_config.llm = "openai/biomni-r0" # Requires local deployment
```
### Timeout Tuning
Set appropriate timeouts based on task complexity:
```python
# Quick queries and simple analyses
default_config.timeout_seconds = 300

# Standard workflows
default_config.timeout_seconds = 1200

# Full pipelines with ML training
default_config.timeout_seconds = 3600
```
### Caching and Reuse
Reuse agent instances for multiple related tasks:
```python
# Create agent once
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
# Execute multiple related tasks
tasks = [
    "Load and QC the scRNA-seq dataset",
    "Perform clustering with resolution 0.5",
    "Identify marker genes for each cluster",
    "Annotate cell types based on markers",
]
for task in tasks:
    agent.go(task)

# Save the complete workflow
agent.save_conversation_history('./reports/full_workflow.pdf')
```

# Biomni Use Cases and Examples
Comprehensive examples demonstrating Biomni across biomedical research domains.
## Table of Contents
1. [CRISPR Screening and Gene Editing](#crispr-screening-and-gene-editing)
2. [Single-Cell RNA-seq Analysis](#single-cell-rna-seq-analysis)
3. [Drug Discovery and ADMET](#drug-discovery-and-admet)
4. [GWAS and Genetic Analysis](#gwas-and-genetic-analysis)
5. [Clinical Genomics and Diagnostics](#clinical-genomics-and-diagnostics)
6. [Protein Structure and Function](#protein-structure-and-function)
7. [Literature and Knowledge Synthesis](#literature-and-knowledge-synthesis)
8. [Multi-Omics Integration](#multi-omics-integration)
---
## CRISPR Screening and Gene Editing
### Example 1: Genome-Wide CRISPR Screen Design
**Task:** Design a CRISPR knockout screen to identify genes regulating autophagy.
```python
from biomni.agent import A1
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Design a genome-wide CRISPR knockout screen to identify genes regulating
autophagy in HEK293 cells.
Requirements:
1. Generate comprehensive sgRNA library targeting all protein-coding genes
2. Design 4 sgRNAs per gene with optimal on-target and minimal off-target scores
3. Include positive controls (known autophagy regulators: ATG5, BECN1, ULK1)
4. Include negative controls (non-targeting sgRNAs)
5. Prioritize genes based on:
- Existing autophagy pathway annotations
- Protein-protein interactions with known autophagy factors
- Expression levels in HEK293 cells
6. Output sgRNA sequences, scores, and gene prioritization rankings
Provide analysis as Python code and interpret results.
""")
agent.save_conversation_history("autophagy_screen_design.pdf")
```
**Expected Output:**
- sgRNA library with ~80,000 guides (4 per gene × ~20,000 genes)
- On-target and off-target scores for each sgRNA
- Prioritized gene list based on pathway enrichment
- Quality control metrics for library design
### Example 2: CRISPR Off-Target Prediction
```python
result = agent.go("""
Analyze potential off-target effects for this sgRNA sequence:
GCTGAAGATCCAGTTCGATG
Tasks:
1. Identify all genomic locations with ≤3 mismatches
2. Score each potential off-target site
3. Assess likelihood of cleavage at off-target sites
4. Recommend whether sgRNA is suitable for use
5. If unsuitable, suggest alternative sgRNAs for the same gene
""")
```
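A genome-wide off-target search requires an alignment tool, but the mismatch-tolerance criterion in the prompt (≤3 mismatches) reduces to simple positional comparison. A toy sketch (illustrative, not the agent's actual method):

```python
def count_mismatches(sgrna, site):
    """Count positional mismatches between an sgRNA and a same-length genomic site."""
    if len(sgrna) != len(site):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(sgrna, site))

def is_potential_off_target(sgrna, site, max_mismatches=3):
    """Flag candidate sites within the tolerance used above (<= 3 mismatches)."""
    return count_mismatches(sgrna, site) <= max_mismatches

guide = "GCTGAAGATCCAGTTCGATG"
print(count_mismatches(guide, guide))  # 0
```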
### Example 3: Screen Hit Analysis
```python
result = agent.go("""
Analyze CRISPR screen results from autophagy phenotype screen.
Input file: screen_results.csv
Columns: sgRNA_ID, gene, log2_fold_change, p_value, FDR
Tasks:
1. Identify significant hits (FDR < 0.05, |LFC| > 1.5)
2. Perform gene ontology enrichment on hit genes
3. Map hits to known autophagy pathways
4. Identify novel candidates not previously linked to autophagy
5. Predict functional relationships between hit genes
6. Generate visualization of hit genes in pathway context
""")
```
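Step 1 of this prompt (FDR < 0.05, |LFC| > 1.5) is a plain filter over the CSV rows. A minimal sketch using the column names given above:

```python
import csv

def significant_hits(rows, fdr_cutoff=0.05, lfc_cutoff=1.5):
    """Filter screen results to significant hits (FDR < 0.05, |LFC| > 1.5)."""
    return [
        r for r in rows
        if float(r["FDR"]) < fdr_cutoff and abs(float(r["log2_fold_change"])) > lfc_cutoff
    ]

# Usage with the input file from the prompt:
# with open("screen_results.csv") as f:
#     hits = significant_hits(csv.DictReader(f))
```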
---
## Single-Cell RNA-seq Analysis
### Example 1: Cell Type Annotation
**Task:** Analyze single-cell RNA-seq data and annotate cell populations.
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Analyze single-cell RNA-seq dataset from human PBMC sample.
File: pbmc_data.h5ad (10X Genomics format)
Workflow:
1. Quality control:
- Filter cells with <200 or >5000 detected genes
- Remove cells with >20% mitochondrial content
- Filter genes detected in <3 cells
2. Normalization and preprocessing:
- Normalize to 10,000 reads per cell
- Log-transform
- Identify highly variable genes
- Scale data
3. Dimensionality reduction:
- PCA (50 components)
- UMAP visualization
4. Clustering:
- Leiden algorithm with resolution=0.8
- Identify cluster markers (Wilcoxon rank-sum test)
5. Cell type annotation:
- Annotate clusters using marker genes:
* T cells (CD3D, CD3E)
* B cells (CD79A, MS4A1)
* NK cells (GNLY, NKG7)
* Monocytes (CD14, LYZ)
* Dendritic cells (FCER1A, CST3)
6. Generate UMAP plots with annotations and export results
""")
agent.save_conversation_history("pbmc_scrna_analysis.pdf")
```
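Step 5's marker-based annotation can be approximated by scoring each cluster's top genes against the marker table from the prompt. A simplified sketch (real annotation would weigh expression levels, not just set overlap):

```python
# Marker map mirroring the prompt above (illustrative)
MARKERS = {
    "T cells": {"CD3D", "CD3E"},
    "B cells": {"CD79A", "MS4A1"},
    "NK cells": {"GNLY", "NKG7"},
    "Monocytes": {"CD14", "LYZ"},
    "Dendritic cells": {"FCER1A", "CST3"},
}

def annotate_cluster(top_genes, markers=MARKERS):
    """Assign the cell type whose markers overlap most with a cluster's top genes."""
    scores = {
        cell_type: len(genes & set(top_genes))
        for cell_type, genes in markers.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unknown"

print(annotate_cluster(["CD3D", "CD3E", "IL7R"]))  # T cells
```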
### Example 2: Differential Expression Analysis
```python
result = agent.go("""
Perform differential expression analysis between conditions in scRNA-seq data.
Data: pbmc_treated_vs_control.h5ad
Conditions: treated (drug X) vs control
Tasks:
1. Identify differentially expressed genes for each cell type
2. Use statistical tests appropriate for scRNA-seq (MAST or Wilcoxon)
3. Apply multiple testing correction (Benjamini-Hochberg)
4. Threshold: |log2FC| > 0.5, adjusted p < 0.05
5. Perform pathway enrichment on DE genes per cell type
6. Identify cell-type-specific drug responses
7. Generate volcano plots and heatmaps
""")
```
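Step 3's Benjamini-Hochberg correction fits in a few lines of plain Python; shown here for reference (in practice `statsmodels.stats.multitest.multipletests` does the same):

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```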
### Example 3: Trajectory Analysis
```python
result = agent.go("""
Perform pseudotime trajectory analysis on differentiation dataset.
Data: hematopoiesis_scrna.h5ad
Starting population: Hematopoietic stem cells (HSCs)
Analysis:
1. Subset to hematopoietic lineages
2. Compute diffusion map or PAGA for trajectory inference
3. Order cells along pseudotime
4. Identify genes with dynamic expression along trajectory
5. Cluster genes by expression patterns
6. Map trajectories to known differentiation pathways
7. Visualize key transcription factors driving differentiation
""")
```
---
## Drug Discovery and ADMET
### Example 1: ADMET Property Prediction
**Task:** Predict ADMET properties for drug candidates.
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Predict ADMET properties for these drug candidates:
Compounds (SMILES format):
1. CC1=C(C=C(C=C1)NC(=O)C2=CC=C(C=C2)CN3CCN(CC3)C)NC4=NC=CC(=N4)C5=CN=CC=C5
2. CN1CCN(CC1)C2=C(C=C3C(=C2)N=CN=C3NC4=CC=C(C=C4)F)OC
3. CC(C)(C)NC(=O)N(CC1=CC=CC=C1)C2CCN(CC2)C(=O)C3=CC4=C(C=C3)OCO4
For each compound, predict:
**Absorption:**
- Caco-2 permeability (cm/s)
- Human intestinal absorption (HIA %)
- Oral bioavailability
**Distribution:**
- Plasma protein binding (%)
- Blood-brain barrier penetration (BBB+/-)
- Volume of distribution (L/kg)
**Metabolism:**
- CYP450 substrate/inhibitor predictions (2D6, 3A4, 2C9, 2C19)
- Metabolic stability (T1/2)
**Excretion:**
- Clearance (mL/min/kg)
- Half-life (hours)
**Toxicity:**
- hERG IC50 (cardiotoxicity risk)
- Hepatotoxicity prediction
- Ames mutagenicity
- LD50 estimates
Provide predictions with confidence scores and flag any red flags.
""")
agent.save_conversation_history("admet_predictions.pdf")
```
### Example 2: Target Identification
```python
result = agent.go("""
Identify potential protein targets for Alzheimer's disease drug development.
Tasks:
1. Query GWAS data for Alzheimer's-associated genes
2. Identify genes with druggable domains (kinases, GPCRs, ion channels, etc.)
3. Check for brain expression patterns
4. Assess disease relevance via literature mining
5. Evaluate existing chemical probe availability
6. Rank targets by:
- Genetic evidence strength
- Druggability
- Lack of existing therapies
7. Suggest target validation experiments
""")
```
### Example 3: Virtual Screening
```python
result = agent.go("""
Perform virtual screening for EGFR kinase inhibitors.
Database: ZINC15 lead-like subset (~6M compounds)
Target: EGFR kinase domain (PDB: 1M17)
Workflow:
1. Prepare protein structure (remove waters, add hydrogens)
2. Define binding pocket (based on erlotinib binding site)
3. Generate pharmacophore model from known EGFR inhibitors
4. Filter ZINC database by:
- Molecular weight: 200-500 Da
- LogP: 0-5
- Lipinski's rule of five
- Pharmacophore match
5. Dock top 10,000 compounds
6. Score by docking energy and predicted binding affinity
7. Select top 100 for further analysis
8. Predict ADMET properties for top hits
9. Recommend top 10 compounds for experimental validation
""")
```
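Step 4's property filter can be expressed directly. This sketch assumes descriptors (`mw`, `logp`, `hbd`, `hba`) were already computed, e.g. by a cheminformatics library, and the ZINC IDs are placeholders:

```python
# Hedged sketch of the screening filter: molecular weight and LogP windows
# plus Lipinski's rule of five (MW <= 500, LogP <= 5, HBD <= 5, HBA <= 10).
def passes_filters(d):
    lipinski_violations = sum([
        d["mw"] > 500,
        d["logp"] > 5,
        d["hbd"] > 5,
        d["hba"] > 10,
    ])
    return (200 <= d["mw"] <= 500
            and 0 <= d["logp"] <= 5
            and lipinski_violations == 0)

compounds = [
    {"id": "ZINC0001", "mw": 349.4, "logp": 2.1, "hbd": 1, "hba": 5},
    {"id": "ZINC0002", "mw": 612.7, "logp": 5.8, "hbd": 3, "hba": 9},
]
kept = [c["id"] for c in compounds if passes_filters(c)]
```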
---
## GWAS and Genetic Analysis
### Example 1: GWAS Summary Statistics Analysis
**Task:** Interpret GWAS results and identify causal genes.
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Analyze GWAS summary statistics for Type 2 Diabetes.
Input file: t2d_gwas_summary.txt
Columns: CHR, BP, SNP, P, OR, BETA, SE, A1, A2
Analysis steps:
1. Identify genome-wide significant variants (P < 5e-8)
2. Perform LD clumping to identify independent signals
3. Map variants to genes using:
- Nearest gene
- eQTL databases (GTEx)
- Hi-C chromatin interactions
4. Prioritize causal genes using multiple evidence:
- Fine-mapping scores
- Coding variant consequences
- Gene expression in relevant tissues (pancreas, liver, adipose)
- Pathway enrichment
5. Identify druggable targets among causal genes
6. Compare with known T2D genes and highlight novel associations
7. Generate Manhattan plot, QQ plot, and gene prioritization table
""")
agent.save_conversation_history("t2d_gwas_analysis.pdf")
```
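Steps 1 and 3 of the GWAS workflow reduce to a p-value threshold plus a nearest-gene lookup. A toy sketch (variant and gene coordinates are invented, and real gene mapping would also weigh eQTL and Hi-C evidence):

```python
P_GENOME_WIDE = 5e-8

# Hypothetical toy data: (snp, chrom, pos, p) tuples and gene TSS positions
variants = [("rs1", "1", 1_200_000, 3e-9), ("rs2", "1", 9_000_000, 0.02)]
gene_tss = {"1": [(1_150_000, "GENE_A"), (5_000_000, "GENE_B")]}

def nearest_gene(chrom, pos):
    """Map a variant to its nearest gene by TSS distance ('nearest gene' in step 3)."""
    return min(gene_tss[chrom], key=lambda g: abs(g[0] - pos))[1]

hits = [(snp, nearest_gene(c, pos))
        for snp, c, pos, p in variants
        if p < P_GENOME_WIDE]                     # step 1: genome-wide significance
```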
### Example 2: Polygenic Risk Score
```python
result = agent.go("""
Develop and validate a polygenic risk score (PRS) for coronary artery disease (CAD).
Training GWAS: CAD_discovery_summary_stats.txt (N=180,000)
Validation cohort: CAD_validation_genotypes.vcf (N=50,000)
Tasks:
1. Select variants for PRS using p-value thresholding (P < 1e-5)
2. Perform LD clumping (r² < 0.1, 500kb window)
3. Calculate PRS weights from GWAS betas
4. Compute PRS for validation cohort individuals
5. Evaluate PRS performance:
- AUC for CAD case/control discrimination
- Odds ratios across PRS deciles
- Compare to traditional risk factors (age, sex, BMI, smoking)
6. Assess PRS calibration and create risk stratification plot
7. Identify high-risk individuals (top 5% PRS)
""")
```
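Steps 3-5 have a compact numerical core: the PRS is a dosage-weighted sum of GWAS effect sizes, and case/control discrimination can be checked with a rank-based AUC. A self-contained sketch on toy genotypes (all values invented):

```python
import numpy as np

betas = np.array([0.12, -0.05, 0.30])   # per-allele log-odds from the discovery GWAS
dosages = np.array([                    # individuals x variants, 0/1/2 effect alleles
    [2, 0, 1],
    [0, 1, 0],
    [1, 2, 2],
])
prs = dosages @ betas                   # step 4: PRS per individual

def auc(scores, labels):
    """Rank-based AUC via the Mann-Whitney U statistic (no tie handling)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```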
### Example 3: Variant Pathogenicity Prediction
```python
result = agent.go("""
Predict pathogenicity of rare coding variants in candidate disease genes.
Variants (VCF format):
- chr17:41234451:A>G (BRCA1 p.Arg1347Gly)
- chr2:179428448:C>T (TTN p.Trp13579*)
- chr7:117188679:G>A (CFTR p.Gly542Ser)
For each variant, assess:
1. In silico predictions (SIFT, PolyPhen2, CADD, REVEL)
2. Population frequency (gnomAD)
3. Evolutionary conservation (PhyloP, PhastCons)
4. Protein structure impact (using AlphaFold structures)
5. Functional domain location
6. ClinVar annotations (if available)
7. Literature evidence
8. ACMG/AMP classification criteria
Provide pathogenicity classification (benign, likely benign, VUS, likely pathogenic, pathogenic) with supporting evidence.
""")
```
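The final classification step can be approximated with a point tally over ACMG evidence codes, in the spirit of published point-based adaptations of the guidelines. The weights and thresholds below are illustrative only and are no substitute for the full ACMG/AMP rubric:

```python
# Illustrative evidence weights: pathogenic codes add, benign codes subtract.
STRENGTH = {"PVS": 8, "PS": 4, "PM": 2, "PP": 1, "BS": -4, "BP": -1}

def classify(evidence):
    """Crude five-tier call from evidence codes like 'PVS1', 'PM2', 'BS1'."""
    score = sum(STRENGTH[code.rstrip("0123456789")] for code in evidence)
    if score >= 10:
        return "pathogenic"
    if score >= 6:
        return "likely pathogenic"
    if score <= -7:
        return "benign"
    if score <= -2:
        return "likely benign"
    return "VUS"
```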
---
## Clinical Genomics and Diagnostics
### Example 1: Rare Disease Diagnosis
**Task:** Diagnose rare genetic disease from whole exome sequencing.
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Analyze whole exome sequencing (WES) data for rare disease diagnosis.
Patient phenotypes (HPO terms):
- HP:0001250 (Seizures)
- HP:0001249 (Intellectual disability)
- HP:0001263 (Global developmental delay)
- HP:0001252 (Hypotonia)
VCF file: patient_trio.vcf (proband + parents)
Analysis workflow:
1. Variant filtering:
- Quality filters (QUAL > 30, DP > 10, GQ > 20)
- Frequency filters (gnomAD AF < 0.01)
- Functional impact (missense, nonsense, frameshift, splice site)
2. Inheritance pattern analysis:
- De novo variants
- Autosomal recessive (compound het, homozygous)
- X-linked
3. Phenotype-driven prioritization:
- Match candidate genes to HPO terms
- Use HPO-gene associations
- Check gene expression in relevant tissues (brain)
4. Variant pathogenicity assessment:
- In silico predictions
- ACMG classification
- Literature evidence
5. Generate diagnostic report with:
- Top candidate variants
- Supporting evidence
- Functional validation suggestions
- Genetic counseling recommendations
""")
agent.save_conversation_history("rare_disease_diagnosis.pdf")
```
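Step 1's quality, frequency, and impact filters map directly onto a pandas expression. A sketch on a toy variant table whose column names are assumptions, not a real annotation schema:

```python
import pandas as pd

variants = pd.DataFrame({
    "gene":      ["SCN1A", "TTN", "MECP2"],
    "qual":      [250.0, 40.0, 15.0],
    "dp":        [45, 12, 8],
    "gq":        [99, 60, 21],
    "gnomad_af": [0.0, 0.004, 0.2],
    "impact":    ["missense", "synonymous", "frameshift"],
})

KEEP_IMPACTS = {"missense", "nonsense", "frameshift", "splice_site"}
passing = variants[
    (variants.qual > 30) & (variants.dp > 10) & (variants.gq > 20)   # quality
    & (variants.gnomad_af < 0.01)                                    # frequency
    & variants.impact.isin(KEEP_IMPACTS)                             # impact
]
```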
### Example 2: Cancer Genomics Analysis
```python
result = agent.go("""
Analyze tumor-normal paired sequencing for cancer genomics.
Files:
- tumor_sample.vcf (somatic variants)
- tumor_rnaseq.bam (gene expression)
- tumor_cnv.seg (copy number variants)
Analysis:
1. Identify driver mutations:
- Known cancer genes (COSMIC, OncoKB)
- Recurrent hotspot mutations
- Truncating mutations in tumor suppressors
2. Analyze mutational signatures:
- Decompose signatures (COSMIC signatures)
- Identify mutagenic processes
3. Copy number analysis:
- Identify amplifications and deletions
- Focal vs. arm-level events
- Assess oncogene amplifications and TSG deletions
4. Gene expression analysis:
- Identify outlier gene expression
- Fusion transcript detection
- Pathway dysregulation
5. Therapeutic implications:
- Match alterations to FDA-approved therapies
- Identify clinical trial opportunities
- Predict response to targeted therapies
6. Generate precision oncology report
""")
```
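Step 5 is essentially a lookup of observed alterations against an actionability table. A sketch with a toy table (a real pipeline would query a curated resource such as OncoKB or CIViC rather than a hard-coded dict):

```python
# Toy stand-in for a curated actionability table; pairings are illustrative.
ACTIONABLE = {
    ("EGFR", "L858R"): "osimertinib",
    ("BRAF", "V600E"): "dabrafenib + trametinib",
    ("ERBB2", "amplification"): "trastuzumab",
}

def match_therapies(alterations):
    """Return the subset of alterations with a matched therapy."""
    return {alt: ACTIONABLE[alt] for alt in alterations if alt in ACTIONABLE}

tumor = [("EGFR", "L858R"), ("TP53", "R175H")]
matches = match_therapies(tumor)
```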
### Example 3: Pharmacogenomics
```python
result = agent.go("""
Generate a pharmacogenomics report from patient genotype data.
VCF file: patient_pgx.vcf
Analyze variants affecting drug metabolism:
**CYP450 genes:**
- CYP2D6 (affects ~25% of drugs)
- CYP2C19 (clopidogrel, PPIs, antidepressants)
- CYP2C9 (warfarin, NSAIDs)
- CYP3A5 (tacrolimus, immunosuppressants)
**Drug transporter genes:**
- SLCO1B1 (statin myopathy risk)
- ABCB1 (P-glycoprotein)
**Drug targets:**
- VKORC1 (warfarin dosing)
- DPYD (fluoropyrimidine toxicity)
- TPMT (thiopurine toxicity)
For each gene:
1. Determine diplotype (*1/*1, *1/*2, etc.)
2. Assign metabolizer phenotype (PM, IM, NM, RM, UM)
3. Provide dosing recommendations using CPIC/PharmGKB guidelines
4. Flag high-risk drug-gene interactions
5. Suggest alternative medications if needed
Generate patient-friendly report with actionable recommendations.
""")
```
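Steps 1-2 can be sketched as an activity-score lookup for CYP2D6. The allele values follow the common CPIC convention (*1 normal = 1, *10 decreased = 0.25, *4 null = 0), but both values and phenotype cutoffs should be verified against current CPIC tables before any clinical use:

```python
# Assumed allele activity values; check CPIC allele functionality tables.
ALLELE_ACTIVITY = {"*1": 1.0, "*2": 1.0, "*4": 0.0, "*5": 0.0, "*10": 0.25}

def cyp2d6_phenotype(diplotype):
    """Translate a diplotype like '*1/*4' into a metabolizer phenotype."""
    a1, a2 = diplotype.split("/")
    score = ALLELE_ACTIVITY[a1] + ALLELE_ACTIVITY[a2]
    if score == 0:
        return "poor metabolizer"
    if score < 1.25:
        return "intermediate metabolizer"
    if score <= 2.25:
        return "normal metabolizer"
    return "ultrarapid metabolizer"
```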
---
## Protein Structure and Function
### Example 1: AlphaFold Structure Analysis
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Analyze AlphaFold structure prediction for novel protein.
Protein: Hypothetical protein ABC123 (UniProt: Q9XYZ1)
Tasks:
1. Retrieve AlphaFold structure from database
2. Assess prediction quality:
- pLDDT scores per residue
- Identify high-confidence regions (pLDDT > 90)
- Flag low-confidence regions (pLDDT < 50)
3. Structural analysis:
- Identify domains using structural alignment
- Predict fold family
- Identify secondary structure elements
4. Functional prediction:
- Search for structural homologs in PDB
- Identify conserved functional sites
- Predict binding pockets
- Suggest possible ligands/substrates
5. Variant impact analysis:
- Map disease-associated variants to structure
- Predict structural consequences
- Identify variants affecting binding sites
6. Generate PyMOL visualization scripts highlighting key features
""")
agent.save_conversation_history("alphafold_analysis.pdf")
```
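Step 2's quality assessment amounts to binning per-residue pLDDT into AlphaFold's standard confidence bands (>90 very high, 70-90 confident, 50-70 low, <50 very low). A sketch with made-up scores; real values come from the B-factor column of the AlphaFold PDB/mmCIF file:

```python
def confidence_bands(plddt):
    """Count residues per AlphaFold pLDDT confidence band."""
    bands = {"very_high": 0, "high": 0, "low": 0, "very_low": 0}
    for score in plddt:
        if score > 90:
            bands["very_high"] += 1
        elif score > 70:
            bands["high"] += 1
        elif score > 50:
            bands["low"] += 1
        else:
            bands["very_low"] += 1
    return bands

bands = confidence_bands([95.2, 91.0, 72.5, 48.3])
```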
### Example 2: Protein-Protein Interaction Prediction
```python
result = agent.go("""
Predict and analyze protein-protein interactions for autophagy pathway.
Query proteins: ATG5, ATG12, ATG16L1
Analysis:
1. Retrieve known interactions from:
- STRING database
- BioGRID
- IntAct
- Literature mining
2. Predict novel interactions using:
- Structural modeling (AlphaFold-Multimer)
- Coexpression analysis
- Phylogenetic profiling
3. Analyze interaction interfaces:
- Identify binding residues
- Assess interface properties (area, hydrophobicity)
- Predict binding affinity
4. Functional analysis:
- Map interactions to autophagy pathway steps
- Identify regulatory interactions
- Predict complex stoichiometry
5. Therapeutic implications:
- Identify druggable interfaces
- Suggest peptide inhibitors
- Design disruption strategies
Generate network visualization and interaction details.
""")
```
---
## Literature and Knowledge Synthesis
### Example 1: Systematic Literature Review
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Perform systematic literature review on CRISPR base editing applications.
Search query: "CRISPR base editing" OR "base editor" OR "CBE" OR "ABE"
Date range: 2016-2025
Tasks:
1. Search PubMed and retrieve relevant abstracts
2. Filter for original research articles
3. Extract key information:
- Base editor type (CBE, ABE, dual)
- Target organism/cell type
- Application (disease model, therapy, crop improvement)
- Editing efficiency
- Off-target assessment
4. Categorize applications:
- Therapeutic applications (by disease)
- Agricultural applications
- Basic research
5. Analyze trends:
- Publications over time
- Most studied diseases
- Evolution of base editor technology
6. Synthesize findings:
- Clinical trial status
- Remaining challenges
- Future directions
Generate comprehensive review document with citation statistics.
""")
agent.save_conversation_history("crispr_base_editing_review.pdf")
```
### Example 2: Gene Function Synthesis
```python
result = agent.go("""
Synthesize knowledge about gene function from multiple sources.
Target gene: PARK7 (DJ-1)
Integrate information from:
1. **Genetic databases:**
- NCBI Gene
- UniProt
- OMIM
2. **Expression data:**
- GTEx tissue expression
- Human Protein Atlas
- Single-cell expression atlases
3. **Functional data:**
- GO annotations
- KEGG pathways
- Reactome
- Protein interactions (STRING)
4. **Disease associations:**
- ClinVar variants
- GWAS catalog
- Disease databases (DisGeNET)
5. **Literature:**
- PubMed abstracts
- Key mechanistic studies
- Review articles
Synthesize into comprehensive gene report:
- Molecular function
- Biological processes
- Cellular localization
- Tissue distribution
- Disease associations
- Known drug targets/inhibitors
- Unresolved questions
Generate structured summary suitable for research planning.
""")
```
---
## Multi-Omics Integration
### Example 1: Multi-Omics Disease Analysis
```python
agent = A1(path='./data', llm='claude-sonnet-4-20250514')
result = agent.go("""
Integrate multi-omics data to understand disease mechanism.
Disease: Alzheimer's disease
Data types:
- Genomics: GWAS summary statistics (gwas_ad.txt)
- Transcriptomics: Brain RNA-seq (controls vs AD, rnaseq_data.csv)
- Proteomics: CSF proteomics (proteomics_csf.csv)
- Metabolomics: Plasma metabolomics (metabolomics_plasma.csv)
- Epigenomics: Brain methylation array (methylation_data.csv)
Integration workflow:
1. Analyze each omics layer independently:
- Identify significantly altered features
- Perform pathway enrichment
2. Cross-omics correlation:
- Correlate gene expression with protein levels
- Link genetic variants to expression (eQTL)
- Associate methylation with gene expression
- Connect proteins to metabolites
3. Network analysis:
- Build multi-omics network
- Identify key hub genes/proteins
- Detect disease modules
4. Causal inference:
- Prioritize drivers vs. consequences
- Identify therapeutic targets
- Predict drug mechanisms
5. Generate integrative model of AD pathogenesis
Provide visualization and therapeutic target recommendations.
""")
agent.save_conversation_history("ad_multiomics_analysis.pdf")
```
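Step 2's cross-omics correlation reduces to per-gene Pearson r between matched layers. A synthetic-data sketch for the mRNA-protein case (the 0.5 concordance cutoff is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_genes = 50, 4

# Synthetic matched layers: protein tracks mRNA with added noise
mrna = rng.normal(size=(n_samples, n_genes))
protein = 0.8 * mrna + 0.3 * rng.normal(size=(n_samples, n_genes))

r_per_gene = np.array([
    np.corrcoef(mrna[:, g], protein[:, g])[0, 1] for g in range(n_genes)
])
concordant = np.flatnonzero(r_per_gene > 0.5)   # genes with mRNA-protein agreement
```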
### Example 2: Systems Biology Modeling
```python
result = agent.go("""
Build a systems biology model of a metabolic pathway.
Pathway: Glycolysis
Data sources:
- Enzyme kinetics (BRENDA database)
- Metabolite concentrations (literature)
- Gene expression (tissue-specific, GTEx)
- Flux measurements (C13 labeling studies)
Modeling tasks:
1. Construct pathway model:
- Define reactions and stoichiometry
- Parameterize enzyme kinetics (Km, Vmax, Ki)
- Set initial metabolite concentrations
2. Simulate pathway dynamics:
- Steady-state analysis
- Time-course simulations
- Sensitivity analysis
3. Constraint-based modeling:
- Flux balance analysis (FBA)
- Identify bottleneck reactions
- Predict metabolic engineering strategies
4. Integrate with gene expression:
- Tissue-specific model predictions
- Disease vs. normal comparisons
5. Therapeutic predictions:
- Enzyme inhibition effects
- Metabolic rescue strategies
- Drug target identification
Generate model in SBML format and simulation results.
""")
```
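Step 2's dynamics can be illustrated with a single irreversible Michaelis-Menten step (S -> P) integrated by explicit Euler. Parameters are illustrative, not BRENDA-derived; a real model would use an ODE solver and the full reaction network:

```python
def simulate_mm(s0, vmax, km, dt=0.01, t_end=10.0):
    """Integrate v = Vmax*S/(Km+S) forward in time; returns final (S, P)."""
    s, p, t = s0, 0.0, 0.0
    while t < t_end:
        rate = vmax * s / (km + s)
        s -= rate * dt          # substrate consumed
        p += rate * dt          # product formed (mass is conserved)
        t += dt
    return s, p

s, p = simulate_mm(s0=5.0, vmax=1.0, km=0.5)
```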
---
## Best Practices for Task Formulation
### 1. Be Specific and Detailed
**Poor:**
```python
agent.go("Analyze this RNA-seq data")
```
**Good:**
```python
agent.go("""
Analyze bulk RNA-seq data from cancer vs. normal samples.
Files: cancer_rnaseq.csv (raw gene counts, 50 cancer, 50 normal samples)
Tasks:
1. Differential expression (DESeq2, padj < 0.05, |log2FC| > 1)
2. Pathway enrichment (KEGG, Reactome)
3. Generate volcano plot and top DE gene heatmap
""")
```
### 2. Include File Paths and Formats
Always specify:
- Exact file paths
- File formats (VCF, BAM, CSV, H5AD, etc.)
- Data structure (columns, sample IDs)
### 3. Set Clear Success Criteria
Define thresholds and cutoffs:
- Statistical significance (P < 0.05, FDR < 0.1)
- Fold change thresholds
- Quality filters
- Expected outputs
### 4. Request Visualizations
Explicitly ask for plots:
- Volcano plots, MA plots
- Heatmaps, PCA plots
- Network diagrams
- Manhattan plots
### 5. Specify Biological Context
Include:
- Organism (human, mouse, etc.)
- Tissue/cell type
- Disease/condition
- Treatment details
### 6. Request Interpretations
Ask agent to:
- Interpret biological significance
- Suggest follow-up experiments
- Identify limitations
- Provide literature context
---
## Common Patterns
### Data Quality Control
```python
"""
Before analysis, perform quality control:
1. Check for missing values
2. Assess data distributions
3. Identify outliers
4. Generate QC report
Only proceed with analysis if data passes QC.
"""
```
### Iterative Refinement
```python
"""
Perform analysis in stages:
1. Initial exploratory analysis
2. Based on results, refine parameters
3. Focus on interesting findings
4. Generate final report
Show intermediate results for each stage.
"""
```
### Reproducibility
```python
"""
Ensure reproducibility:
1. Set random seeds where applicable
2. Log all parameters used
3. Save intermediate files
4. Export environment info (package versions)
5. Generate methods section for paper
"""
```
These examples demonstrate the breadth of biomedical tasks biomni can handle. Adapt the patterns to your specific research questions, and always include sufficient detail for the agent to execute autonomously.