mirror of
https://github.com/K-Dense-AI/claude-scientific-skills.git
synced 2026-03-28 07:33:45 +08:00
Improved Biomni support
This commit is contained in:
# Biomni API Reference

This document provides comprehensive API documentation for the Biomni biomedical AI agent system.

## A1 Agent Class

The `A1` class is the primary interface to Biomni: an agent for executing biomedical research tasks autonomously.

### Initialization

```python
from biomni.agent import A1

agent = A1(
    path='./data',                    # Path to the biomedical data lake
    llm='claude-sonnet-4-20250514',   # LLM model identifier
    timeout=None,                     # Optional timeout in seconds
    verbose=True,                     # Enable detailed logging
    mcp_config=None                   # Optional path to MCP server configuration
)
```

**Parameters:**

- **`path`** (str, required) - Directory path where the biomedical data lake is stored or will be downloaded. First-time initialization downloads ~11GB of data automatically.
- **`llm`** (str, optional) - LLM model identifier. Defaults to `default_config.llm`. Options include:
  - `'claude-sonnet-4-20250514'` - Recommended for balanced performance
  - `'claude-opus-4-20250514'` - Maximum capability
  - `'gpt-4'`, `'gpt-4-turbo'` - OpenAI models
  - `'gemini-2.0-flash-exp'` - Google Gemini
  - `'llama-3.3-70b-versatile'` - Via Groq
  - Custom model endpoints via provider configuration (see the Supported Models list under Configuration)
- **`timeout`** (int, optional) - Maximum execution time in seconds for agent operations. Overrides `default_config.timeout_seconds`.
- **`verbose`** (bool, optional, default=True) - Enable detailed logging of agent reasoning, tool use, and code execution.
- **`mcp_config`** (str, optional) - Path to MCP (Model Context Protocol) server configuration file for external tool integration.

**Returns:** An `A1` agent instance ready for task execution.

**Example:**

```python
# Basic initialization
agent = A1(path='./biomni_data', llm='claude-sonnet-4-20250514')

# With MCP integration
agent = A1(
    path='./biomni_data',
    llm='claude-sonnet-4-20250514',
    mcp_config='./.biomni/mcp_config.json'
)
```

### Core Methods

#### `go(query: str) -> str`

Execute a biomedical research task autonomously.

```python
result = agent.go("Analyze this scRNA-seq dataset and identify cell types")
```

**Parameters:**

- **`query`** (str, required) - Natural language description of the biomedical task to execute. Be specific about:
  - Data location and format
  - Desired analysis or output
  - Any specific methods or parameters
  - Expected results format

**Returns:**

- **`str`** - Final answer or analysis result from the agent

**Behavior:**

1. Decomposes the query into executable sub-tasks
2. Retrieves relevant knowledge from the data lake and integrated databases
3. Generates and executes Python/R code for analysis
4. Iterates on results, handling errors and retrying with refinement
5. Returns the final synthesized answer

**Notes:**

- Executes code with system privileges - use in sandboxed environments
- Long-running tasks may require timeout adjustments
- Intermediate results are displayed during execution

**Example:**

```python
result = agent.go("""
Identify genes associated with Alzheimer's disease from GWAS data.
Perform pathway enrichment analysis on top hits.
""")
print(result)
```

#### `save_conversation_history(output_path: str, format: str = 'pdf') -> None`

Save the complete conversation history (task, reasoning, code, and results) as a formatted report.

```python
agent.save_conversation_history(
    output_path='./reports/analysis_log.pdf',
    format='pdf'
)
```

**Parameters:**

- **`output_path`** (str, required) - File path for the saved report
- **`format`** (str, optional, default='pdf') - Output format: `'pdf'`, `'html'`, or `'markdown'`

**Requirements:**

For PDF output, install one of WeasyPrint, markdown2pdf, or Pandoc:

```bash
pip install weasyprint  # Recommended
# or
pip install markdown2pdf
# or install Pandoc system-wide
```

**Example:**

```python
agent.save_conversation_history('reports/alzheimers_gwas_analysis.pdf')
```

**Report Contents:**

- Task description and parameters
- Retrieved biomedical knowledge
- Generated code with execution traces
- Results, visualizations, and outputs
- Timestamps and execution metadata

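Since PDF export depends on an optional backend, a script can probe for one before choosing a format. A minimal sketch, assuming only the backends listed above; the `pick_report_format` helper is illustrative and not part of the Biomni API:

```python
import importlib.util

def pick_report_format() -> str:
    """Prefer 'pdf' when a known PDF backend is importable, else fall back to 'markdown'."""
    # weasyprint and markdown2pdf are the pip-installable backends listed above
    for backend in ("weasyprint", "markdown2pdf"):
        if importlib.util.find_spec(backend) is not None:
            return "pdf"
    return "markdown"
```

A caller can then write `agent.save_conversation_history('report', format=pick_report_format())` and degrade gracefully on machines without a PDF backend.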
#### `reset() -> None`

Reset agent state and clear conversation history. Use when starting a new independent task to clear previous context.

```python
agent.reset()
```

**Example:**

```python
# Task 1
agent.go("Analyze dataset A")
agent.save_conversation_history("task1.pdf")

# Reset for fresh context
agent.reset()

# Task 2 - independent of Task 1
agent.go("Analyze dataset B")
```

#### `add_mcp(config_path: str) -> None`

Add Model Context Protocol (MCP) tools to extend agent capabilities.

```python
agent.add_mcp(config_path='./mcp_tools_config.json')
```

**Parameters:**

- **`config_path`** (str, required) - Path to MCP configuration JSON file

**MCP Configuration Format:**

```json
{
  "tools": [
    {
      "name": "tool_name",
      "endpoint": "http://localhost:8000/tool",
      "description": "Tool description for LLM",
      "parameters": {
        "param1": "string",
        "param2": "integer"
      }
    }
  ]
}
```

**Use Cases:**

- Connect to laboratory information systems
- Integrate proprietary databases
- Access specialized computational resources
- Link to institutional data repositories

## Configuration

### `default_config`

Global configuration object for Biomni settings, accessible via `biomni.config.default_config`.

```python
from biomni.config import default_config
```

#### Attributes

##### `llm: str`

Default LLM model identifier for all agent instances.

```python
default_config.llm = "claude-sonnet-4-20250514"
default_config.llm_temperature = 0.7
```

**Supported Models:**

**Anthropic:**
- `claude-sonnet-4-20250514` (Recommended)
- `claude-opus-4-20250514`
- `claude-3-5-sonnet-20241022`
- `claude-3-opus-20240229`

**OpenAI:**
- `gpt-4o`
- `gpt-4`
- `gpt-4-turbo`
- `gpt-3.5-turbo`

**Azure OpenAI:**
- `azure/gpt-4`
- `azure/<deployment-name>`

**Google Gemini:**
- `gemini/gemini-pro`
- `gemini/gemini-1.5-pro`

**Groq:**
- `groq/llama-3.1-70b-versatile`
- `groq/mixtral-8x7b-32768`

**Ollama (Local):**
- `ollama/llama3`
- `ollama/mistral`
- `ollama/<model-name>`

**AWS Bedrock:**
- `bedrock/anthropic.claude-v2`
- `bedrock/anthropic.claude-3-sonnet`

**Custom/Biomni-R0:**
- `openai/biomni-r0` (requires local SGLang deployment)

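As the lists above show, non-default providers use a `provider/model` prefix convention. A small helper for splitting identifiers in user scripts (illustrative only; Biomni performs its own routing internally):

```python
def split_model_id(model_id: str) -> tuple:
    """Split 'provider/model' identifiers; bare names carry no provider prefix."""
    if "/" in model_id:
        provider, model = model_id.split("/", 1)
        return provider, model
    return None, model_id

print(split_model_id("groq/llama-3.1-70b-versatile"))  # ('groq', 'llama-3.1-70b-versatile')
print(split_model_id("claude-sonnet-4-20250514"))      # (None, 'claude-sonnet-4-20250514')
```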
##### `timeout_seconds: int`

Default timeout for agent operations in seconds.

```python
# Execution parameters
default_config.timeout_seconds = 1200   # 20 minutes
default_config.max_iterations = 50      # Max reasoning loops
default_config.max_tokens = 4096        # Max tokens per LLM call

# Code execution
default_config.enable_code_execution = True
default_config.sandbox_mode = False     # Enable for restricted execution

# Data and caching
default_config.data_cache_dir = "./biomni_cache"
default_config.enable_caching = True
```

**Recommended Values:**

- Simple tasks (QC, basic analysis): 300-600 seconds
- Medium tasks (differential expression, clustering): 600-1200 seconds
- Complex tasks (full pipelines, ML models): 1200-3600 seconds
- Very complex tasks: 3600+ seconds

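The tiers above can be captured in a lookup so scripts pick a timeout by task class instead of hard-coding seconds. A sketch; `TIMEOUT_TIERS` and `pick_timeout` are illustrative names, not part of the Biomni API:

```python
# Upper bounds of the recommended ranges above, keyed by task class.
TIMEOUT_TIERS = {
    "simple": 600,    # QC, basic analysis
    "medium": 1200,   # differential expression, clustering
    "complex": 3600,  # full pipelines, ML models
}

def pick_timeout(tier: str) -> int:
    """Return a timeout in seconds, defaulting to the 'complex' tier."""
    return TIMEOUT_TIERS.get(tier, TIMEOUT_TIERS["complex"])

print(pick_timeout("medium"))  # 1200
```

The value can then be passed straight to `A1(path=..., timeout=pick_timeout("medium"))`.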
**Key Parameters:**

- **`timeout_seconds`** (int, default=1200) - Maximum time for task execution. Increase for complex analyses.
- **`max_iterations`** (int, default=50) - Maximum agent reasoning loops. Prevents infinite loops.
- **`enable_code_execution`** (bool, default=True) - Allow the agent to execute generated code. Disable for code generation only.
- **`sandbox_mode`** (bool, default=False) - Enable sandboxed code execution (requires additional setup).

##### `data_path: str`

Default path to the biomedical knowledge base.

```python
default_config.data_path = "/path/to/biomni/data"
```

**Storage Requirements:**

- Initial download: ~11GB
- Extracted size: ~15GB
- Additional working space: ~5-10GB recommended

##### `api_base: str`

Custom API endpoint for LLM providers (advanced usage).

**Example:**

```python
# For a local Biomni-R0 deployment
default_config.api_base = "http://localhost:30000/v1"

# For custom OpenAI-compatible endpoints
default_config.api_base = "https://your-endpoint.com/v1"
```

##### `max_retries: int`

Number of retry attempts for failed operations.

```python
default_config.max_retries = 3
```

#### Methods

##### `reset() -> None`

Reset all configuration values to system defaults.

```python
default_config.reset()
```

## BiomniEval1 Evaluation Framework

Framework for benchmarking agent performance on biomedical tasks.

### Initialization

```python
from biomni.eval import BiomniEval1

evaluator = BiomniEval1(
    dataset_path=None,  # Path to evaluation dataset
    metrics=None        # Evaluation metrics to compute
)

# Or with defaults
evaluator = BiomniEval1()
```

### Methods

#### `evaluate(task_type: str, instance_id: str, answer: str) -> float`

Evaluate an agent answer against ground truth.

```python
score = evaluator.evaluate(task_type, instance_id, answer)
```

**Parameters:**

- **`task_type`** (str) - Task category: `'crispr_design'`, `'scrna_analysis'`, `'gwas_interpretation'`, `'drug_admet'`, `'clinical_diagnosis'`
- **`instance_id`** (str) - Unique identifier for the task instance from the dataset
- **`answer`** (str) - Agent's answer to evaluate

**Returns:**

- **`float`** - Evaluation score (0.0 to 1.0)

**Example:**

```python
# Generate answer
result = agent.go("Design CRISPR screen for autophagy genes")

# Evaluate
score = evaluator.evaluate(
    task_type='crispr_design',
    instance_id='autophagy_001',
    answer=result
)
print(f"Score: {score:.2f}")
```

#### `load_dataset() -> dict`

Load the Biomni-Eval1 benchmark dataset.

```python
dataset = evaluator.load_dataset()
```

**Returns:**

- **`dict`** - Dictionary with task instances organized by task type

**Example:**

```python
dataset = evaluator.load_dataset()

for task_type, instances in dataset.items():
    print(f"{task_type}: {len(instances)} instances")
```

#### `run_benchmark(agent: A1, task_types: list = None) -> dict`

Run the full benchmark evaluation on an agent.

```python
results = evaluator.run_benchmark(
    agent=agent,
    task_types=None  # Specific task types, or None for all
)
```

**Returns:**

- **`dict`** - Results with scores, timing, and detailed metrics per task

**Example:**

```python
results = evaluator.run_benchmark(
    agent=agent,
    task_types=['crispr_design', 'scrna_analysis']
)

print(f"Overall accuracy: {results['mean_score']:.2f}")
print(f"Average time: {results['mean_time']:.1f}s")
```

## Database Query System

Biomni includes a retrieval-augmented generation (RAG) system for querying the biomedical knowledge base.

### Query Functions

#### `query_genes(query: str, top_k: int = 10) -> List[Dict]`

Query gene information from integrated databases.

```python
from biomni.database import query_genes

results = query_genes(
    query="genes involved in p53 pathway",
    top_k=20
)
```

**Parameters:**

- **`query`** (str) - Natural language or gene identifier query
- **`top_k`** (int) - Number of results to return

**Returns:** List of dictionaries containing:

- `gene_symbol`: Official gene symbol
- `gene_name`: Full gene name
- `description`: Functional description
- `pathways`: Associated biological pathways
- `go_terms`: Gene Ontology annotations
- `diseases`: Associated diseases
- `similarity_score`: Relevance score (0-1)

#### `query_proteins(query: str, top_k: int = 10) -> List[Dict]`

Query protein information from UniProt and other sources.

```python
from biomni.database import query_proteins

results = query_proteins(
    query="kinase proteins in cell cycle",
    top_k=15
)
```

**Returns:** List of dictionaries with protein metadata:

- `uniprot_id`: UniProt accession
- `protein_name`: Protein name
- `function`: Functional annotation
- `domains`: Protein domains
- `subcellular_location`: Cellular localization
- `similarity_score`: Relevance score

#### `query_drugs(query: str, top_k: int = 10) -> List[Dict]`

Query drug and compound information.

```python
from biomni.database import query_drugs

results = query_drugs(
    query="FDA approved cancer drugs targeting EGFR",
    top_k=10
)
```

**Returns:** Drug information including:

- `drug_name`: Common name
- `drugbank_id`: DrugBank identifier
- `indication`: Therapeutic indication
- `mechanism`: Mechanism of action
- `targets`: Molecular targets
- `approval_status`: Regulatory status
- `smiles`: Chemical structure (SMILES notation)

#### `query_diseases(query: str, top_k: int = 10) -> List[Dict]`

Query disease information from clinical databases.

```python
from biomni.database import query_diseases

results = query_diseases(
    query="autoimmune diseases affecting joints",
    top_k=10
)
```

**Returns:** Disease data:

- `disease_name`: Standard disease name
- `disease_id`: Ontology identifier
- `symptoms`: Clinical manifestations
- `associated_genes`: Genetic associations
- `prevalence`: Epidemiological data

#### `query_pathways(query: str, top_k: int = 10) -> List[Dict]`

Query biological pathways from KEGG, Reactome, and other sources.

```python
from biomni.database import query_pathways

results = query_pathways(
    query="immune response signaling pathways",
    top_k=15
)
```

**Returns:** Pathway information:

- `pathway_name`: Pathway name
- `pathway_id`: Database identifier
- `genes`: Genes in pathway
- `description`: Functional description
- `source`: Database source (KEGG, Reactome, etc.)

## Data Lake API

Access integrated biomedical databases programmatically.

### Gene Database Queries

```python
from biomni.data import GeneDB

gene_db = GeneDB(path='./biomni_data')

# Query gene information
gene_info = gene_db.get_gene('BRCA1')
# Returns: {'symbol': 'BRCA1', 'name': '...', 'function': '...', ...}

# Search genes by pathway
pathway_genes = gene_db.search_by_pathway('DNA repair')
# Returns: List of gene symbols in pathway

# Get gene interactions
interactions = gene_db.get_interactions('TP53')
# Returns: List of interacting genes with interaction types
```

### Protein Structure Access

```python
from biomni.data import ProteinDB

protein_db = ProteinDB(path='./biomni_data')

# Get AlphaFold structure
structure = protein_db.get_structure('P38398')  # BRCA1 UniProt ID
# Returns: Path to PDB file or structure object

# Search PDB database
pdb_entries = protein_db.search_pdb('kinase', resolution_max=2.5)
# Returns: List of PDB IDs matching criteria
```

### Clinical Data Access

```python
from biomni.data import ClinicalDB

clinical_db = ClinicalDB(path='./biomni_data')

# Query ClinVar variants
variant_info = clinical_db.get_variant('rs429358')  # APOE4 variant
# Returns: {'significance': '...', 'disease': '...', 'frequency': ...}

# Search OMIM for disease
disease_info = clinical_db.search_omim('Alzheimer')
# Returns: List of OMIM entries with gene associations
```

### Literature Search

```python
from biomni.data import LiteratureDB

lit_db = LiteratureDB(path='./biomni_data')

# Search PubMed abstracts
papers = lit_db.search('CRISPR screening cancer', max_results=10)
# Returns: List of paper dictionaries with titles, abstracts, PMIDs

# Get citations for a paper
citations = lit_db.get_citations('PMID:12345678')
# Returns: List of citing papers
```

## Data Structures

### TaskResult

Result object returned by complex agent operations.

```python
class TaskResult:
    success: bool          # Whether task completed successfully
    output: Any            # Task output (varies by task)
    code: str              # Generated code
    execution_time: float  # Execution time in seconds
    error: Optional[str]   # Error message if failed
    metadata: Dict         # Additional metadata
```

### BiomedicalEntity

Base class for biomedical entities in the knowledge base.

```python
class BiomedicalEntity:
    entity_id: str         # Unique identifier
    entity_type: str       # Type (gene, protein, drug, etc.)
    name: str              # Entity name
    description: str       # Description
    attributes: Dict       # Additional attributes
    references: List[str]  # Literature references
```

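For downstream code that wants a concrete version of the `TaskResult` shape, a self-contained dataclass mirror can look like this (illustrative; the real class lives in biomni and its exact definition may differ):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class TaskResult:
    """Stand-alone mirror of the TaskResult fields documented above."""
    success: bool
    output: Any
    code: str
    execution_time: float
    error: Optional[str] = None
    metadata: Dict = field(default_factory=dict)

r = TaskResult(success=True, output="42 clusters", code="...", execution_time=1.5)
print(r.success, r.error)  # True None
```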
## Utility Functions

### `download_data(path: str, force: bool = False) -> None`

Manually download or update the biomedical knowledge base.

```python
from biomni.utils import download_data

download_data(
    path='./data',
    force=True  # Force re-download
)
```

### `validate_environment() -> Dict[str, bool]`

Check if the environment is properly configured.

```python
from biomni.utils import validate_environment

status = validate_environment()
# Returns: {
#     'conda_env': True,
#     'api_keys': True,
#     'data_available': True,
#     'dependencies': True
# }
```

### `list_available_models() -> List[str]`

Get a list of available LLM models based on configured API keys.

```python
from biomni.utils import list_available_models

models = list_available_models()
# Returns: ['claude-sonnet-4-20250514', 'gpt-4o', ...]
```

## MCP Server Integration

Extend Biomni with external tools via the Model Context Protocol.

### Configuration Format

Create `.biomni/mcp_config.json`:

```json
{
  "servers": {
    "fda-drugs": {
      "command": "python",
      "args": ["-m", "mcp_server_fda"],
      "env": {
        "FDA_API_KEY": "${FDA_API_KEY}"
      }
    },
    "web-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    }
  }
}
```

### Using MCP Tools in Tasks

```python
# Initialize with MCP config
agent = A1(
    path='./data',
    llm='claude-sonnet-4-20250514',
    mcp_config='./.biomni/mcp_config.json'
)

# Agent can now use MCP tools automatically
result = agent.go("""
Search for FDA-approved drugs targeting EGFR.
Get their approval dates and indications.
""")
# Agent uses the fda-drugs MCP server automatically
```

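The `${VAR}` placeholders in the MCP configuration follow the common convention of being filled from the process environment. A sketch of that expansion (this behavior is an assumption about the config format, not a documented Biomni guarantee):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from os.environ ('' if unset)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)

os.environ["FDA_API_KEY"] = "demo-key"  # illustrative value, not a real key
print(expand_env("${FDA_API_KEY}"))  # demo-key
```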
## Error Handling

Common exceptions and handling strategies:

```python
from biomni.exceptions import (
    BiomniException,
    LLMError,
    CodeExecutionError,
    DataNotFoundError,
    TimeoutError,
)

try:
    result = agent.go("Complex biomedical task")
except TimeoutError:
    # Task exceeded timeout_seconds
    print("Task timed out. Consider increasing timeout.")
    default_config.timeout_seconds = 3600
except CodeExecutionError as e:
    # Generated code failed to execute
    print(f"Code execution error: {e}")
    # Review generated code in conversation history
except DataNotFoundError:
    # Required data not in data lake
    print("Data not found. Ensure data lake is downloaded.")
except LLMError as e:
    # LLM API error
    print(f"LLM error: {e}")
    # Check API keys and rate limits
```

## Best Practices

### Efficient Knowledge Retrieval

Pre-query databases for relevant context before complex tasks:

```python
from biomni.database import query_genes, query_pathways

# Gather relevant biological context first
genes = query_genes("cell cycle genes", top_k=50)
pathways = query_pathways("cell cycle regulation", top_k=20)

# Then execute the task with enriched context
agent.go(f"""
Analyze the cell cycle progression in this dataset.
Focus on these genes: {[g['gene_symbol'] for g in genes]}
Consider these pathways: {[p['pathway_name'] for p in pathways]}
""")
```

### Efficient API Usage

1. **Reuse agent instances** for related tasks to maintain context
2. **Set appropriate timeouts** based on task complexity
3. **Use caching** to avoid redundant data downloads
4. **Monitor iterations** to detect reasoning loops early

### Production Deployment

```python
from biomni.agent import A1
from biomni.config import default_config
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)

# Production settings
default_config.timeout_seconds = 3600
default_config.max_iterations = 100
default_config.sandbox_mode = True  # Enable sandboxing

# Initialize with error handling
try:
    agent = A1(path='/data/biomni', llm='claude-sonnet-4-20250514')
    result = agent.go(task_query)
    agent.save_conversation_history(f'reports/{task_id}.pdf')
except Exception as e:
    logging.error(f"Task {task_id} failed: {e}")
    # Handle failure appropriately
```

### Error Recovery

Implement robust error handling for production workflows:

```python
from biomni.exceptions import CodeExecutionError, TimeoutError

max_attempts = 3
for attempt in range(max_attempts):
    try:
        agent.go("complex biomedical task")
        break
    except TimeoutError:
        # Increase timeout and retry
        default_config.timeout_seconds *= 2
        print(f"Timeout, retrying with {default_config.timeout_seconds}s timeout")
    except CodeExecutionError as e:
        # Refine the task based on the error
        print(f"Execution failed: {e}, refining task...")
        # Optionally modify the task description
else:
    print("Task failed after max attempts")
```

### Memory Management

For large-scale analyses, manage memory explicitly by processing data in chunks:

```python
import gc

chunk_results = []
for chunk_id in range(num_chunks):  # num_chunks: number of dataset chunks
    agent.reset()  # Clear conversation context between chunks
    result = agent.go(f"Process data chunk {chunk_id} located at data/chunk_{chunk_id}.h5ad")
    chunk_results.append(result)

    # Save intermediate results
    agent.save_conversation_history(f"./reports/chunk_{chunk_id}.pdf")

    # Force garbage collection between chunks
    gc.collect()

# Combine chunk results (combine_results is a user-defined aggregation step)
final_result = combine_results(chunk_results)
```

### Reproducibility

Ensure reproducible analyses by:

1. **Fixing random seeds:**
```python
agent.go("Set random seed to 42 for all analyses, then perform clustering...")
```

2. **Logging configuration:**
```python
import json
from datetime import datetime

config_log = {
    'llm': default_config.llm,
    'timeout': default_config.timeout_seconds,
    'data_path': default_config.data_path,
    'timestamp': datetime.now().isoformat()
}
with open('config_log.json', 'w') as f:
    json.dump(config_log, f, indent=2)
```

3. **Saving execution traces:**
```python
# Always save detailed reports
agent.save_conversation_history('./reports/full_analysis.pdf')
```

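To make the logged configuration easy to match against a report later, it can be reduced to a short, stable fingerprint (a sketch; the fingerprint scheme is illustrative, not a Biomni feature):

```python
import hashlib
import json

def config_fingerprint(config: dict) -> str:
    """Short stable hash of a configuration dict, e.g. for tagging report filenames."""
    canonical = json.dumps(config, sort_keys=True)  # key order does not affect the hash
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

fp = config_fingerprint({"llm": "claude-sonnet-4-20250514", "timeout": 1200})
print(len(fp))  # 12
```

Embedding the fingerprint in both `config_log.json` and the report filename ties each report to the exact settings that produced it.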
## Performance Optimization

### Model Selection Strategy

Choose models based on task characteristics:

```python
# For exploratory, simple tasks
default_config.llm = "gpt-3.5-turbo"  # Fast, cost-effective

# For standard biomedical analyses
default_config.llm = "claude-sonnet-4-20250514"  # Recommended

# For complex reasoning and hypothesis generation
default_config.llm = "claude-opus-4-20250514"  # Highest quality

# For specialized biological reasoning
default_config.llm = "openai/biomni-r0"  # Requires local deployment
```

### Timeout Tuning

Set appropriate timeouts based on task complexity:

```python
# Quick queries and simple analyses
agent = A1(path='./data', timeout=300)

# Standard workflows
agent = A1(path='./data', timeout=1200)

# Full pipelines with ML training
agent = A1(path='./data', timeout=3600)
```

### Caching and Reuse

Reuse agent instances for multiple related tasks:

```python
# Create the agent once
agent = A1(path='./data', llm='claude-sonnet-4-20250514')

# Execute multiple related tasks
tasks = [
    "Load and QC the scRNA-seq dataset",
    "Perform clustering with resolution 0.5",
    "Identify marker genes for each cluster",
    "Annotate cell types based on markers"
]

for task in tasks:
    agent.go(task)

# Save the complete workflow
agent.save_conversation_history('./reports/full_workflow.pdf')
```