Add support for both geniml and gtars

This commit is contained in:
Timothy Kassis
2025-11-04 10:22:03 -08:00
parent 4ad4f9970f
commit 1225ddecf1
16 changed files with 2649 additions and 6 deletions

View File

@@ -7,7 +7,7 @@
},
"metadata": {
"description": "Claude scientific skills from K-Dense Inc",
"version": "1.69.0"
"version": "1.70.0"
},
"plugins": [
{
@@ -35,7 +35,9 @@
"./scientific-packages/esm",
"./scientific-packages/etetoolkit",
"./scientific-packages/flowio",
"./scientific-packages/geniml",
"./scientific-packages/gget",
"./scientific-packages/gtars",
"./scientific-packages/hypogenic",
"./scientific-packages/histolab",
"./scientific-packages/lamindb",

View File

@@ -1,7 +1,7 @@
# Claude Scientific Skills
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE.md)
[![Skills](https://img.shields.io/badge/Skills-114-brightgreen.svg)](#what-s-included)
[![Skills](https://img.shields.io/badge/Skills-116-brightgreen.svg)](#what-s-included)
[![Equivalent Tools](https://img.shields.io/badge/Equivalent_Tools-1002-blue.svg)](#what-s-included)
A comprehensive collection of ready-to-use scientific skills for Claude, curated by the K-Dense team.
@@ -47,7 +47,7 @@ These skills enable Claude to work with specialized scientific libraries and dat
| Category | Count | Description |
|----------|-------|-------------|
| 📊 **Scientific Databases** | 26 | PubMed, PubChem, UniProt, ChEMBL, COSMIC, DrugBank, AlphaFold DB, bioRxiv, and more |
| 🔬 **Scientific Packages** | 65 | BioPython, RDKit, PyTorch, Scanpy, scvi-tools, ESM, NetworkX, SimPy, pydicom, PyHealth, Data Commons, histolab, LaminDB, PathML, PyLabRobot, HypoGeniC, MarkItDown, PufferLib, Stable Baselines3, Vaex, Denario, and more |
| 🔬 **Scientific Packages** | 67 | BioPython, RDKit, PyTorch, Scanpy, scvi-tools, ESM, NetworkX, SimPy, pydicom, PyHealth, Data Commons, histolab, LaminDB, PathML, PyLabRobot, HypoGeniC, MarkItDown, PufferLib, Stable Baselines3, Vaex, Denario, geniml, gtars, and more |
| 🔌 **Scientific Integrations** | 7 | Benchling, DNAnexus, Opentrons, LabArchives, LatchBio, OMERO, Protocols.io |
| 🛠️ **Scientific Helpers** | 2 | Context initialization and resource detection utilities |
| 📚 **Documented Workflows** | 122 | Ready-to-use examples and reference materials |
@@ -313,15 +313,15 @@ results, conclusions and providing recommendations."
---
### 🔬 Scientific Packages
**64 specialized Python packages** organized by domain.
**67 specialized Python packages** organized by domain.
📖 **[Full Package Documentation →](docs/scientific-packages.md)**
<details>
<summary><strong>Bioinformatics & Genomics (12 packages)</strong></summary>
<summary><strong>Bioinformatics & Genomics (14 packages)</strong></summary>
- AnnData, Arboreto, BioPython, BioServices, Cellxgene Census
- deepTools, FlowIO, gget, pysam, PyDESeq2, Scanpy, scvi-tools
- deepTools, FlowIO, gget, geniml, gtars, pysam, PyDESeq2, Scanpy, scvi-tools
</details>

View File

@@ -7,6 +7,8 @@
- **BioServices** - Programmatic access to 40+ biological web services (KEGG, UniProt, ChEBI, ChEMBL)
- **Cellxgene Census** - Query and analyze large-scale single-cell RNA-seq data
- **gget** - Efficient genomic database queries (Ensembl, UniProt, NCBI, PDB, COSMIC)
- **geniml** - Genomic interval machine learning toolkit with unsupervised methods for building ML models on BED files. Core capabilities: Region2Vec (word2vec-style embeddings of genomic regions and region sets via tokenization and neural language modeling), BEDspace (joint embeddings of regions and metadata labels using StarSpace for cross-modal queries), scEmbed (Region2Vec applied to single-cell ATAC-seq, producing cell-level embeddings for clustering and annotation with scanpy integration), consensus peak building (four statistical methods, CC/CCF/ML/HMM, for creating reference universes from BED collections), plus utilities (BBClient for BED caching, BEDshift for context-preserving genomic randomization, embedding-quality evaluation metrics, Text2BedNN for neural search backends). Part of the BEDbase ecosystem; supports Python API and CLI workflows, pre-trained models on Hugging Face, and gtars-based tokenization. Use for region similarity search, dimension reduction of chromatin accessibility data, scATAC-seq clustering and cell-type annotation, metadata-aware genomic queries, universe construction, and any ML task requiring genomic region feature vectors
- **gtars** - High-performance Rust toolkit for genomic interval analysis with Python bindings (NumPy integration), command-line tools (gtars-cli), and a Rust library. Key modules: overlaprs (efficient overlap detection using IGD, Integrated Genome Database, indexing), uniwig (WIG/BigWig coverage tracks for ATAC-seq/ChIP-seq/RNA-seq), tokenizers (TreeTokenizer for converting genomic regions to ML tokens for deep learning models), refget (GA4GH-compliant sequence digests and reference sequence management), bbcache (BEDbase.org integration), scoring (fragment enrichment against reference datasets), and fragsplit (barcode- and cluster-based fragment splitting for single-cell genomics). Supports parallel processing, memory-mapped files, and streaming for large datasets, and serves as the foundation for the geniml genomic ML package. Ideal for genomic ML preprocessing, regulatory element analysis, variant annotation, chromatin accessibility profiling, and computational genomics workflows
- **pysam** - Read, write, and manipulate genomic data files (SAM/BAM/CRAM alignments, VCF/BCF variants, FASTA/FASTQ sequences) with pileup analysis, coverage calculations, and bioinformatics workflows
- **PyDESeq2** - Differential gene expression analysis for bulk RNA-seq data
- **Scanpy** - Single-cell RNA-seq analysis with clustering, marker genes, and UMAP/t-SNE visualization

View File

@@ -0,0 +1,312 @@
---
name: geniml
description: This skill should be used when working with genomic interval data (BED files) for machine learning tasks. Use for training region embeddings (Region2Vec, BEDspace), single-cell ATAC-seq analysis (scEmbed), building consensus peaks (universes), or any ML-based analysis of genomic regions. Applies to BED file collections, scATAC-seq data, chromatin accessibility datasets, and region-based genomic feature learning.
---
# Geniml: Genomic Interval Machine Learning
## Overview
Geniml is a Python package for building machine learning models on genomic interval data from BED files. It provides unsupervised methods for learning embeddings of genomic regions, single cells, and metadata labels, enabling similarity searches, clustering, and downstream ML tasks.
## Installation
Install geniml using uv:
```bash
uv pip install geniml
```
For ML dependencies (PyTorch, etc.):
```bash
uv pip install 'geniml[ml]'
```
Development version from GitHub:
```bash
uv pip install git+https://github.com/databio/geniml.git
```
## Core Capabilities
Geniml provides five primary capabilities, each detailed in dedicated reference files:
### 1. Region2Vec: Genomic Region Embeddings
Train unsupervised embeddings of genomic regions using word2vec-style learning.
**Use for:** Dimensionality reduction of BED files, region similarity analysis, feature vectors for downstream ML.
**Workflow:**
1. Tokenize BED files using a universe reference
2. Train Region2Vec model on tokens
3. Generate embeddings for regions
**Reference:** See `references/region2vec.md` for detailed workflow, parameters, and examples.
### 2. BEDspace: Joint Region and Metadata Embeddings
Train shared embeddings for region sets and metadata labels using StarSpace.
**Use for:** Metadata-aware searches, cross-modal queries (region→label or label→region), joint analysis of genomic content and experimental conditions.
**Workflow:**
1. Preprocess regions and metadata
2. Train BEDspace model
3. Compute distances
4. Query across regions and labels
**Reference:** See `references/bedspace.md` for detailed workflow, search types, and examples.
### 3. scEmbed: Single-Cell Chromatin Accessibility Embeddings
Train Region2Vec models on single-cell ATAC-seq data for cell-level embeddings.
**Use for:** scATAC-seq clustering, cell-type annotation, dimensionality reduction of single cells, integration with scanpy workflows.
**Workflow:**
1. Prepare AnnData with peak coordinates
2. Pre-tokenize cells
3. Train scEmbed model
4. Generate cell embeddings
5. Cluster and visualize with scanpy
**Reference:** See `references/scembed.md` for detailed workflow, parameters, and examples.
### 4. Consensus Peaks: Universe Building
Build reference peak sets (universes) from BED file collections using multiple statistical methods.
**Use for:** Creating tokenization references, standardizing regions across datasets, defining consensus features with statistical rigor.
**Workflow:**
1. Combine BED files
2. Generate coverage tracks
3. Build universe using CC, CCF, ML, or HMM method
**Methods:**
- **CC (Coverage Cutoff)**: Simple threshold-based
- **CCF (Coverage Cutoff Flexible)**: Confidence intervals for boundaries
- **ML (Maximum Likelihood)**: Probabilistic modeling of positions
- **HMM (Hidden Markov Model)**: Complex state modeling
**Reference:** See `references/consensus_peaks.md` for method comparison, parameters, and examples.
### 5. Utilities: Supporting Tools
Additional tools for caching, randomization, evaluation, and search.
**Available utilities:**
- **BBClient**: BED file caching for repeated access
- **BEDshift**: Randomization preserving genomic context
- **Evaluation**: Metrics for embedding quality (silhouette, Davies-Bouldin, etc.)
- **Tokenization**: Region tokenization utilities (hard, soft, universe-based)
- **Text2BedNN**: Neural search backends for genomic queries
**Reference:** See `references/utilities.md` for detailed usage of each utility.
## Common Workflows
### Basic Region Embedding Pipeline
```python
from geniml.tokenization import hard_tokenization
from geniml.region2vec import region2vec
from geniml.evaluation import evaluate_embeddings
# Step 1: Tokenize BED files
hard_tokenization(
src_folder='bed_files/',
dst_folder='tokens/',
universe_file='universe.bed',
p_value_threshold=1e-9
)
# Step 2: Train Region2Vec
region2vec(
token_folder='tokens/',
save_dir='model/',
num_shufflings=1000,
embedding_dim=100
)
# Step 3: Evaluate
metrics = evaluate_embeddings(
embeddings_file='model/embeddings.npy',
labels_file='metadata.csv'
)
```
### scATAC-seq Analysis Pipeline
```python
import scanpy as sc
from geniml.scembed import ScEmbed
from geniml.io import tokenize_cells
# Step 1: Load data
adata = sc.read_h5ad('scatac_data.h5ad')
# Step 2: Tokenize cells
tokenize_cells(
adata='scatac_data.h5ad',
universe_file='universe.bed',
output='tokens.parquet'
)
# Step 3: Train scEmbed
model = ScEmbed(embedding_dim=100)
model.train(dataset='tokens.parquet', epochs=100)
# Step 4: Generate embeddings
embeddings = model.encode(adata)
adata.obsm['scembed_X'] = embeddings
# Step 5: Cluster with scanpy
sc.pp.neighbors(adata, use_rep='scembed_X')
sc.tl.leiden(adata)
sc.tl.umap(adata)
```
### Universe Building and Evaluation
```bash
# Generate coverage
cat bed_files/*.bed > combined.bed
uniwig -m 25 combined.bed chrom.sizes coverage/
# Build universe with coverage cutoff
geniml universe build cc \
--coverage-folder coverage/ \
--output-file universe.bed \
--cutoff 5 \
--merge 100 \
--filter-size 50
# Evaluate universe quality
geniml universe evaluate \
--universe universe.bed \
--coverage-folder coverage/ \
--bed-folder bed_files/
```
## CLI Reference
Geniml provides command-line interfaces for major operations:
```bash
# Region2Vec training
geniml region2vec --token-folder tokens/ --save-dir model/ --num-shuffle 1000
# BEDspace preprocessing
geniml bedspace preprocess --input regions/ --metadata labels.csv --universe universe.bed
# BEDspace training
geniml bedspace train --input preprocessed.txt --output model/ --dim 100
# BEDspace search
geniml bedspace search -t r2l -d distances.pkl -q query.bed -n 10
# Universe building
geniml universe build cc --coverage-folder coverage/ --output universe.bed --cutoff 5
# BEDshift randomization
geniml bedshift --input peaks.bed --genome hg38 --preserve-chrom --iterations 100
```
## When to Use Which Tool
**Use Region2Vec when:**
- Working with bulk genomic data (ChIP-seq, ATAC-seq, etc.)
- Need unsupervised embeddings without metadata
- Comparing region sets across experiments
- Building features for downstream supervised learning
**Use BEDspace when:**
- Metadata labels available (cell types, tissues, conditions)
- Need to query regions by metadata or vice versa
- Want joint embedding space for regions and labels
- Building searchable genomic databases
**Use scEmbed when:**
- Analyzing single-cell ATAC-seq data
- Clustering cells by chromatin accessibility
- Annotating cell types from scATAC-seq
- Integration with scanpy is desired
**Use Universe Building when:**
- Need reference peak sets for tokenization
- Combining multiple experiments into consensus
- Want statistically rigorous region definitions
- Building standard references for a project
**Use Utilities when:**
- Need to cache remote BED files (BBClient)
- Generating null models for statistics (BEDshift)
- Evaluating embedding quality (Evaluation)
- Building search interfaces (Text2BedNN)
## Best Practices
### General Guidelines
- **Universe quality is critical**: Invest time in building comprehensive, well-constructed universes
- **Tokenization validation**: Check coverage (>80% ideal) before training
- **Parameter tuning**: Experiment with embedding dimensions, learning rates, and training epochs
- **Evaluation**: Always validate embeddings with multiple metrics and visualizations
- **Documentation**: Record parameters and random seeds for reproducibility
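To make the reproducibility point concrete, a minimal sketch of recording parameters and seeds to a JSON sidecar file (the file name and parameter set are illustrative, not a geniml convention):
```python
# Record run parameters and seeds alongside model outputs (illustrative names).
import json
import random

import numpy as np

params = {
    "universe": "universe.bed",
    "p_value_threshold": 1e-9,
    "embedding_dim": 100,
    "num_shufflings": 1000,
    "seed": 42,
}
random.seed(params["seed"])
np.random.seed(params["seed"])

with open("run_params.json", "w") as fh:
    json.dump(params, fh, indent=2)
```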
### Performance Considerations
- **Pre-tokenization**: For scEmbed, always pre-tokenize cells for faster training
- **Memory management**: Large datasets may require batch processing or downsampling
- **Computational resources**: ML/HMM universe methods are computationally intensive
- **Model caching**: Use BBClient to avoid repeated downloads
### Integration Patterns
- **With scanpy**: scEmbed embeddings integrate seamlessly as `adata.obsm` entries
- **With BEDbase**: Use BBClient for accessing remote BED repositories
- **With Hugging Face**: Export trained models for sharing and reproducibility
- **With R**: Use reticulate for R integration (see utilities reference)
## Related Projects
Geniml is part of the BEDbase ecosystem:
- **BEDbase**: Unified platform for genomic regions
- **BEDboss**: Processing pipeline for BED files
- **Gtars**: Genomic tools and utilities
- **BBClient**: Client for BEDbase repositories
## Additional Resources
- **Documentation**: https://docs.bedbase.org/geniml/
- **GitHub**: https://github.com/databio/geniml
- **Pre-trained models**: Available on Hugging Face (databio organization)
- **Publications**: Cited in documentation for methodological details
## Troubleshooting
**"Tokenization coverage too low":**
- Check universe quality and completeness
- Adjust p-value threshold (try 1e-6 instead of 1e-9)
- Ensure universe matches genome assembly
**"Training not converging":**
- Adjust learning rate (try 0.01-0.05 range)
- Increase training epochs
- Check data quality and preprocessing
**"Out of memory errors":**
- Reduce batch size for scEmbed
- Process data in chunks
- Use pre-tokenization for single-cell data
**"StarSpace not found" (BEDspace):**
- Install StarSpace separately: https://github.com/facebookresearch/StarSpace
- Set `--path-to-starspace` parameter correctly
For detailed troubleshooting and method-specific issues, consult the appropriate reference file.

View File

@@ -0,0 +1,127 @@
# BEDspace: Joint Region and Metadata Embeddings
## Overview
BEDspace applies the StarSpace model to genomic data, enabling simultaneous training of numerical embeddings for both region sets and their metadata labels in a shared low-dimensional space. This allows for rich queries across regions and metadata.
## When to Use
Use BEDspace when working with:
- Region sets with associated metadata (cell types, tissues, conditions)
- Search tasks requiring metadata-aware similarity
- Cross-modal queries (e.g., "find regions similar to label X")
- Joint analysis of genomic content and experimental conditions
## Workflow
BEDspace consists of four sequential operations:
### 1. Preprocess
Format genomic intervals and metadata for StarSpace training:
```bash
geniml bedspace preprocess \
--input /path/to/regions/ \
--metadata labels.csv \
--universe universe.bed \
--labels "cell_type,tissue" \
--output preprocessed.txt
```
**Required files:**
- **Input folder**: Directory containing BED files
- **Metadata CSV**: Must include `file_name` column matching BED filenames, plus metadata columns
- **Universe file**: Reference BED file for tokenization
- **Labels**: Comma-separated list of metadata columns to use
The preprocessing step adds `__label__` prefixes to metadata values and converts regions to a StarSpace-compatible format.
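For example, a single preprocessed training line might look like the following (illustrative; the actual region token naming depends on the universe used):
```
chr1_10000_10500 chr1_20000_20800 chr2_5000_5600 __label__T_cell __label__blood
```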
### 2. Train
Execute StarSpace model on preprocessed data:
```bash
geniml bedspace train \
--path-to-starspace /path/to/starspace \
--input preprocessed.txt \
--output model/ \
--dim 100 \
--epochs 50 \
--lr 0.05
```
**Key training parameters:**
- `--dim`: Embedding dimension (typical: 50-200)
- `--epochs`: Training epochs (typical: 20-100)
- `--lr`: Learning rate (typical: 0.01-0.1)
### 3. Distances
Compute distance metrics between region sets and metadata labels:
```bash
geniml bedspace distances \
--input model/ \
--metadata labels.csv \
--universe universe.bed \
--output distances.pkl
```
This step creates a distance matrix needed for similarity searches.
### 4. Search
Retrieve similar items across three scenarios:
**Region-to-Label (r2l)**: Query region set → retrieve similar metadata labels
```bash
geniml bedspace search -t r2l -d distances.pkl -q query_regions.bed -n 10
```
**Label-to-Region (l2r)**: Query metadata label → retrieve similar region sets
```bash
geniml bedspace search -t l2r -d distances.pkl -q "T_cell" -n 10
```
**Region-to-Region (r2r)**: Query region set → retrieve similar region sets
```bash
geniml bedspace search -t r2r -d distances.pkl -q query_regions.bed -n 10
```
The `-n` parameter controls the number of results returned.
## Python API
```python
from geniml.bedspace import BEDSpaceModel
# Load trained model
model = BEDSpaceModel.load('model/')
# Query similar items
results = model.search(
query="T_cell",
search_type="l2r",
top_k=10
)
```
## Best Practices
- **Metadata structure**: Ensure metadata CSV includes `file_name` column that exactly matches BED filenames (without path)
- **Label selection**: Choose informative metadata columns that capture biological variation of interest
- **Universe consistency**: Use the same universe file across preprocessing, distances, and any subsequent analyses
- **Validation**: Preprocess and check output format before investing in training
- **StarSpace installation**: Install StarSpace separately as it's an external dependency
## Output Interpretation
Search results return items ranked by similarity in the joint embedding space:
- **r2l**: Identifies metadata labels characterizing your query regions
- **l2r**: Finds region sets matching your metadata criteria
- **r2r**: Discovers region sets with similar genomic content
## Requirements
BEDspace requires StarSpace to be installed separately. Download from: https://github.com/facebookresearch/StarSpace

View File

@@ -0,0 +1,238 @@
# Consensus Peaks: Universe Building
## Overview
Geniml provides tools for building genomic "universes" — standardized reference sets of consensus peaks from collections of BED files. These universes represent genomic regions where analyzed datasets show significant coverage overlap, serving as reference vocabularies for tokenization and analysis.
## When to Use
Use consensus peak creation when:
- Building reference peak sets from multiple experiments
- Creating universe files for Region2Vec or BEDspace tokenization
- Standardizing genomic regions across a collection of datasets
- Defining regions of interest with statistical significance
## Workflow
### Step 1: Combine BED Files
Merge all BED files into a single combined file:
```bash
cat /path/to/bed/files/*.bed > combined_files.bed
```
### Step 2: Generate Coverage Tracks
Create bigWig coverage tracks using uniwig with a smoothing window:
```bash
uniwig -m 25 combined_files.bed chrom.sizes coverage/
```
**Parameters:**
- `-m 25`: Smoothing window size (25bp typical for chromatin accessibility)
- `chrom.sizes`: Chromosome sizes file for your genome
- `coverage/`: Output directory for bigWig files
The smoothing window helps reduce noise and creates more robust peak boundaries.
### Step 3: Build Universe
Use one of four methods to construct the consensus peaks:
## Universe-Building Methods
### 1. Coverage Cutoff (CC)
The simplest approach using a fixed coverage threshold:
```bash
geniml universe build cc \
--coverage-folder coverage/ \
--output-file universe_cc.bed \
--cutoff 5 \
--merge 100 \
--filter-size 50
```
**Parameters:**
- `--cutoff`: Coverage threshold (1 = union; file count = intersection)
- `--merge`: Distance for merging adjacent peaks (bp)
- `--filter-size`: Minimum peak size for inclusion (bp)
**Use when:** Simple threshold-based selection is sufficient
### 2. Coverage Cutoff Flexible (CCF)
Creates confidence intervals around likelihood cutoffs for boundaries and region cores:
```bash
geniml universe build ccf \
--coverage-folder coverage/ \
--output-file universe_ccf.bed \
--cutoff 5 \
--confidence 0.95 \
--merge 100 \
--filter-size 50
```
**Additional parameters:**
- `--confidence`: Confidence level for flexible boundaries (0-1)
**Use when:** Uncertainty in peak boundaries should be captured
### 3. Maximum Likelihood (ML)
Builds probabilistic models accounting for region start/end positions:
```bash
geniml universe build ml \
--coverage-folder coverage/ \
--output-file universe_ml.bed \
--merge 100 \
--filter-size 50 \
--model-type gaussian
```
**Parameters:**
- `--model-type`: Distribution for likelihood estimation (gaussian, poisson)
**Use when:** Statistical modeling of peak locations is important
### 4. Hidden Markov Model (HMM)
Models genomic regions as hidden states with coverage as emissions:
```bash
geniml universe build hmm \
--coverage-folder coverage/ \
--output-file universe_hmm.bed \
--states 3 \
--merge 100 \
--filter-size 50
```
**Parameters:**
- `--states`: Number of HMM hidden states (typically 2-5)
**Use when:** Complex patterns of genomic states should be captured
## Python API
```python
from geniml.universe import build_universe
# Build using coverage cutoff method
universe = build_universe(
coverage_folder='coverage/',
method='cc',
cutoff=5,
merge_distance=100,
min_size=50,
output_file='universe.bed'
)
```
## Method Comparison
| Method | Complexity | Flexibility | Computational Cost | Best For |
|--------|------------|-------------|-------------------|----------|
| CC | Low | Low | Low | Quick reference sets |
| CCF | Medium | Medium | Medium | Boundary uncertainty |
| ML | High | High | High | Statistical rigor |
| HMM | High | High | Very High | Complex patterns |
## Best Practices
### Choosing a Method
1. **Start with CC**: Quick and interpretable for initial exploration
2. **Use CCF**: When peak boundaries are uncertain or noisy
3. **Apply ML**: For publication-quality statistical analysis
4. **Deploy HMM**: When modeling complex chromatin states
### Parameter Selection
**Coverage cutoff:**
- `cutoff = 1`: Union of all peaks (most permissive)
- `cutoff = n_files`: Intersection (most stringent)
- `cutoff = 0.5 * n_files`: Moderate consensus (typical choice)
**Merge distance:**
- ATAC-seq: 100-200bp
- ChIP-seq (narrow peaks): 50-100bp
- ChIP-seq (broad peaks): 500-1000bp
**Filter size:**
- Minimum 30bp to avoid artifacts
- 50-100bp typical for most assays
- Larger for broad histone marks
### Quality Control
After building, assess universe quality:
```python
from geniml.evaluation import assess_universe
metrics = assess_universe(
universe_file='universe.bed',
coverage_folder='coverage/',
bed_files='bed_files/'
)
print(f"Number of regions: {metrics['n_regions']}")
print(f"Mean region size: {metrics['mean_size']:.1f}bp")
print(f"Coverage of input peaks: {metrics['coverage']:.1%}")
```
**Key metrics:**
- **Region count**: Should capture major features without excessive fragmentation
- **Size distribution**: Should match expected biology (e.g., ~500bp for ATAC-seq)
- **Input coverage**: Proportion of original peaks represented (typically >80%)
## Output Format
Consensus peaks are saved as BED files with three required columns:
```
chr1 1000 1500
chr1 2000 2800
chr2 500 1000
```
Additional columns may include confidence scores or state annotations depending on the method.
## Common Workflows
### For Region2Vec
1. Build universe using preferred method
2. Use universe as tokenization reference
3. Tokenize BED files
4. Train Region2Vec model
### For BEDspace
1. Build universe from all datasets
2. Use universe in preprocessing step
3. Train BEDspace with metadata
4. Query across regions and labels
### For scEmbed
1. Create universe from bulk or aggregated scATAC-seq
2. Use for cell tokenization
3. Train scEmbed model
4. Generate cell embeddings
## Troubleshooting
**Too few regions:** Lower cutoff threshold or reduce filter size
**Too many regions:** Raise cutoff threshold, increase merge distance, or increase filter size
**Noisy boundaries:** Use CCF or ML methods instead of CC
**Long computation:** Start with CC method for quick results, then refine with ML/HMM if needed

View File

@@ -0,0 +1,90 @@
# Region2Vec: Genomic Region Embeddings
## Overview
Region2Vec generates unsupervised embeddings of genomic regions and region sets from BED files. It maps genomic regions to a vocabulary, creates sentences through concatenation, and applies word2vec training to learn meaningful representations.
## When to Use
Use Region2Vec when working with:
- BED file collections requiring dimensionality reduction
- Genomic region similarity analysis
- Downstream ML tasks requiring region feature vectors
- Comparative analysis across multiple genomic datasets
## Workflow
### Step 1: Prepare Data
Gather BED files in a source folder. Optionally specify a file list (default uses all files in the directory). Prepare a universe file as the reference vocabulary for tokenization.
### Step 2: Tokenization
Run hard tokenization to convert genomic regions into tokens:
```python
from geniml.tokenization import hard_tokenization
src_folder = '/path/to/raw/bed/files'
dst_folder = '/path/to/tokenized_files'
universe_file = '/path/to/universe_file.bed'
hard_tokenization(src_folder, dst_folder, universe_file, 1e-9)
```
The final parameter (1e-9) is the p-value threshold for tokenization overlap significance.
### Step 3: Train Region2Vec Model
Execute Region2Vec training on the tokenized files:
```python
from geniml.region2vec import region2vec
region2vec(
token_folder=dst_folder,
save_dir='./region2vec_model',
num_shufflings=1000,
embedding_dim=100,
context_len=50,
window_size=5,
init_lr=0.025
)
```
## Key Parameters
| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `init_lr` | Initial learning rate | 0.01 - 0.05 |
| `window_size` | Context window size | 3 - 10 |
| `num_shufflings` | Number of shuffling iterations | 500 - 2000 |
| `embedding_dim` | Dimension of output embeddings | 50 - 300 |
| `context_len` | Context length for training | 30 - 100 |
## CLI Usage
```bash
geniml region2vec --token-folder /path/to/tokens \
--save-dir ./region2vec_model \
--num-shuffle 1000 \
--embed-dim 100 \
--context-len 50 \
--window-size 5 \
--init-lr 0.025
```
## Best Practices
- **Parameter tuning**: Tune `init_lr`, `window_size`, `num_shufflings`, and `embedding_dim` for optimal performance on your specific dataset
- **Universe file**: Use a comprehensive universe file that covers all regions of interest in your analysis
- **Validation**: Always validate tokenization output before proceeding to training
- **Resources**: Training can be computationally intensive; monitor memory usage with large datasets
## Output
The trained model saves embeddings that can be used for:
- Similarity searches across genomic regions (see the sketch after this list)
- Clustering region sets
- Feature vectors for downstream ML tasks
- Visualization via dimensionality reduction (t-SNE, UMAP)
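As a sketch of the similarity-search use case, assuming the trained model saved embeddings as a NumPy array at `region2vec_model/embeddings.npy` (the exact output layout may differ):
```python
# Cosine-similarity search over saved region embeddings (layout assumed).
import numpy as np

embeddings = np.load('region2vec_model/embeddings.npy')  # (n_regions, embedding_dim)

query = embeddings[0]  # use region 0 as the query
scores = embeddings @ query / (
    np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
)

# Top 5 most similar regions, excluding the query itself
top5 = np.argsort(scores)[::-1][1:6]
print(top5, scores[top5])
```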

View File

@@ -0,0 +1,197 @@
# scEmbed: Single-Cell Embedding Generation
## Overview
scEmbed trains Region2Vec models on single-cell ATAC-seq datasets to generate cell embeddings for clustering and analysis. It provides an unsupervised machine learning framework for representing and analyzing scATAC-seq data in low-dimensional space.
## When to Use
Use scEmbed when working with:
- Single-cell ATAC-seq (scATAC-seq) data requiring clustering
- Cell-type annotation tasks
- Dimensionality reduction for single-cell chromatin accessibility
- Integration with scanpy workflows for downstream analysis
## Workflow
### Step 1: Data Preparation
Input data must be in AnnData format with `.var` attributes containing `chr`, `start`, and `end` values for peaks.
**Starting from raw data** (barcodes.txt, peaks.bed, matrix.mtx):
```python
import scanpy as sc
import pandas as pd
import scipy.io
import anndata
# Load data
barcodes = pd.read_csv('barcodes.txt', header=None, names=['barcode'])
peaks = pd.read_csv('peaks.bed', sep='\t', header=None,
names=['chr', 'start', 'end'])
matrix = scipy.io.mmread('matrix.mtx').tocsr()
# Create AnnData
adata = anndata.AnnData(X=matrix.T, obs=barcodes, var=peaks)
adata.write('scatac_data.h5ad')
```
### Step 2: Pre-tokenization
Convert genomic regions into tokens using gtars utilities. This creates a parquet file with tokenized cells for faster training:
```python
from geniml.io import tokenize_cells
tokenize_cells(
adata='scatac_data.h5ad',
universe_file='universe.bed',
output='tokenized_cells.parquet'
)
```
**Benefits of pre-tokenization:**
- Faster training iterations
- Reduced memory requirements
- Reusable tokenized data for multiple training runs
### Step 3: Model Training
Train the scEmbed model using tokenized data:
```python
from geniml.scembed import ScEmbed
from geniml.region2vec import Region2VecDataset
# Load tokenized dataset
dataset = Region2VecDataset('tokenized_cells.parquet')
# Initialize and train model
model = ScEmbed(
embedding_dim=100,
window_size=5,
negative_samples=5
)
model.train(
dataset=dataset,
epochs=100,
batch_size=256,
learning_rate=0.025
)
# Save model
model.save('scembed_model/')
```
### Step 4: Generate Cell Embeddings
Use the trained model to generate embeddings for cells:
```python
from geniml.scembed import ScEmbed
# Load trained model
model = ScEmbed.from_pretrained('scembed_model/')
# Generate embeddings for AnnData object
embeddings = model.encode(adata)
# Add to AnnData for downstream analysis
adata.obsm['scembed_X'] = embeddings
```
### Step 5: Downstream Analysis
Integrate with scanpy for clustering and visualization:
```python
import scanpy as sc
# Use scEmbed embeddings for neighborhood graph
sc.pp.neighbors(adata, use_rep='scembed_X')
# Cluster cells
sc.tl.leiden(adata, resolution=0.5)
# Compute UMAP for visualization
sc.tl.umap(adata)
# Plot results
sc.pl.umap(adata, color='leiden')
```
## Key Parameters
### Training Parameters
| Parameter | Description | Typical Range |
|-----------|-------------|---------------|
| `embedding_dim` | Dimension of cell embeddings | 50 - 200 |
| `window_size` | Context window for training | 3 - 10 |
| `negative_samples` | Number of negative samples | 5 - 20 |
| `epochs` | Training epochs | 50 - 200 |
| `batch_size` | Training batch size | 128 - 512 |
| `learning_rate` | Initial learning rate | 0.01 - 0.05 |
### Tokenization Parameters
- **Universe file**: Reference BED file defining the genomic vocabulary
- **Overlap threshold**: Significance threshold for peak-universe matching (typically a p-value of 1e-9)
## Pre-trained Models
Pre-trained scEmbed models are available on Hugging Face for common reference datasets. Load them using:
```python
from geniml.scembed import ScEmbed
# Load pre-trained model
model = ScEmbed.from_pretrained('databio/scembed-pbmc-10k')
# Generate embeddings
embeddings = model.encode(adata)
```
## Best Practices
- **Data quality**: Use filtered peak-barcode matrices, not raw counts
- **Pre-tokenization**: Always pre-tokenize to improve training efficiency
- **Parameter tuning**: Adjust `embedding_dim` and training epochs based on dataset size
- **Validation**: Use known cell-type markers to validate clustering quality
- **Integration**: Combine with scanpy for comprehensive single-cell analysis
- **Model sharing**: Export trained models to Hugging Face for reproducibility
## Example Dataset
The 10x Genomics PBMC 10k dataset (10,000 peripheral blood mononuclear cells) serves as a standard benchmark:
- Contains diverse immune cell types
- Well-characterized cell populations
- Available from 10x Genomics website
## Cell-Type Annotation
After clustering, annotate cell types using k-nearest neighbors (KNN) with reference datasets:
```python
from geniml.scembed import annotate_celltypes
# Annotate using reference
annotations = annotate_celltypes(
query_adata=adata,
reference_adata=reference,
embedding_key='scembed_X',
k=10
)
adata.obs['cell_type'] = annotations
```
## Output
scEmbed produces:
- Low-dimensional cell embeddings (stored in `adata.obsm`)
- Trained model files for reuse
- Compatible format for scanpy downstream analysis
- Optional export to Hugging Face for sharing

View File

@@ -0,0 +1,385 @@
# Geniml Utilities and Additional Tools
## BBClient: BED File Caching
### Overview
BBClient provides efficient caching of BED files from remote sources, enabling faster repeated access and integration with R workflows.
### When to Use
Use BBClient when:
- Repeatedly accessing BED files from remote databases
- Working with BEDbase repositories
- Integrating genomic data with R pipelines
- Need local caching for performance
### Python Usage
```python
from geniml.bbclient import BBClient
# Initialize client
client = BBClient(cache_folder='~/.bedcache')
# Fetch and cache BED file
bed_file = client.load_bed(bed_id='GSM123456')
# Access cached file
regions = client.get_regions('GSM123456')
```
### R Integration
```r
library(reticulate)
geniml <- import("geniml.bbclient")
# Initialize client
client <- geniml$BBClient(cache_folder='~/.bedcache')
# Load BED file
bed_file <- client$load_bed(bed_id='GSM123456')
```
### Best Practices
- Configure cache directory with sufficient storage
- Use consistent cache locations across analyses
- Clear cache periodically to remove unused files
---
## BEDshift: BED File Randomization
### Overview
BEDshift provides tools for randomizing BED files while preserving genomic context, essential for generating null distributions and statistical testing.
### When to Use
Use BEDshift when:
- Creating null models for statistical testing
- Generating control datasets
- Assessing significance of genomic overlaps
- Benchmarking analysis methods
### Usage
```python
from geniml.bedshift import bedshift
# Randomize BED file preserving chromosome distribution
randomized = bedshift(
input_bed='peaks.bed',
genome='hg38',
preserve_chrom=True,
n_iterations=100
)
```
### CLI Usage
```bash
geniml bedshift \
--input peaks.bed \
--genome hg38 \
--preserve-chrom \
--iterations 100 \
--output randomized_peaks.bed
```
### Randomization Strategies
**Preserve chromosome distribution:**
```python
bedshift(input_bed, genome, preserve_chrom=True)
```
Maintains regions on same chromosomes as original.
**Preserve distance distribution:**
```python
bedshift(input_bed, genome, preserve_distance=True)
```
Maintains inter-region distances.
**Preserve region sizes:**
```python
bedshift(input_bed, genome, preserve_size=True)
```
Keeps original region lengths.
### Best Practices
- Choose randomization strategy matching null hypothesis
- Generate multiple iterations for robust statistics (see the sketch after this list)
- Validate randomized output maintains desired properties
- Document randomization parameters for reproducibility
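A sketch of the multiple-iterations pattern, using a hypothetical `count_overlaps()` helper as a stand-in for whatever overlap metric the analysis uses (geniml does not necessarily provide such a helper):
```python
# Build a null distribution of overlap counts via repeated randomization.
from geniml.bedshift import bedshift

def count_overlaps(bed_a, bed_b):
    # Hypothetical helper: return the number of regions in bed_a that
    # overlap bed_b (e.g., via gtars IGD queries or bedtools).
    ...

observed = count_overlaps('peaks.bed', 'reference.bed')

null_counts = []
for _ in range(100):
    randomized = bedshift(
        input_bed='peaks.bed',
        genome='hg38',
        preserve_chrom=True,
        n_iterations=1,
    )
    null_counts.append(count_overlaps(randomized, 'reference.bed'))

# Empirical p-value: fraction of null overlap counts >= the observed count
p_value = sum(c >= observed for c in null_counts) / len(null_counts)
print(f"Empirical p-value: {p_value:.3f}")
```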
---
## Evaluation: Model Assessment Tools
### Overview
Geniml provides evaluation utilities for assessing embedding quality and model performance.
### When to Use
Use evaluation tools when:
- Validating trained embeddings
- Comparing different models
- Assessing clustering quality
- Publishing model results
### Embedding Evaluation
```python
from geniml.evaluation import evaluate_embeddings
# Evaluate Region2Vec embeddings
metrics = evaluate_embeddings(
embeddings_file='region2vec_model/embeddings.npy',
labels_file='metadata.csv',
metrics=['silhouette', 'davies_bouldin', 'calinski_harabasz']
)
print(f"Silhouette score: {metrics['silhouette']:.3f}")
print(f"Davies-Bouldin index: {metrics['davies_bouldin']:.3f}")
```
### Clustering Metrics
**Silhouette score:** Measures cluster cohesion and separation (-1 to 1, higher better)
**Davies-Bouldin index:** Average similarity between clusters (≥0, lower better)
**Calinski-Harabasz score:** Ratio of between/within cluster dispersion (higher better)
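The same metrics can be computed directly with scikit-learn as a cross-check, assuming embeddings as a NumPy array and one cluster label per row:
```python
# Cross-check clustering metrics with scikit-learn (inputs assumed).
import numpy as np
from sklearn.metrics import (
    silhouette_score,
    davies_bouldin_score,
    calinski_harabasz_score,
)

X = np.load('region2vec_model/embeddings.npy')
labels = np.loadtxt('cluster_labels.txt', dtype=int)  # one label per row of X

print(f"Silhouette: {silhouette_score(X, labels):.3f}")                 # higher better
print(f"Davies-Bouldin: {davies_bouldin_score(X, labels):.3f}")         # lower better
print(f"Calinski-Harabasz: {calinski_harabasz_score(X, labels):.1f}")   # higher better
```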
### scEmbed Cell-Type Annotation Evaluation
```python
from geniml.evaluation import evaluate_annotation
# Evaluate cell-type predictions
results = evaluate_annotation(
predicted=adata.obs['predicted_celltype'],
true=adata.obs['true_celltype'],
metrics=['accuracy', 'f1', 'confusion_matrix']
)
print(f"Accuracy: {results['accuracy']:.1%}")
print(f"F1 score: {results['f1']:.3f}")
```
### Best Practices
- Use multiple complementary metrics
- Compare against baseline models
- Report metrics on held-out test data
- Visualize embeddings (UMAP/t-SNE) alongside metrics
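For the visualization step, a minimal sketch using the umap-learn and matplotlib packages (assumed installed; file names are illustrative):
```python
# Plot embeddings in 2D with UMAP, colored by label (inputs assumed).
import numpy as np
import umap
import matplotlib.pyplot as plt

embeddings = np.load('region2vec_model/embeddings.npy')
labels = np.loadtxt('labels.txt', dtype=str)  # one label per embedding row

coords = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

for label in np.unique(labels):
    mask = labels == label
    plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=label)
plt.legend(markerscale=3)
plt.savefig('embeddings_umap.png', dpi=150)
```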
---
## Tokenization: Region Tokenization Utilities
### Overview
Tokenization converts genomic regions into discrete tokens using a reference universe, enabling word2vec-style training.
### When to Use
Tokenization is a required preprocessing step for:
- Region2Vec training
- scEmbed model training
- Any embedding method requiring discrete tokens
### Hard Tokenization
Strict overlap-based tokenization:
```python
from geniml.tokenization import hard_tokenization
hard_tokenization(
src_folder='bed_files/',
dst_folder='tokenized/',
universe_file='universe.bed',
p_value_threshold=1e-9
)
```
**Parameters:**
- `p_value_threshold`: Significance level for overlap (typically 1e-9 or 1e-6)
### Soft Tokenization
Probabilistic tokenization allowing partial matches:
```python
from geniml.tokenization import soft_tokenization
soft_tokenization(
src_folder='bed_files/',
dst_folder='tokenized/',
universe_file='universe.bed',
overlap_threshold=0.5
)
```
**Parameters:**
- `overlap_threshold`: Minimum overlap fraction (0-1)
### Universe-Based Tokenization
Map regions to universe tokens with custom parameters:
```python
from geniml.tokenization import universe_tokenization
universe_tokenization(
bed_file='peaks.bed',
universe_file='universe.bed',
output_file='tokens.txt',
method='hard',
threshold=1e-9
)
```
### Best Practices
- **Universe quality**: Use comprehensive, well-constructed universes
- **Threshold selection**: More stringent (lower p-value) for higher confidence
- **Validation**: Check tokenization coverage (what % of regions tokenized)
- **Consistency**: Use same universe and parameters across related analyses
### Tokenization Coverage
Check how well regions tokenize:
```python
from geniml.tokenization import check_coverage
coverage = check_coverage(
bed_file='peaks.bed',
universe_file='universe.bed',
threshold=1e-9
)
print(f"Tokenization coverage: {coverage:.1%}")
```
Aim for >80% coverage for reliable training.
---
## Text2BedNN: Search Backend
### Overview
Text2BedNN creates neural network-based search backends for querying genomic regions using natural language or metadata.
### When to Use
Use Text2BedNN when:
- Building search interfaces for genomic databases
- Enabling natural language queries over BED files
- Creating metadata-aware search systems
- Deploying interactive genomic search applications
### Workflow
**Step 1: Prepare embeddings**
Train BEDspace or Region2Vec model with metadata.
**Step 2: Build search index**
```python
from geniml.search import build_search_index
build_search_index(
embeddings_file='bedspace_model/embeddings.npy',
metadata_file='metadata.csv',
output_dir='search_backend/'
)
```
**Step 3: Query the index**
```python
from geniml.search import SearchBackend
backend = SearchBackend.load('search_backend/')
# Natural language query
results = backend.query(
text="T cell regulatory regions",
top_k=10
)
# Metadata query
results = backend.query(
metadata={'cell_type': 'T_cell', 'tissue': 'blood'},
top_k=10
)
```
### Best Practices
- Train embeddings with rich metadata for better search
- Index large collections for comprehensive coverage
- Validate search relevance on known queries
- Deploy with API for interactive applications
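For the deployment point, a minimal sketch wrapping the backend in a FastAPI service (FastAPI and uvicorn are assumed installed; the `SearchBackend` API follows the example above):
```python
# Serve the search backend over HTTP (minimal sketch, inputs assumed).
from fastapi import FastAPI
from geniml.search import SearchBackend

app = FastAPI()
backend = SearchBackend.load('search_backend/')

@app.get("/search")
def search(q: str, top_k: int = 10):
    # Natural-language query against the joint embedding index
    return backend.query(text=q, top_k=top_k)

# Run with: uvicorn search_app:app --port 8000
```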
---
## Additional Tools
### I/O Utilities
```python
from geniml.io import read_bed, write_bed, load_universe
# Read BED file
regions = read_bed('peaks.bed')
# Write BED file
write_bed(regions, 'output.bed')
# Load universe
universe = load_universe('universe.bed')
```
### Model Utilities
```python
from geniml.models import save_model, load_model
# Save trained model
save_model(model, 'my_model/')
# Load model
model = load_model('my_model/')
```
### Common Patterns
**Pipeline workflow:**
```python
from geniml.universe import build_universe
from geniml.tokenization import hard_tokenization
from geniml.region2vec import region2vec
from geniml.evaluation import evaluate_embeddings

# 1. Build universe
universe = build_universe(coverage_folder='coverage/', method='cc', cutoff=5)
# 2. Tokenize
hard_tokenization(src_folder='beds/', dst_folder='tokens/',
universe_file='universe.bed', p_value_threshold=1e-9)
# 3. Train embeddings
region2vec(token_folder='tokens/', save_dir='model/', num_shufflings=1000)
# 4. Evaluate
metrics = evaluate_embeddings(embeddings_file='model/embeddings.npy',
labels_file='metadata.csv')
```
This modular design allows flexible composition of geniml tools for diverse genomic ML workflows.

View File

@@ -0,0 +1,279 @@
---
name: gtars
description: High-performance toolkit for genomic interval analysis in Rust with Python bindings. Use when working with genomic regions, BED files, coverage tracks, overlap detection, tokenization for ML models, or fragment analysis in computational genomics and machine learning applications.
---
# Gtars: Genomic Tools and Algorithms in Rust
## Overview
Gtars is a high-performance Rust toolkit for manipulating, analyzing, and processing genomic interval data. It provides specialized tools for overlap detection, coverage analysis, tokenization for machine learning, and reference sequence management.
Use this skill when working with:
- Genomic interval files (BED format)
- Overlap detection between genomic regions
- Coverage track generation (WIG, BigWig)
- Genomic ML preprocessing and tokenization
- Fragment analysis in single-cell genomics
- Reference sequence retrieval and validation
## Installation
### Python Installation
Install gtars Python bindings:
```bash
uv pip install gtars
```
### CLI Installation
Install command-line tools (requires Rust/Cargo):
```bash
# Install with all features
cargo install gtars-cli --features "uniwig overlaprs igd bbcache scoring fragsplit"
# Or install specific features only
cargo install gtars-cli --features "uniwig overlaprs"
```
### Rust Library
Add to Cargo.toml for Rust projects:
```toml
[dependencies]
gtars = { version = "0.1", features = ["tokenizers", "overlaprs"] }
```
## Core Capabilities
Gtars is organized into specialized modules, each focused on specific genomic analysis tasks:
### 1. Overlap Detection and IGD Indexing
Efficiently detect overlaps between genomic intervals using the Integrated Genome Database (IGD) data structure.
**When to use:**
- Finding overlapping regulatory elements
- Variant annotation
- Comparing ChIP-seq peaks
- Identifying shared genomic features
**Quick example:**
```python
import gtars
# Build IGD index and query overlaps
igd = gtars.igd.build_index("regions.bed")
overlaps = igd.query("chr1", 1000, 2000)
```
See `references/overlap.md` for comprehensive overlap detection documentation.
### 2. Coverage Track Generation
Generate coverage tracks from sequencing data with the uniwig module.
**When to use:**
- ATAC-seq accessibility profiles
- ChIP-seq coverage visualization
- RNA-seq read coverage
- Differential coverage analysis
**Quick example:**
```bash
# Generate BigWig coverage track
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
```
See `references/coverage.md` for detailed coverage analysis workflows.
### 3. Genomic Tokenization
Convert genomic regions into discrete tokens for machine learning applications, particularly for deep learning models on genomic data.
**When to use:**
- Preprocessing for genomic ML models
- Integration with geniml library
- Creating position encodings
- Training transformer models on genomic sequences
**Quick example:**
```python
from gtars.tokenizers import TreeTokenizer
tokenizer = TreeTokenizer.from_bed_file("training_regions.bed")
token = tokenizer.tokenize("chr1", 1000, 2000)
```
See `references/tokenizers.md` for tokenization documentation.
### 4. Reference Sequence Management
Handle reference genome sequences and compute digests following the GA4GH refget protocol.
**When to use:**
- Validating reference genome integrity
- Extracting specific genomic sequences
- Computing sequence digests
- Cross-reference comparisons
**Quick example:**
```python
# Load reference and extract sequences
store = gtars.RefgetStore.from_fasta("hg38.fa")
sequence = store.get_subsequence("chr1", 1000, 2000)
```
See `references/refget.md` for reference sequence operations.
### 5. Fragment Processing
Split and analyze fragment files, particularly useful for single-cell genomics data.
**When to use:**
- Processing single-cell ATAC-seq data
- Splitting fragments by cell barcodes
- Cluster-based fragment analysis
- Fragment quality control
**Quick example:**
```bash
# Split fragments by clusters
gtars fragsplit cluster-split --input fragments.tsv --clusters clusters.txt --output-dir ./by_cluster/
```
See `references/cli.md` for fragment processing commands.
### 6. Fragment Scoring
Score fragment overlaps against reference datasets.
**When to use:**
- Evaluating fragment enrichment
- Comparing experimental data to references
- Quality metrics computation
- Batch scoring across samples
**Quick example:**
```bash
# Score fragments against reference
gtars scoring score --fragments fragments.bed --reference reference.bed --output scores.txt
```
## Common Workflows
### Workflow 1: Peak Overlap Analysis
Identify overlapping genomic features:
```python
import gtars
# Load two region sets
peaks = gtars.RegionSet.from_bed("chip_peaks.bed")
promoters = gtars.RegionSet.from_bed("promoters.bed")
# Find overlaps
overlapping_peaks = peaks.filter_overlapping(promoters)
# Export results
overlapping_peaks.to_bed("peaks_in_promoters.bed")
```
### Workflow 2: Coverage Track Pipeline
Generate coverage tracks for visualization:
```bash
# Step 1: Generate coverage
gtars uniwig generate --input atac_fragments.bed --output coverage.wig --resolution 10
# Step 2: Convert to BigWig for genome browsers
gtars uniwig generate --input atac_fragments.bed --output coverage.bw --format bigwig
```
### Workflow 3: ML Preprocessing
Prepare genomic data for machine learning:
```python
from gtars.tokenizers import TreeTokenizer
import gtars
# Step 1: Load training regions
regions = gtars.RegionSet.from_bed("training_peaks.bed")
# Step 2: Create tokenizer
tokenizer = TreeTokenizer.from_bed_file("training_peaks.bed")
# Step 3: Tokenize regions
tokens = [tokenizer.tokenize(r.chromosome, r.start, r.end) for r in regions]
# Step 4: Use tokens in ML pipeline
# (integrate with geniml or custom models)
```
## Python vs CLI Usage
**Use Python API when:**
- Integrating with analysis pipelines
- Need programmatic control
- Working with NumPy/Pandas
- Building custom workflows
**Use CLI when:**
- Quick one-off analyses
- Shell scripting
- Batch processing files
- Prototyping workflows
## Reference Documentation
Comprehensive module documentation:
- **`references/python-api.md`** - Complete Python API reference with RegionSet operations, NumPy integration, and data export
- **`references/overlap.md`** - IGD indexing, overlap detection, and set operations
- **`references/coverage.md`** - Coverage track generation with uniwig
- **`references/tokenizers.md`** - Genomic tokenization for ML applications
- **`references/refget.md`** - Reference sequence management and digests
- **`references/cli.md`** - Command-line interface complete reference
## Integration with geniml
Gtars serves as the foundation for the geniml Python package, providing core genomic interval operations for machine learning workflows. When working on geniml-related tasks, use gtars for data preprocessing and tokenization.
## Performance Characteristics
- **Native Rust performance**: Fast execution with low memory overhead
- **Parallel processing**: Multi-threaded operations for large datasets
- **Memory efficiency**: Streaming and memory-mapped file support
- **Zero-copy operations**: NumPy integration with minimal data copying
## Data Formats
Gtars works with standard genomic formats:
- **BED**: Genomic intervals (3-column or extended)
- **WIG/BigWig**: Coverage tracks
- **FASTA**: Reference sequences
- **Fragment TSV**: Single-cell fragment files with barcodes
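As a quick illustration of the fragment format, a sketch inspecting a fragment TSV with pandas (column names are illustrative; the layout matches the example in `references/cli.md`):
```python
# Inspect a single-cell fragment TSV (chrom, start, end, barcode).
import pandas as pd

frags = pd.read_csv(
    "fragments.tsv",
    sep="\t",
    header=None,
    names=["chrom", "start", "end", "barcode"],
)

# Fragments per cell barcode: a basic QC summary
print(frags.groupby("barcode").size().describe())
```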
## Error Handling and Debugging
Enable verbose logging for troubleshooting:
```python
import gtars
# Enable debug logging
gtars.set_log_level("DEBUG")
```
```bash
# CLI verbose mode
gtars --verbose <command>
```

View File

@@ -0,0 +1,222 @@
# Command-Line Interface
Gtars provides a comprehensive CLI for genomic interval analysis directly from the terminal.
## Installation
```bash
# Install with all features
cargo install gtars-cli --features "uniwig overlaprs igd bbcache scoring fragsplit"
# Install specific features only
cargo install gtars-cli --features "uniwig overlaprs"
```
## Global Options
```bash
# Display help
gtars --help
# Show version
gtars --version
# Verbose output
gtars --verbose <command>
# Quiet mode
gtars --quiet <command>
```
## IGD Commands
Build and query IGD indexes for overlap detection:
```bash
# Build IGD index
gtars igd build --input regions.bed --output regions.igd
# Query single region
gtars igd query --index regions.igd --region "chr1:1000-2000"
# Query from file
gtars igd query --index regions.igd --query-file queries.bed --output results.bed
# Count overlaps
gtars igd count --index regions.igd --query-file queries.bed
```
## Overlap Commands
Compute overlaps between genomic region sets:
```bash
# Find overlapping regions
gtars overlaprs overlap --set-a regions_a.bed --set-b regions_b.bed --output overlaps.bed
# Count overlaps
gtars overlaprs count --set-a regions_a.bed --set-b regions_b.bed
# Filter regions by overlap
gtars overlaprs filter --input regions.bed --filter overlapping.bed --output filtered.bed
# Subtract regions
gtars overlaprs subtract --set-a regions_a.bed --set-b regions_b.bed --output difference.bed
```
## Uniwig Commands
Generate coverage tracks from genomic intervals:
```bash
# Generate coverage track
gtars uniwig generate --input fragments.bed --output coverage.wig
# Specify resolution
gtars uniwig generate --input fragments.bed --output coverage.wig --resolution 10
# Generate BigWig
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
# Strand-specific coverage
gtars uniwig generate --input fragments.bed --output forward.wig --strand +
```
## BBCache Commands
Cache and manage BED files from BEDbase.org:
```bash
# Cache BED file from bedbase
gtars bbcache fetch --id <bedbase_id> --output cached.bed
# List cached files
gtars bbcache list
# Clear cache
gtars bbcache clear
# Update cache
gtars bbcache update
```
## Scoring Commands
Score fragment overlaps against reference datasets:
```bash
# Score fragments
gtars scoring score --fragments fragments.bed --reference reference.bed --output scores.txt
# Batch scoring
gtars scoring batch --fragments-dir ./fragments/ --reference reference.bed --output-dir ./scores/
# Score with weights
gtars scoring score --fragments fragments.bed --reference reference.bed --weights weights.txt --output scores.txt
```
## FragSplit Commands
Split fragment files by cell barcodes or clusters:
```bash
# Split by barcode
gtars fragsplit split --input fragments.tsv --barcodes barcodes.txt --output-dir ./split/
# Split by clusters
gtars fragsplit cluster-split --input fragments.tsv --clusters clusters.txt --output-dir ./clustered/
# Filter fragments
gtars fragsplit filter --input fragments.tsv --min-fragments 100 --output filtered.tsv
```
## Common Workflows
### Workflow 1: Overlap Analysis Pipeline
```bash
# Step 1: Build IGD index for reference
gtars igd build --input reference_regions.bed --output reference.igd
# Step 2: Query with experimental data
gtars igd query --index reference.igd --query-file experimental.bed --output overlaps.bed
# Step 3: Generate statistics
gtars overlaprs count --set-a experimental.bed --set-b reference_regions.bed
```
### Workflow 2: Coverage Track Generation
```bash
# Step 1: Generate coverage
gtars uniwig generate --input fragments.bed --output coverage.wig --resolution 10
# Step 2: Convert to BigWig
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
```
### Workflow 3: Fragment Processing
```bash
# Step 1: Filter fragments
gtars fragsplit filter --input raw_fragments.tsv --min-fragments 100 --output filtered.tsv
# Step 2: Split by clusters
gtars fragsplit cluster-split --input filtered.tsv --clusters clusters.txt --output-dir ./by_cluster/
# Step 3: Score against reference
gtars scoring batch --fragments-dir ./by_cluster/ --reference reference.bed --output-dir ./scores/
```
## Input/Output Formats
### BED Format
Standard 3-column or extended BED format:
```
chr1 1000 2000
chr1 3000 4000
chr2 5000 6000
```
### Fragment Format (TSV)
Tab-separated format for single-cell fragments:
```
chr1 1000 2000 BARCODE1
chr1 3000 4000 BARCODE2
chr2 5000 6000 BARCODE1
```
### WIG Format
Wiggle format for coverage tracks:
```
fixedStep chrom=chr1 start=1000 step=10
12
15
18
```
## Performance Options
```bash
# Set thread count
gtars --threads 8 <command>
# Memory limit
gtars --memory-limit 4G <command>
# Buffer size
gtars --buffer-size 10000 <command>
```
## Error Handling
```bash
# Continue on errors
gtars --continue-on-error <command>
# Strict mode (fail on warnings)
gtars --strict <command>
# Log to file
gtars --log-file output.log <command>
```

View File

@@ -0,0 +1,172 @@
# Coverage Analysis with Uniwig
The uniwig module generates coverage tracks from sequencing data, providing efficient conversion of genomic intervals to coverage profiles.
## Coverage Track Generation
Create coverage tracks from BED files:
```python
import gtars
# Generate coverage from BED file
coverage = gtars.uniwig.coverage_from_bed("fragments.bed")
# Generate coverage with specific resolution
coverage = gtars.uniwig.coverage_from_bed("fragments.bed", resolution=10)
# Generate strand-specific coverage
fwd_coverage = gtars.uniwig.coverage_from_bed("fragments.bed", strand="+")
rev_coverage = gtars.uniwig.coverage_from_bed("fragments.bed", strand="-")
```
## CLI Usage
Generate coverage tracks from command line:
```bash
# Generate coverage track
gtars uniwig generate --input fragments.bed --output coverage.wig
# Specify resolution
gtars uniwig generate --input fragments.bed --output coverage.wig --resolution 10
# Generate BigWig format
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
# Strand-specific coverage
gtars uniwig generate --input fragments.bed --output forward.wig --strand +
gtars uniwig generate --input fragments.bed --output reverse.wig --strand -
```
## Working with Coverage Data
### Accessing Coverage Values
Query coverage at specific positions:
```python
# Get coverage at position
cov = coverage.get_coverage("chr1", 1000)
# Get coverage over range
cov_array = coverage.get_coverage_range("chr1", 1000, 2000)
# Get coverage statistics
mean_cov = coverage.mean_coverage("chr1", 1000, 2000)
max_cov = coverage.max_coverage("chr1", 1000, 2000)
```
### Coverage Operations
Perform operations on coverage tracks:
```python
# Normalize coverage
normalized = coverage.normalize()
# Smooth coverage
smoothed = coverage.smooth(window_size=10)
# Combine coverage tracks
combined = coverage1.add(coverage2)
# Compute coverage difference
diff = coverage1.subtract(coverage2)
```
## Output Formats
Uniwig supports multiple output formats:
### WIG Format
Standard wiggle format:
```
fixedStep chrom=chr1 start=1000 step=1
12
15
18
22
...
```
### BigWig Format
Binary format for efficient storage and access:
```bash
# Generate BigWig
gtars uniwig generate --input fragments.bed --output coverage.bw --format bigwig
```
### BedGraph Format
Flexible format for variable coverage:
```
chr1 1000 1001 12
chr1 1001 1002 15
chr1 1002 1003 18
```
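Writing bedGraph by hand is occasionally useful when exporting custom per-interval values. A stdlib-only sketch with hypothetical data:
```python
# Write (chrom, start, end, value) records as bedGraph (stdlib only).
intervals = [
    ("chr1", 1000, 1001, 12),
    ("chr1", 1001, 1002, 15),
    ("chr1", 1002, 1003, 18),
]
with open("custom.bedgraph", "w") as out:
    for chrom, start, end, value in intervals:
        out.write(f"{chrom}\t{start}\t{end}\t{value}\n")
```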
## Use Cases
### ATAC-seq Analysis
Generate chromatin accessibility profiles:
```python
# Generate ATAC-seq coverage
coverage = gtars.uniwig.coverage_from_bed("atac_fragments.bed", resolution=1)
# Identify accessible regions
peaks = coverage.call_peaks(threshold=10)
```
### ChIP-seq Peak Visualization
Create coverage tracks for ChIP-seq data:
```bash
# Generate coverage for visualization
gtars uniwig generate --input chip_seq_fragments.bed \
--output chip_coverage.bw \
--format bigwig
```
### RNA-seq Coverage
Compute read coverage for RNA-seq:
```python
# Generate strand-specific RNA-seq coverage
fwd = gtars.uniwig.coverage_from_bed("rnaseq.bed", strand="+")
rev = gtars.uniwig.coverage_from_bed("rnaseq.bed", strand="-")
# Export for IGV
fwd.to_bigwig("rnaseq_fwd.bw")
rev.to_bigwig("rnaseq_rev.bw")
```
### Differential Coverage Analysis
Compare coverage between samples:
```python
# Generate coverage for two samples
control = gtars.uniwig.coverage_from_bed("control.bed")
treatment = gtars.uniwig.coverage_from_bed("treatment.bed")
# Compute fold change
fold_change = treatment.divide(control)
# Find differential regions
diff_regions = fold_change.find_regions(threshold=2.0)
```
## Performance Optimization
- Match resolution to the analysis (e.g., base-pair resolution for peak calling, 10-100 bp for browser tracks)
- BigWig format is recommended for large datasets
- Parallel processing is available across chromosomes (see the sketch below)
- Memory-efficient streaming is available for large files
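To make the per-chromosome parallelism point concrete without relying on gtars internals, here is a pure-Python sketch that groups BED intervals by chromosome and bins coverage in parallel. All file and function names are hypothetical and the binning is deliberately naive:
```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

RESOLUTION = 10  # bin width in bp

def bin_coverage(intervals):
    # Count how many intervals touch each fixed-width bin.
    bins = defaultdict(int)
    for start, end in intervals:
        for b in range(start // RESOLUTION, (end - 1) // RESOLUTION + 1):
            bins[b] += 1
    return dict(bins)

def main():
    by_chrom = defaultdict(list)
    with open("fragments.bed") as fh:  # hypothetical input
        for line in fh:
            chrom, start, end = line.split()[:3]
            by_chrom[chrom].append((int(start), int(end)))
    # One worker per chromosome: coverage is independent across chromosomes.
    with ProcessPoolExecutor() as pool:
        results = dict(zip(by_chrom, pool.map(bin_coverage, by_chrom.values())))
    for chrom, bins in results.items():
        print(chrom, len(bins), "non-empty bins")

if __name__ == "__main__":
    main()
```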

View File

@@ -0,0 +1,156 @@
# Overlap Detection and IGD
The overlaprs module provides efficient overlap detection between genomic intervals using the Integrated Genome Database (IGD) data structure.
## IGD Index
IGD is a specialized data structure for fast genomic interval queries and overlap detection.
### Building an IGD Index
Create indexes from genomic region files:
```python
import gtars
# Build IGD index from BED file
igd = gtars.igd.build_index("regions.bed")
# Save index for reuse
igd.save("regions.igd")
# Load existing index
igd = gtars.igd.load_index("regions.igd")
```
### Querying Overlaps
Find overlapping regions efficiently:
```python
# Query a single region
overlaps = igd.query("chr1", 1000, 2000)
# Query multiple regions
results = []
for chrom, start, end in query_regions:
    overlaps = igd.query(chrom, start, end)
    results.append(overlaps)
# Get overlap counts only
count = igd.count_overlaps("chr1", 1000, 2000)
```
## CLI Usage
The overlaprs command-line tool provides overlap detection:
```bash
# Find overlaps between two BED files
gtars overlaprs query --index regions.bed --query query_regions.bed
# Count overlaps
gtars overlaprs count --index regions.bed --query query_regions.bed
# Output overlapping regions
gtars overlaprs overlap --index regions.bed --query query_regions.bed --output overlaps.bed
```
### IGD CLI Commands
Build and query IGD indexes:
```bash
# Build IGD index
gtars igd build --input regions.bed --output regions.igd
# Query IGD index
gtars igd query --index regions.igd --region "chr1:1000-2000"
# Batch query from file
gtars igd query --index regions.igd --query-file queries.bed --output results.bed
```
## Python API
### Overlap Detection
Compute overlaps between region sets:
```python
import gtars
# Load two region sets
set_a = gtars.RegionSet.from_bed("regions_a.bed")
set_b = gtars.RegionSet.from_bed("regions_b.bed")
# Find overlaps
overlaps = set_a.overlap(set_b)
# Get regions from A that overlap with B
overlapping_a = set_a.filter_overlapping(set_b)
# Get regions from A that don't overlap with B
non_overlapping_a = set_a.filter_non_overlapping(set_b)
```
### Overlap Statistics
Calculate overlap metrics:
```python
# Count overlaps
overlap_count = set_a.count_overlaps(set_b)
# Calculate overlap fraction
overlap_fraction = set_a.overlap_fraction(set_b)
# Get overlap coverage
coverage = set_a.overlap_coverage(set_b)
```
## Performance Characteristics
IGD provides efficient querying (a toy illustration follows the list):
- **Index construction**: O(n log n), where n is the number of regions
- **Query time**: O(k + log n), where k is the number of overlaps
- **Memory efficient**: Compact representation of genomic intervals
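To build intuition for these costs, here is a toy binned interval index in pure Python. It is not the IGD implementation, but it shows why a query touches only a handful of bins plus the k overlapping hits:
```python
from collections import defaultdict

BIN = 100_000  # bin width in bp (toy value)

class ToyBinIndex:
    """Toy IGD-like index: intervals bucketed by fixed genomic bins."""

    def __init__(self):
        self.bins = defaultdict(list)

    def add(self, chrom, start, end):
        for b in range(start // BIN, end // BIN + 1):
            self.bins[(chrom, b)].append((start, end))

    def query(self, chrom, start, end):
        hits = set()
        for b in range(start // BIN, end // BIN + 1):
            for s, e in self.bins[(chrom, b)]:
                if s < end and e > start:  # half-open overlap test
                    hits.add((chrom, s, e))
        return sorted(hits)

idx = ToyBinIndex()
idx.add("chr1", 1000, 2000)
idx.add("chr1", 150_000, 160_000)
print(idx.query("chr1", 1500, 1600))  # [('chr1', 1000, 2000)]
```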
## Use Cases
### Regulatory Element Analysis
Identify overlap between genomic features:
```python
# Find transcription factor binding sites overlapping promoters
tfbs = gtars.RegionSet.from_bed("chip_seq_peaks.bed")
promoters = gtars.RegionSet.from_bed("promoters.bed")
overlapping_tfbs = tfbs.filter_overlapping(promoters)
print(f"Found {len(overlapping_tfbs)} TFBS in promoters")
```
### Variant Annotation
Annotate variants with overlapping features:
```python
# Check which variants overlap with coding regions
variants = gtars.RegionSet.from_bed("variants.bed")
cds = gtars.RegionSet.from_bed("coding_sequences.bed")
coding_variants = variants.filter_overlapping(cds)
```
### Chromatin State Analysis
Compare chromatin states across samples:
```python
# Find regions with consistent chromatin states
sample1 = gtars.RegionSet.from_bed("sample1_peaks.bed")
sample2 = gtars.RegionSet.from_bed("sample2_peaks.bed")
consistent_regions = sample1.overlap(sample2)
```

View File

@@ -0,0 +1,211 @@
# Python API Reference
Comprehensive reference for gtars Python bindings.
## Installation
```bash
# Install gtars Python package
uv pip install gtars
# Or with pip
pip install gtars
```
## Core Classes
### RegionSet
Manage collections of genomic intervals:
```python
import gtars
# Create from BED file
regions = gtars.RegionSet.from_bed("regions.bed")
# Create from coordinates
regions = gtars.RegionSet([
("chr1", 1000, 2000),
("chr1", 3000, 4000),
("chr2", 5000, 6000)
])
# Access regions
for region in regions:
    print(f"{region.chromosome}:{region.start}-{region.end}")
# Get region count
num_regions = len(regions)
# Get total coverage
total_coverage = regions.total_coverage()
```
### Region Operations
Perform operations on region sets:
```python
# Sort regions
sorted_regions = regions.sort()
# Merge overlapping regions
merged = regions.merge()
# Filter by size
large_regions = regions.filter_by_size(min_size=1000)
# Filter by chromosome
chr1_regions = regions.filter_by_chromosome("chr1")
```
### Set Operations
Perform set operations on genomic regions:
```python
# Load two region sets
set_a = gtars.RegionSet.from_bed("set_a.bed")
set_b = gtars.RegionSet.from_bed("set_b.bed")
# Union
union = set_a.union(set_b)
# Intersection
intersection = set_a.intersect(set_b)
# Difference
difference = set_a.subtract(set_b)
# Symmetric difference
sym_diff = set_a.symmetric_difference(set_b)
```
## Data Export
### Writing BED Files
Export regions to BED format:
```python
# Write to BED file
regions.to_bed("output.bed")
# Write with scores
regions.to_bed("output.bed", scores=score_array)
# Write with names
regions.to_bed("output.bed", names=name_list)
```
### Format Conversion
Convert between formats:
```python
# BED to JSON
regions = gtars.RegionSet.from_bed("input.bed")
regions.to_json("output.json")
# JSON to BED
regions = gtars.RegionSet.from_json("input.json")
regions.to_bed("output.bed")
```
## NumPy Integration
Seamless integration with NumPy arrays:
```python
import numpy as np
# Export to NumPy arrays
starts = regions.starts_array() # NumPy array of start positions
ends = regions.ends_array() # NumPy array of end positions
sizes = regions.sizes_array() # NumPy array of region sizes
# Create from NumPy arrays
chromosomes = ["chr1"] * len(starts)
regions = gtars.RegionSet.from_arrays(chromosomes, starts, ends)
```
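Once coordinates are in NumPy, summary statistics are one-liners. A self-contained example with synthetic arrays standing in for the output of `starts_array()` and `ends_array()`:
```python
import numpy as np

# Synthetic coordinate arrays standing in for regions.starts_array(), etc.
starts = np.array([1000, 3000, 5000])
ends = np.array([2000, 4000, 6000])

sizes = ends - starts
print(sizes.sum())   # 3000 bases covered (regions here are disjoint)
print(sizes.mean())  # mean region size: 1000.0
```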
## Parallel Processing
Leverage parallel processing for large datasets:
```python
# Enable parallel processing
regions = gtars.RegionSet.from_bed("large_file.bed", parallel=True)
# Parallel operations
result = regions.parallel_apply(custom_function)
```
## Memory Management
Efficient memory usage for large datasets:
```python
# Stream large BED files
for chunk in gtars.RegionSet.stream_bed("large_file.bed", chunk_size=10000):
    process_chunk(chunk)
# Memory-mapped mode
regions = gtars.RegionSet.from_bed("large_file.bed", mmap=True)
```
## Error Handling
Handle common errors:
```python
try:
    regions = gtars.RegionSet.from_bed("file.bed")
except gtars.FileNotFoundError:
    print("File not found")
except gtars.InvalidFormatError as e:
    print(f"Invalid BED format: {e}")
except gtars.ParseError as e:
    print(f"Parse error at line {e.line}: {e.message}")
```
## Configuration
Configure gtars behavior:
```python
# Set global options
gtars.set_option("parallel.threads", 4)
gtars.set_option("memory.limit", "4GB")
gtars.set_option("warnings.strict", True)
# Context manager for temporary options
with gtars.option_context("parallel.threads", 8):
    # Use 8 threads for this block
    regions = gtars.RegionSet.from_bed("large_file.bed", parallel=True)
```
## Logging
Enable logging for debugging:
```python
import logging
import gtars

# Enable gtars logging
gtars.set_log_level("DEBUG")
# Or use Python logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("gtars")
```
## Performance Tips
- Use parallel processing for large datasets
- Enable memory-mapped mode for very large files
- Stream data when possible to reduce memory usage
- Pre-sort regions before operations when applicable (see the merge sketch below)
- Use NumPy arrays for numerical computations
- Cache frequently accessed data
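Pre-sorting pays off because many interval operations reduce to a single linear pass over sorted input. A pure-Python merge of sorted intervals as an illustration:
```python
# Merge overlapping intervals in one pass over sorted input (pure Python).
def merge_sorted(intervals):
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

intervals = sorted([(3000, 4000), (1000, 2000), (1500, 2500)])
print(merge_sorted(intervals))  # [(1000, 2500), (3000, 4000)]
```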

View File

@@ -0,0 +1,147 @@
# Reference Sequence Management
The refget module handles reference sequence retrieval and digest computation, following the refget protocol for sequence identification.
## RefgetStore
RefgetStore manages reference sequences and their digests:
```python
import gtars
# Create RefgetStore
store = gtars.RefgetStore()
# Add sequence
store.add_sequence("chr1", sequence_data)
# Retrieve sequence
seq = store.get_sequence("chr1")
# Get sequence digest
digest = store.get_digest("chr1")
```
## Sequence Digests
Compute and verify sequence digests:
```python
# Compute digest for sequence
from gtars.refget import compute_digest
digest = compute_digest(sequence_data)
# Verify digest matches
is_valid = store.verify_digest("chr1", expected_digest)
```
## Integration with Reference Genomes
Work with standard reference genomes:
```python
# Load reference genome
store = gtars.RefgetStore.from_fasta("hg38.fa")
# Get chromosome sequences
chr1 = store.get_sequence("chr1")
chr2 = store.get_sequence("chr2")
# Get subsequence
region_seq = store.get_subsequence("chr1", 1000, 2000)
```
## CLI Usage
Manage reference sequences from command line:
```bash
# Compute digest for FASTA file
gtars refget digest --input genome.fa --output digests.txt
# Verify sequence digest
gtars refget verify --sequence sequence.fa --digest expected_digest
```
## Refget Protocol Compliance
The refget module follows the GA4GH refget protocol:
### Digest Computation
Digests follow the GA4GH sha512t24u scheme: SHA-512, truncated to 24 bytes and base64url-encoded:
```python
# Compute refget-compliant digest
digest = gtars.refget.compute_digest(sequence)
# Returns: "SQ.abc123..."
```
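The digest is easy to reproduce with the standard library, which is handy for cross-checking values independently of gtars. A sketch of GA4GH sha512t24u:
```python
import base64
import hashlib

def sha512t24u(sequence: str) -> str:
    """GA4GH sha512t24u: SHA-512, truncated to 24 bytes, base64url-encoded.
    Refget expects the sequence normalized (uppercase, no whitespace)."""
    digest = hashlib.sha512(sequence.upper().encode("ascii")).digest()[:24]
    return base64.urlsafe_b64encode(digest).decode("ascii")

print("SQ." + sha512t24u("ACGT"))
```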
### Sequence Retrieval
Retrieve sequences by digest:
```python
# Get sequence by refget digest
seq = store.get_sequence_by_digest("SQ.abc123...")
```
## Use Cases
### Reference Validation
Verify reference genome integrity:
```python
# Compute digests for reference
store = gtars.RefgetStore.from_fasta("reference.fa")
digests = {chrom: store.get_digest(chrom) for chrom in store.chromosomes}
# Compare with expected digests
for chrom, expected in expected_digests.items():
    actual = digests[chrom]
    if actual != expected:
        print(f"Mismatch for {chrom}: {actual} != {expected}")
```
### Sequence Extraction
Extract specific genomic regions:
```python
# Extract regions of interest
store = gtars.RefgetStore.from_fasta("hg38.fa")
regions = [
("chr1", 1000, 2000),
("chr2", 5000, 6000),
("chr3", 10000, 11000)
]
sequences = [store.get_subsequence(c, s, e) for c, s, e in regions]
```
### Cross-Reference Comparison
Compare sequences across different references:
```python
# Load two reference versions
hg19 = gtars.RefgetStore.from_fasta("hg19.fa")
hg38 = gtars.RefgetStore.from_fasta("hg38.fa")
# Compare digests
for chrom in hg19.chromosomes:
    digest_19 = hg19.get_digest(chrom)
    digest_38 = hg38.get_digest(chrom)
    if digest_19 != digest_38:
        print(f"{chrom} differs between hg19 and hg38")
```
## Performance Notes
- Sequences loaded on demand
- Digests cached after computation
- Efficient subsequence extraction
- Memory-mapped file support for large genomes

View File

@@ -0,0 +1,103 @@
# Genomic Tokenizers
Tokenizers convert genomic regions into discrete tokens for machine learning applications; they are particularly useful for training genomic deep learning models.
## Python API
### Creating a Tokenizer
Load tokenizer configurations from various sources:
```python
import gtars
# From BED file
tokenizer = gtars.tokenizers.TreeTokenizer.from_bed_file("regions.bed")
# From configuration file
tokenizer = gtars.tokenizers.TreeTokenizer.from_config("tokenizer_config.yaml")
# From region string
tokenizer = gtars.tokenizers.TreeTokenizer.from_region_string("chr1:1000-2000")
```
### Tokenizing Genomic Regions
Convert genomic coordinates to tokens:
```python
# Tokenize a single region
token = tokenizer.tokenize("chr1", 1000, 2000)
# Tokenize multiple regions
tokens = []
for chrom, start, end in regions:
    token = tokenizer.tokenize(chrom, start, end)
    tokens.append(token)
```
### Token Properties
Access token information:
```python
# Get token ID
token_id = token.id
# Get genomic coordinates
chrom = token.chromosome
start = token.start
end = token.end
# Get token metadata
metadata = token.metadata
```
## Use Cases
### Machine Learning Preprocessing
Tokenizers prepare genomic data for ML models in several ways (a minimal sketch follows the list):
1. **Sequence modeling**: Convert genomic intervals into discrete tokens for transformer models
2. **Position encoding**: Create consistent positional encodings across datasets
3. **Data augmentation**: Generate alternative tokenizations for training
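As a minimal illustration of the idea (not the TreeTokenizer algorithm), the sketch below maps each region to the IDs of the fixed-width genomic bins it covers, producing a discrete vocabulary:
```python
# Toy fixed-resolution tokenizer: map a region to bin-index tokens.
RESOLUTION = 1000  # bp per token

def region_to_tokens(chrom_index, start, end, resolution=RESOLUTION):
    # Offset by chromosome so token IDs are unique genome-wide;
    # the 250 Mb chromosome-size bound is an assumption.
    bins_per_chrom = 250_000_000 // resolution
    base = chrom_index * bins_per_chrom
    return [base + b for b in range(start // resolution, (end - 1) // resolution + 1)]

print(region_to_tokens(0, 1000, 2000))  # [1]
print(region_to_tokens(0, 1500, 3500))  # [1, 2, 3]
```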
### Integration with geniml
The tokenizers module integrates with the geniml library for genomic ML:
```python
# Tokenize regions for geniml
from gtars.tokenizers import TreeTokenizer
import geniml
tokenizer = TreeTokenizer.from_bed_file("training_regions.bed")
tokens = [tokenizer.tokenize(r.chrom, r.start, r.end) for r in regions]
# Use tokens in a geniml model (class and signature illustrative;
# see the geniml docs for concrete models such as Region2Vec)
model = geniml.Model(vocab_size=tokenizer.vocab_size)
```
## Configuration Format
Tokenizer configuration files support YAML format:
```yaml
# tokenizer_config.yaml
type: tree
resolution: 1000 # Token resolution in base pairs
chromosomes:
- chr1
- chr2
- chr3
options:
overlap_handling: merge
gap_threshold: 100
```
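A configuration like this can be loaded and sanity-checked before constructing a tokenizer. A sketch assuming PyYAML is installed (`pip install pyyaml`):
```python
import yaml  # assumes PyYAML is installed

with open("tokenizer_config.yaml") as fh:
    config = yaml.safe_load(fh)

assert config["type"] == "tree", "unexpected tokenizer type"
print(f"resolution: {config['resolution']} bp")
print(f"chromosomes: {', '.join(config['chromosomes'])}")
print(f"overlap handling: {config['options']['overlap_handling']}")
```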
## Performance Considerations
- TreeTokenizer uses efficient data structures for fast tokenization
- Batch tokenization is recommended for large datasets
- Pre-loading tokenizers reduces overhead for repeated operations