Improve the Hugging Face transformers skill

This commit is contained in:
Timothy Kassis
2025-11-03 16:44:15 -08:00
parent 86d8878eeb
commit c56fa43747
12 changed files with 2041 additions and 2705 deletions

---
name: transformers
description: This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
---
# Transformers
## Overview
The Hugging Face Transformers library provides access to thousands of pre-trained models for NLP, computer vision, audio, and multimodal tasks. Use this skill for quick inference through pipelines, training and fine-tuning via the Trainer API, and flexible text generation with various decoding strategies.
## Installation
Install transformers and core dependencies:
```bash
uv pip install torch transformers datasets evaluate accelerate
```
For vision tasks, add:
```bash
uv pip install timm pillow
```
For audio tasks, add:
```bash
uv pip install librosa soundfile
```
## Authentication
Many models on the Hugging Face Hub require authentication. Set up access:
```python
from huggingface_hub import login
login() # Follow prompts to enter token
```
Or set the `HF_TOKEN` environment variable:
```bash
export HF_TOKEN="your_token_here"
```
Get tokens at: https://huggingface.co/settings/tokens
## Quick Start
Use the `pipeline()` API for fast inference without manual configuration. Pipelines abstract away tokenization, model invocation, and post-processing:
```python
from transformers import pipeline
# Text classification
classifier = pipeline("text-classification")
result = classifier("This movie was excellent!")
# Named entity recognition
ner = pipeline("token-classification")
entities = ner("Sarah works at Microsoft in Seattle")
# Question answering
qa = pipeline("question-answering")
answer = qa(question="What is the capital?", context="Paris is the capital of France.")
# Text generation
generator = pipeline("text-generation", model="gpt2")
text = generator("Once upon a time", max_length=50)
# Image classification
image_classifier = pipeline("image-classification")
predictions = image_classifier("image.jpg")
```
**When to use pipelines:**
- Quick prototyping and testing
- Simple inference tasks without custom logic
- Demonstrations and examples
- Production inference for standard tasks
## Core Capabilities
### 1. Pipelines for Quick Inference
Use for simple, optimized inference across many tasks, including text generation, classification, NER, question answering, summarization, translation, image classification, object detection, and audio classification.
**When to use**: Quick prototyping, simple inference tasks, no custom preprocessing needed.
**Available pipeline tasks:**
- **NLP**: text-classification, token-classification, question-answering, summarization, translation, text-generation, fill-mask, zero-shot-classification
- **Vision**: image-classification, object-detection, image-segmentation, depth-estimation, zero-shot-image-classification
- **Audio**: automatic-speech-recognition, audio-classification, text-to-audio
- **Multimodal**: image-to-text, visual-question-answering, image-text-to-text
See `references/pipelines.md` for comprehensive task coverage and optimization.
### 2. Model Loading and Management
Load pre-trained models with fine-grained control over configuration, device placement, and precision.
**When to use**: Custom model initialization, advanced device management, model inspection.
See `references/models.md` for loading patterns and best practices.
### 3. Text Generation
Generate text with LLMs using various decoding strategies (greedy, beam search, sampling) and control parameters (temperature, top-k, top-p).
**When to use**: Creative text generation, code generation, conversational AI, text completion.
See `references/generation.md` for generation strategies and parameters.
### 4. Training and Fine-Tuning
Fine-tune pre-trained models on custom datasets using the Trainer API with automatic mixed precision, distributed training, and logging.
**When to use**: Task-specific model adaptation, domain adaptation, improving model performance.
See `references/training.md` for training workflows and best practices.
### 5. Tokenization
Convert text to tokens and token IDs for model input, with padding, truncation, and special token handling.
**When to use**: Custom preprocessing pipelines, understanding model inputs, batch processing.
See `references/tokenizers.md` for tokenization details.
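For example, a minimal sketch of direct tokenizer usage (checkpoint and inputs are illustrative):
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(
    ["Short text", "A somewhat longer piece of text"],
    padding=True,         # pad to the longest sequence in the batch
    truncation=True,      # drop tokens beyond the model's max length
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape)
print(tokenizer.decode(batch["input_ids"][0]))  # shows [CLS]/[SEP]/[PAD] special tokens
```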
## Common Patterns
### Pattern 1: Simple Inference
For straightforward tasks, use pipelines:
```python
from transformers import pipeline
pipe = pipeline("task-name", model="model-id")
output = pipe(input_data)
```
### Pattern 2: Custom Model Usage
For advanced control, load model and tokenizer separately:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("model-id")
model = AutoModelForCausalLM.from_pretrained("model-id", device_map="auto")
inputs = tokenizer("text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
result = tokenizer.decode(outputs[0])
```
### Pattern 3: Fine-Tuning
For task adaptation, use the Trainer API:
```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)
from datasets import load_dataset
# 1. Load and tokenize data
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# 2. Load model
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
)
# 3. Configure training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)
# 4. Create trainer and train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["test"],
)
trainer.train()
```
**Key training features** (see the sketch after this list):
- Mixed precision training (fp16/bf16)
- Distributed training (multi-GPU, multi-node)
- Gradient accumulation
- Learning rate scheduling with warmup
- Checkpoint management
- Hyperparameter search
- Push to Hugging Face Hub
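A minimal sketch enabling several of these features through `TrainingArguments` (values are illustrative, not tuned recommendations):
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="./results",
    bf16=True,                      # mixed precision on supported hardware
    gradient_accumulation_steps=4,  # larger effective batch size
    warmup_steps=500,               # learning rate warmup
    save_total_limit=2,             # keep only the two most recent checkpoints
    push_to_hub=True,               # upload checkpoints to the Hugging Face Hub
)
```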
For detailed training documentation, see `references/training.md`.
## Text Generation Strategies
Generate text using various decoding strategies including greedy decoding, beam search, sampling, and more.
**Generation strategies:**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")
# Greedy decoding (deterministic)
outputs = model.generate(**inputs, max_new_tokens=50)
# Beam search (explores multiple hypotheses)
outputs = model.generate(
**inputs,
max_new_tokens=50,
num_beams=5,
early_stopping=True
)
# Sampling (creative, diverse)
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
temperature=0.7,
top_p=0.9,
top_k=50
)
```
**Generation parameters:**
- `temperature`: Controls randomness (0.1-2.0)
- `top_k`: Sample from top-k tokens
- `top_p`: Nucleus sampling threshold
- `num_beams`: Number of beams for beam search
- `repetition_penalty`: Discourage repetition
- `no_repeat_ngram_size`: Prevent repeating n-grams
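These parameters can also be bundled in a `GenerationConfig`; a sketch with illustrative values:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
gen_config = GenerationConfig(
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_k=50,
    top_p=0.9,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=gen_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```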
For comprehensive generation documentation, see `references/generation.md`.
## Task-Specific Patterns
Common task patterns with appropriate model classes:
**Text Classification:**
```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained(
"bert-base-uncased",
num_labels=3,
id2label={0: "negative", 1: "neutral", 2: "positive"}
)
```
**Named Entity Recognition (Token Classification):**
```python
from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained(
"bert-base-uncased",
num_labels=9 # Number of entity types
)
```
**Question Answering:**
```python
from transformers import AutoModelForQuestionAnswering
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
```
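For extractive QA without a pipeline, the answer span comes from the start/end logits. A minimal sketch (the bare `bert-base-uncased` checkpoint is not fine-tuned for QA, so substitute a QA-tuned checkpoint for real use):
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
model_id = "bert-base-uncased"  # illustrative; use a QA-fine-tuned checkpoint in practice
model = AutoModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(
    "What is the capital?",             # question
    "Paris is the capital of France.",  # context
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```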
**Summarization and Translation (Seq2Seq):**
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
```
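A short usage sketch for T5-style summarization (the task-prefix convention is specific to T5; the input text is illustrative):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")
# T5 expects a task prefix, e.g. "summarize: " or "translate English to German: "
text = "summarize: The Transformers library provides pre-trained models for NLP, vision, and audio tasks."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```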
**Image Classification:**
```python
from transformers import AutoModelForImageClassification
model = AutoModelForImageClassification.from_pretrained(
"google/vit-base-patch16-224",
num_labels=num_classes
)
```
For detailed task-specific workflows including data preprocessing, training, and evaluation, see `references/task_patterns.md`.
## Auto Classes
Use Auto classes for automatic architecture selection based on model checkpoints:
```python
from transformers import (
AutoTokenizer, # Tokenization
AutoModel, # Base model (hidden states)
AutoModelForSequenceClassification,
AutoModelForTokenClassification,
AutoModelForQuestionAnswering,
AutoModelForCausalLM, # GPT-style
AutoModelForMaskedLM, # BERT-style
AutoModelForSeq2SeqLM, # T5, BART
AutoProcessor, # For multimodal models
AutoImageProcessor, # For vision models
)
# Load any model by name
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
```
For comprehensive API documentation, see `references/api_reference.md`.
## Model Loading and Optimization
**Device placement:**
```python
model = AutoModel.from_pretrained("bert-base-uncased", device_map="auto")
```
**Mixed precision:**
```python
import torch
from transformers import AutoModel
model = AutoModel.from_pretrained(
"model-name",
torch_dtype=torch.float16 # or torch.bfloat16
)
```
**Quantization:**
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.float16
)
model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-2-7b-hf",
quantization_config=quantization_config,
device_map="auto"
)
```
## Common Workflows
### Quick Inference Workflow
1. Choose appropriate pipeline for task
2. Load pipeline with optional model specification
3. Pass inputs and get results
4. For batch processing, pass a list of inputs (see the sketch below)
**See:** `scripts/quick_inference.py` for comprehensive pipeline examples
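A hedged batching sketch (pipeline default model; texts are illustrative):
```python
from transformers import pipeline
classifier = pipeline("text-classification", batch_size=8)
texts = ["Great product!", "Terrible service.", "It was okay."]
for text, pred in zip(texts, classifier(texts)):
    print(f"{text!r} -> {pred['label']} ({pred['score']:.3f})")
```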
### Training Workflow
1. Load and preprocess dataset using 🤗 Datasets
2. Tokenize data with appropriate tokenizer
3. Load pre-trained model for specific task
4. Configure TrainingArguments
5. Create Trainer with model, data, and compute_metrics (sketch below)
6. Train with `trainer.train()`
7. Evaluate with `trainer.evaluate()`
8. Save model and optionally push to Hub
**See:** `scripts/fine_tune_classifier.py` for complete training example
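A minimal `compute_metrics` sketch using the `evaluate` library (accuracy is an illustrative metric choice):
```python
import numpy as np
import evaluate
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
# Pass via Trainer(..., compute_metrics=compute_metrics)
```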
### Text Generation Workflow
1. Load causal or seq2seq language model
2. Load tokenizer and tokenize prompt
3. Choose generation strategy (greedy, beam search, sampling)
4. Configure generation parameters
5. Generate with `model.generate()`
6. Decode output tokens to text
**See:** `scripts/generate_text.py` for generation strategy examples
## Best Practices
1. **Use Auto classes** for flexibility across different model architectures
2. **Batch processing** for efficiency - process multiple inputs at once
3. **Device management** - use `device_map="auto"` for automatic placement
4. **Memory optimization** - enable fp16/bf16 or quantization for large models
5. **Checkpoint management** - save checkpoints regularly and load best model
6. **Pipeline for quick tasks** - use pipelines for standard inference tasks
7. **Custom metrics** - define compute_metrics for task-specific evaluation
8. **Gradient accumulation** - use for large effective batch sizes on limited memory
9. **Learning rate warmup** - typically 5-10% of total training steps
10. **Hub integration** - push trained models to Hub for sharing and versioning
## Resources
### scripts/
Executable Python scripts demonstrating common Transformers workflows:
- `quick_inference.py` - Pipeline examples for NLP, vision, audio, and multimodal tasks
- `fine_tune_classifier.py` - Complete fine-tuning workflow with Trainer API
- `generate_text.py` - Text generation with various decoding strategies
Run scripts directly to see examples in action:
```bash
python scripts/quick_inference.py
python scripts/fine_tune_classifier.py
python scripts/generate_text.py
```
### references/
Comprehensive reference documentation loaded into context as needed:
- `api_reference.md` - Core classes and APIs (Auto classes, Trainer, GenerationConfig, etc.)
- `pipelines.md` - All available pipelines organized by modality with examples
- `models.md` - Model loading, saving, and configuration
- `tokenizers.md` - Tokenization and preprocessing
- `training.md` - Training patterns, TrainingArguments, distributed training, callbacks
- `generation.md` - Text generation methods, decoding strategies, parameters
- `task_patterns.md` - Complete workflows for common tasks (classification, NER, QA, summarization, etc.)
When working on specific tasks or features, load the relevant reference file for detailed guidance.
## Additional Information
- **Official Documentation**: https://huggingface.co/docs/transformers/index
- **Model Hub**: https://huggingface.co/models (1M+ pre-trained models)
- **Datasets Hub**: https://huggingface.co/datasets
- **Installation**: `pip install transformers datasets evaluate accelerate`
- **GPU Support**: Requires PyTorch or TensorFlow with CUDA
- **Framework Support**: PyTorch (primary), TensorFlow, JAX/Flax