Update SKILL.md files to wrap each skill's description field in double quotation marks, ensuring clarity and consistency across all entries.

Haoxuan "Orion" Li
2025-10-20 20:51:50 -07:00
parent c0fa308af8
commit c2b16829f6
68 changed files with 68 additions and 68 deletions


@@ -1,6 +1,6 @@
---
name: pytorch-lightning
-description: PyTorch Lightning deep learning framework skill for organizing PyTorch code and automating training workflows. Use this skill for: creating LightningModules with training_step/validation_step hooks, implementing DataModules for data loading and preprocessing, configuring Trainer with accelerators/devices/strategies, setting up distributed training (DDP/FSDP/DeepSpeed), implementing callbacks (ModelCheckpoint/EarlyStopping), configuring loggers (TensorBoard/WandB/MLflow), converting PyTorch code to Lightning format, optimizing performance with mixed precision/gradient accumulation, debugging with fast_dev_run/overfit_batches, checkpointing and resuming training, hyperparameter tuning with Tuner, handling multi-GPU/multi-node training, memory optimization for large models, experiment tracking and reproducibility, custom training loops, validation/testing workflows, prediction pipelines, and production deployment. Includes templates, API references, distributed training guides, and best practices for efficient deep learning development.
+description: "PyTorch Lightning deep learning framework skill for organizing PyTorch code and automating training workflows. Use this skill for: creating LightningModules with training_step/validation_step hooks, implementing DataModules for data loading and preprocessing, configuring Trainer with accelerators/devices/strategies, setting up distributed training (DDP/FSDP/DeepSpeed), implementing callbacks (ModelCheckpoint/EarlyStopping), configuring loggers (TensorBoard/WandB/MLflow), converting PyTorch code to Lightning format, optimizing performance with mixed precision/gradient accumulation, debugging with fast_dev_run/overfit_batches, checkpointing and resuming training, hyperparameter tuning with Tuner, handling multi-GPU/multi-node training, memory optimization for large models, experiment tracking and reproducibility, custom training loops, validation/testing workflows, prediction pipelines, and production deployment. Includes templates, API references, distributed training guides, and best practices for efficient deep learning development."
---
# PyTorch Lightning
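
The change is mechanical, but it has a practical side effect worth noting: the unquoted description contains ": " inside the text ("Use this skill for: creating ..."), which strict YAML parsers reject in a plain scalar. Whether this motivated the commit is an assumption on my part; the message only cites clarity and consistency. A minimal sketch with PyYAML showing the before/after behavior:

```python
import yaml  # pip install pyyaml

# Frontmatter value as it looked BEFORE this commit: unquoted, with ": "
# inside the text. A strict YAML parser treats the inner colon as the start
# of a new mapping key and raises an error.
before = 'description: Use this skill for: creating LightningModules'
try:
    yaml.safe_load(before)
except yaml.YAMLError as err:
    print("unquoted value fails to parse:", err)

# Value as it looks AFTER this commit: double-quoted, so the colon is just
# part of the string and parsing succeeds.
after = 'description: "Use this skill for: creating LightningModules"'
print(yaml.safe_load(after))
# -> {'description': 'Use this skill for: creating LightningModules'}
```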