---
name: dgx-spark-unsloth
description: Optimized fine-tuning with Unsloth on NVIDIA DGX Spark. Use when setting up Unsloth on Spark hardware.
---

# Unsloth on DGX Spark

Optimized fine-tuning with Unsloth

- **Performance-first:** Unsloth claims roughly 2× faster training on a single GPU (and up to 30× in multi-GPU setups) with lower memory usage than standard fine-tuning methods.
- **Kernel-level optimizations:** Core compute is built on custom Triton kernels and hand-optimized math to boost throughput and efficiency.
- **Quantization & model formats:** Supports dynamic quantization (4-bit and 16-bit) and GGUF export to reduce footprint while aiming to retain accuracy.
- **Broad model support:** Works with many LLMs (LLaMA, Mistral, Qwen, DeepSeek, etc.) and supports training, fine-tuning, and exporting to targets such as Ollama, vLLM, GGUF, and Hugging Face.
- **Simplified interface:** Provides easy-to-use notebooks and tools so users can fine-tune models with minimal boilerplate; see the sketch after this list.
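
To make the workflow concrete, here is a minimal QLoRA fine-tuning sketch using Unsloth's `FastLanguageModel` API together with TRL's `SFTTrainer`, matching the shape of Unsloth's example notebooks (newer TRL releases may differ). The model name, dataset, and hyperparameters are illustrative placeholders, not values from the playbook:

```python
# Minimal QLoRA fine-tuning sketch with Unsloth (illustrative values).
from unsloth import FastLanguageModel
from datasets import Dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a 4-bit quantized base model (model name is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of parameters is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny in-memory stand-in; replace with a real dataset exposing a "text" column.
dataset = Dataset.from_dict({
    "text": ["### Instruction:\nSay hello.\n\n### Response:\nHello!"],
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=30,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```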

**Outcome:** You'll set up Unsloth for optimized fine-tuning of large language models on NVIDIA DGX Spark, achieving up to 2× faster training with reduced memory usage through parameter-efficient fine-tuning methods such as LoRA and QLoRA.
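
Once training completes, the trained model can be exported to the targets listed above. A short sketch using Unsloth's `save_pretrained_gguf` helper, assuming the `model` and `tokenizer` from the fine-tuning sketch; the output paths and quantization method are illustrative, not playbook-mandated:

```python
# Save just the LoRA adapters (small; useful for resuming or sharing).
model.save_pretrained("lora_adapters")
tokenizer.save_pretrained("lora_adapters")

# Export a merged GGUF file for llama.cpp / Ollama; q4_k_m is a common
# quantization choice, chosen here only as an example.
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method="q4_k_m")
```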

**Duration:** 30–60 minutes for initial setup and a test run

**Full playbook:** /Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/unsloth/README.md