Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git, synced 2026-04-26 20:03:52 +00:00
Adds a Claude Code plugin structure that exposes each NVIDIA DGX Spark
playbook as a triggerable skill, with an index skill ('dgx-spark') that
routes users to the right leaf based on intent and encodes the
relationship graph between playbooks (prerequisites, alternatives,
composes-with, upgrade paths).
Structure:
- overrides/*.md: hand-curated frontmatter + Related sections
- scripts/generate.mjs: zero-dep Node generator (nvidia + overrides → skills)
- scripts/install.sh: symlinks skills into ~/.claude/skills (--plugin mode available); see the sketch after this summary
- skills/: committed, browsable, installable without Node
- .github/workflows/: auto-regenerates skills/ when playbooks/overrides change
Initial curated leaves: ollama, open-webui, vllm, connect-to-your-spark.
Remaining 37 leaves use generator fallback (title + tagline + summary
extracted from README) and can be curated incrementally via overrides/.
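For orientation, a rough sketch of the kind of symlinking scripts/install.sh performs is shown below. The loop is an illustrative assumption, not the script's actual logic, and it ignores --plugin mode.

```bash
# Illustrative sketch of the install step (see scripts/install.sh for the real logic).
# Symlink each generated skill directory into ~/.claude/skills.
mkdir -p ~/.claude/skills
for skill in skills/*/; do
  ln -sfn "$(pwd)/${skill%/}" ~/.claude/skills/"$(basename "$skill")"
done
```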
---
name: dgx-spark-nemotron
description: Run the Nemotron-3-Nano-30B model with llama.cpp on NVIDIA DGX Spark. Use when setting up Nemotron on Spark hardware.
---
<!-- GENERATED:BEGIN from nvidia/nemotron/README.md -->

# Nemotron-3-Nano with llama.cpp

> Run Nemotron-3-Nano-30B model using llama.cpp on DGX Spark

Nemotron-3-Nano-30B-A3B is NVIDIA's powerful language model featuring a 30 billion parameter Mixture of Experts (MoE) architecture with only 3 billion active parameters. This efficient design enables high-quality inference with lower computational requirements, making it ideal for DGX Spark's GB10 GPU.

This playbook demonstrates how to run Nemotron-3-Nano using llama.cpp, which compiles CUDA kernels at build time specifically for your GPU architecture. The model includes built-in reasoning (thinking mode) and tool calling support via the chat template.
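For orientation, a minimal sketch of that build-and-serve flow is shown below. The exact build flags, model files, and quantization for DGX Spark are defined in the full playbook; the repository URL, the CMAKE_CUDA_ARCHITECTURES=native setting, the GGUF filename, and the port here are illustrative assumptions, not taken from it.

```bash
# Illustrative sketch only; consult the full playbook for the exact steps.
# Build llama.cpp with CUDA, compiling kernels for the local GPU architecture
# (CMAKE_CUDA_ARCHITECTURES=native is an assumption, not the playbook's flag).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=native
cmake --build build --config Release -j

# Serve a GGUF build of the model behind an OpenAI-compatible API.
# The model filename and port are placeholders.
./build/bin/llama-server \
  -m models/nemotron-3-nano-30b-a3b.Q4_K_M.gguf \
  --host 0.0.0.0 --port 8080
```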
**Outcome**: You will have a fully functional Nemotron-3-Nano-30B-A3B inference server running on your DGX Spark, accessible via an OpenAI-compatible API. This setup enables:

- Local LLM inference
- OpenAI-compatible API endpoint for easy integration with existing tools
- Built-in reasoning and tool calling capabilities
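As a quick smoke test of that endpoint, any OpenAI-style client or plain curl works. The port and prompt below are assumptions (8080 is llama-server's default), not values taken from the playbook.

```bash
# Minimal check against the OpenAI-compatible chat endpoint.
# Assumes llama-server's default port 8080; adjust if the playbook uses another.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "In one sentence, what is a Mixture of Experts model?"}
        ]
      }'
```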
**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nemotron/README.md`

<!-- GENERATED:END -->