Adds a Claude Code plugin structure that exposes each NVIDIA DGX Spark
playbook as a triggerable skill, with an index skill ('dgx-spark') that
routes users to the right leaf based on intent and encodes the
relationship graph between playbooks (prerequisites, alternatives,
composes-with, upgrade paths).
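The relationship graph described above could be encoded as a plain object and walked to resolve prerequisites. A minimal sketch — the field names, leaf names, and edges here are illustrative assumptions, not the plugin's actual schema:

```javascript
// Hypothetical shape of the playbook relationship graph; field names and
// edges are assumptions for illustration, not the plugin's real schema.
const graph = {
  "vllm": { alternatives: ["ollama"], prerequisites: ["connect-to-your-spark"] },
  "open-webui": { "composes-with": ["ollama"], prerequisites: ["ollama"] },
};

// Resolve the transitive prerequisite chain for a leaf, depth-first.
function prereqChain(leaf, seen = new Set()) {
  for (const dep of graph[leaf]?.prerequisites ?? []) {
    if (!seen.has(dep)) {
      seen.add(dep);
      prereqChain(dep, seen);
    }
  }
  return [...seen];
}
```

An index skill could use a walk like this to tell a user which playbooks to run first before the one they asked for.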
Structure:
- overrides/*.md hand-curated frontmatter + Related sections
- scripts/generate.mjs zero-dep Node generator: nvidia + overrides → skills
- scripts/install.sh symlinks skills into ~/.claude/skills (--plugin mode available)
- skills/ committed, browsable, installable without Node
- .github/workflows/ auto-regenerates skills/ when playbooks/overrides change
Initial curated leaves: ollama, open-webui, vllm, connect-to-your-spark.
Remaining 37 leaves use generator fallback (title + tagline + summary
extracted from README) and can be curated incrementally via overrides/.
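The fallback extraction could look something like the sketch below: take the first `# ` heading as the title, the next non-empty line as the tagline, and the one after that as the summary. The function name and exact heuristics are assumptions, not the generator's actual code:

```javascript
// Hypothetical sketch of the generator's README fallback; the real
// heuristics in scripts/generate.mjs may differ.
function extractFallback(readme) {
  const lines = readme.split("\n").map((l) => l.trim());
  const titleIdx = lines.findIndex((l) => l.startsWith("# "));
  const title = titleIdx >= 0 ? lines[titleIdx].slice(2) : "";
  // First two non-empty lines after the title: tagline, then summary.
  const rest = lines.slice(titleIdx + 1).filter((l) => l.length > 0);
  return { title, tagline: rest[0] ?? "", summary: rest[1] ?? "" };
}
```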
| name | description |
|---|---|
| dgx-spark-vibe-coding | Use DGX Spark as a local or remote Vibe Coding assistant with Ollama and Continue — on NVIDIA DGX Spark. Use when setting up vibe-coding on Spark hardware. |
Vibe Coding in VS Code
Use DGX Spark as a local or remote Vibe Coding assistant with Ollama and Continue
This playbook walks you through setting up DGX Spark as a Vibe Coding assistant — locally or as a remote coding companion for VSCode with Continue.dev.
This guide uses Ollama with GPT-OSS 120B for a straightforward coding-assistant deployment to VSCode. It also includes advanced instructions for exposing the Ollama-served assistant over your local network, so other machines can use the DGX Spark remotely. The guide assumes a fresh OS installation; if your OS is not freshly installed and you run into issues, see the troubleshooting tab.
What You'll Accomplish
You'll have a fully configured DGX Spark system capable of:
- Running local code assistance through Ollama.
- Serving models remotely for Continue and VSCode integration.
- Hosting large LLMs like GPT-OSS 120B using unified memory.
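On the VSCode side, the remote-serving outcome boils down to a Continue config entry pointing at the Spark's Ollama endpoint. A sketch of such a config fragment — the hostname, model tag, and title are assumptions, and Continue's config schema changes between versions, so check its documentation:

```json
{
  "models": [
    {
      "title": "DGX Spark GPT-OSS 120B",
      "provider": "ollama",
      "model": "gpt-oss:120b",
      "apiBase": "http://spark.local:11434"
    }
  ]
}
```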
Prerequisites
- DGX Spark (128GB unified memory recommended)
Full playbook: /Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/vibe-coding/README.md