From 4d0d20d39f0e2b831b0ad143500629645adb3dd0 Mon Sep 17 00:00:00 2001
From: "github-actions[bot]" <41898282+github-actions[bot]@users.noreply.github.com>
Date: Sun, 19 Apr 2026 09:25:00 +0000
Subject: [PATCH] chore: regenerate skills/ from upstream playbooks [skip ci]

---
 skills/dgx-spark-comfy-ui/SKILL.md | 2 +-
 skills/dgx-spark-connect-three-sparks/SKILL.md | 2 +-
 skills/dgx-spark-connect-to-your-spark/SKILL.md | 2 +-
 skills/dgx-spark-connect-two-sparks/SKILL.md | 2 +-
 skills/dgx-spark-cuda-x-data-science/SKILL.md | 2 +-
 skills/dgx-spark-dgx-dashboard/SKILL.md | 2 +-
 skills/dgx-spark-flux-finetuning/SKILL.md | 2 +-
 skills/dgx-spark-isaac/SKILL.md | 2 +-
 skills/dgx-spark-jax/SKILL.md | 2 +-
 skills/dgx-spark-live-vlm-webui/SKILL.md | 2 +-
 skills/dgx-spark-llama-cpp/SKILL.md | 2 +-
 skills/dgx-spark-llama-factory/SKILL.md | 2 +-
 skills/dgx-spark-lm-studio/SKILL.md | 2 +-
 skills/dgx-spark-multi-agent-chatbot/SKILL.md | 2 +-
 skills/dgx-spark-multi-modal-inference/SKILL.md | 2 +-
 skills/dgx-spark-multi-sparks-through-switch/SKILL.md | 2 +-
 skills/dgx-spark-nccl/SKILL.md | 2 +-
 skills/dgx-spark-nemo-fine-tune/SKILL.md | 2 +-
 skills/dgx-spark-nemoclaw/SKILL.md | 2 +-
 skills/dgx-spark-nemotron/SKILL.md | 2 +-
 skills/dgx-spark-nim-llm/SKILL.md | 2 +-
 skills/dgx-spark-nvfp4-quantization/SKILL.md | 2 +-
 skills/dgx-spark-ollama/SKILL.md | 2 +-
 skills/dgx-spark-open-webui/SKILL.md | 2 +-
 skills/dgx-spark-openclaw/SKILL.md | 2 +-
 skills/dgx-spark-openshell/SKILL.md | 2 +-
 skills/dgx-spark-portfolio-optimization/SKILL.md | 2 +-
 skills/dgx-spark-pytorch-fine-tune/SKILL.md | 2 +-
 skills/dgx-spark-rag-ai-workbench/SKILL.md | 2 +-
 skills/dgx-spark-sglang/SKILL.md | 2 +-
 skills/dgx-spark-single-cell/SKILL.md | 2 +-
 skills/dgx-spark-spark-reachy-photo-booth/SKILL.md | 2 +-
 skills/dgx-spark-speculative-decoding/SKILL.md | 2 +-
 skills/dgx-spark-tailscale/SKILL.md | 2 +-
 skills/dgx-spark-trt-llm/SKILL.md | 2 +-
 skills/dgx-spark-txt2kg/SKILL.md | 2 +-
 skills/dgx-spark-unsloth/SKILL.md | 2 +-
 skills/dgx-spark-vibe-coding/SKILL.md | 2 +-
 skills/dgx-spark-vllm/SKILL.md | 2 +-
 skills/dgx-spark-vscode/SKILL.md | 2 +-
 skills/dgx-spark-vss/SKILL.md | 2 +-
 41 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/skills/dgx-spark-comfy-ui/SKILL.md b/skills/dgx-spark-comfy-ui/SKILL.md
index da33803..ce7aacc 100644
--- a/skills/dgx-spark-comfy-ui/SKILL.md
+++ b/skills/dgx-spark-comfy-ui/SKILL.md
@@ -16,5 +16,5 @@ Workflows are saved as JSON files, so you can version them for future work, coll
 
 **Outcome**: You'll install and configure ComfyUI on your NVIDIA DGX Spark device so you can use the unified memory to work with large models.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/comfy-ui/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/comfy-ui/README.md`
 
diff --git a/skills/dgx-spark-connect-three-sparks/SKILL.md b/skills/dgx-spark-connect-three-sparks/SKILL.md
index 2738f99..ba892a5 100644
--- a/skills/dgx-spark-connect-three-sparks/SKILL.md
+++ b/skills/dgx-spark-connect-three-sparks/SKILL.md
@@ -16,5 +16,5 @@ DGX Spark nodes by establishing network connectivity and configuring SSH authent
 interfaces for cluster communication, and establish passwordless SSH between nodes
 to create a functional distributed computing environment.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/connect-three-sparks/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/connect-three-sparks/README.md`
 
diff --git a/skills/dgx-spark-connect-to-your-spark/SKILL.md b/skills/dgx-spark-connect-to-your-spark/SKILL.md
index 3fd7b37..992d739 100644
--- a/skills/dgx-spark-connect-to-your-spark/SKILL.md
+++ b/skills/dgx-spark-connect-to-your-spark/SKILL.md
@@ -22,7 +22,7 @@ integrated app launching, while manual SSH gives you direct command-line control
 forwarding capabilities.
 
 Both approaches enable you to run terminal commands, access web applications, and manage your DGX Spark remotely from your laptop.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/connect-to-your-spark/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/connect-to-your-spark/README.md`
 
 ## When to use this skill
diff --git a/skills/dgx-spark-connect-two-sparks/SKILL.md b/skills/dgx-spark-connect-two-sparks/SKILL.md
index cf50f29..b110aaa 100644
--- a/skills/dgx-spark-connect-two-sparks/SKILL.md
+++ b/skills/dgx-spark-connect-two-sparks/SKILL.md
@@ -16,5 +16,5 @@ by establishing network connectivity and configuring SSH authentication.
 interfaces for cluster communication, and establish passwordless SSH between nodes
 to create a functional distributed computing environment.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/connect-two-sparks/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/connect-two-sparks/README.md`
 
diff --git a/skills/dgx-spark-cuda-x-data-science/SKILL.md b/skills/dgx-spark-cuda-x-data-science/SKILL.md
index 14771bc..f14d7ca 100644
--- a/skills/dgx-spark-cuda-x-data-science/SKILL.md
+++ b/skills/dgx-spark-cuda-x-data-science/SKILL.md
@@ -17,5 +17,5 @@ CUDA-X Data Science (formally RAPIDS) is an open-source library collection that
 
 **Outcome**: You will accelerate popular machine learning algorithms and data analytics operations GPU. You will understand how to accelerate popular Python tools, and the value of running data science workflows on your DGX Spark.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/cuda-x-data-science/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/cuda-x-data-science/README.md`
 
diff --git a/skills/dgx-spark-dgx-dashboard/SKILL.md b/skills/dgx-spark-dgx-dashboard/SKILL.md
index cdcb4ad..a4dfa49 100644
--- a/skills/dgx-spark-dgx-dashboard/SKILL.md
+++ b/skills/dgx-spark-dgx-dashboard/SKILL.md
@@ -12,5 +12,5 @@ The DGX Dashboard is a web application that runs locally on DGX Spark devices, p
 
 **Outcome**: You will learn how to access and use the DGX Dashboard on your DGX Spark device. By the end of this walkthrough, you will be able to launch JupyterLab instances with pre-configured Python environments, monitor GPU performance, manage system updates, and run a sample AI workload using Stable Diffusion. You'll understand multiple access methods including desktop shortcuts, NVIDIA Sync, and manual SSH tunneling.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/dgx-dashboard/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/dgx-dashboard/README.md`
 
diff --git a/skills/dgx-spark-flux-finetuning/SKILL.md b/skills/dgx-spark-flux-finetuning/SKILL.md
index c63f057..cacea61 100644
--- a/skills/dgx-spark-flux-finetuning/SKILL.md
+++ b/skills/dgx-spark-flux-finetuning/SKILL.md
@@ -24,5 +24,5 @@ The setup includes:
 Duration:
 * 30-45 minutes for initial setup model download time
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/flux-finetuning/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/flux-finetuning/README.md`
 
diff --git a/skills/dgx-spark-isaac/SKILL.md b/skills/dgx-spark-isaac/SKILL.md
index 6aa76b2..63bdb2d 100644
--- a/skills/dgx-spark-isaac/SKILL.md
+++ b/skills/dgx-spark-isaac/SKILL.md
@@ -14,5 +14,5 @@ Isaac Sim uses GPU-accelerated physics simulation to enable fast, realistic robo
 
 **Outcome**: You'll build Isaac Sim from source on your NVIDIA DGX Spark device and set up Isaac Lab for reinforcement learning experiments. This includes compiling the Isaac Sim engine, configuring the development environment, and running a sample RL training task to verify the installation.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/isaac/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/isaac/README.md`
 
diff --git a/skills/dgx-spark-jax/SKILL.md b/skills/dgx-spark-jax/SKILL.md
index 795c666..63e211f 100644
--- a/skills/dgx-spark-jax/SKILL.md
+++ b/skills/dgx-spark-jax/SKILL.md
@@ -21,5 +21,5 @@ JAX lets you write **NumPy-style Python code** and run it fast on GPUs without w
 high-performance machine learning prototyping using familiar NumPy-like abstractions,
 complete with GPU acceleration and performance optimization capabilities.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/jax/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/jax/README.md`
 
diff --git a/skills/dgx-spark-live-vlm-webui/SKILL.md b/skills/dgx-spark-live-vlm-webui/SKILL.md
index 252bbff..88d9bf8 100644
--- a/skills/dgx-spark-live-vlm-webui/SKILL.md
+++ b/skills/dgx-spark-live-vlm-webui/SKILL.md
@@ -20,5 +20,5 @@ The interface provides WebRTC-based video streaming, integrated GPU monitoring,
 - Customize prompts for various use cases (object detection, scene description, OCR, safety monitoring)
 - Access the interface from any device on your network with a web browser
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/live-vlm-webui/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/live-vlm-webui/README.md`
 
diff --git a/skills/dgx-spark-llama-cpp/SKILL.md b/skills/dgx-spark-llama-cpp/SKILL.md
index 0698f40..a6e5cd3 100644
--- a/skills/dgx-spark-llama-cpp/SKILL.md
+++ b/skills/dgx-spark-llama-cpp/SKILL.md
@@ -18,5 +18,5 @@ This playbook walks through that stack end to end. As the model example, it uses
 - An OpenAI-compatible `/v1/chat/completions` endpoint for tools and apps
 - A concrete validation that **Gemma 4 31B IT** runs on this stack on DGX Spark
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/llama-cpp/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/llama-cpp/README.md`
 
diff --git a/skills/dgx-spark-llama-factory/SKILL.md b/skills/dgx-spark-llama-factory/SKILL.md
index ab3b9cb..36e7a9f 100644
--- a/skills/dgx-spark-llama-factory/SKILL.md
+++ b/skills/dgx-spark-llama-factory/SKILL.md
@@ -18,5 +18,5 @@ large language models using LLaMA Factory CLI on your NVIDIA Spark device.
 language models using LoRA, QLoRA, and full fine-tuning methods. This enables
 efficient model adaptation for specialized domains while leveraging hardware-specific optimizations.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/llama-factory/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/llama-factory/README.md`
 
diff --git a/skills/dgx-spark-lm-studio/SKILL.md b/skills/dgx-spark-lm-studio/SKILL.md
index 689d221..d5084d4 100644
--- a/skills/dgx-spark-lm-studio/SKILL.md
+++ b/skills/dgx-spark-lm-studio/SKILL.md
@@ -21,5 +21,5 @@ This playbook shows you how to deploy LM Studio on an NVIDIA DGX Spark device to
 - Interact with models from your laptop using the LM Studio SDK
 - Optionally use **LM Link** to connect Spark and laptop over an encrypted link so remote models appear as local (no same-network or bind setup required)
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/lm-studio/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/lm-studio/README.md`
 
diff --git a/skills/dgx-spark-multi-agent-chatbot/SKILL.md b/skills/dgx-spark-multi-agent-chatbot/SKILL.md
index 728fe78..314f1bf 100644
--- a/skills/dgx-spark-multi-agent-chatbot/SKILL.md
+++ b/skills/dgx-spark-multi-agent-chatbot/SKILL.md
@@ -23,5 +23,5 @@ The setup includes:
 - Multi-agent system orchestration using a supervisor agent powered by gpt-oss-120B
 - MCP (Model Context Protocol) servers as tools for the supervisor agent
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/multi-agent-chatbot/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/multi-agent-chatbot/README.md`
 
diff --git a/skills/dgx-spark-multi-modal-inference/SKILL.md b/skills/dgx-spark-multi-modal-inference/SKILL.md
index 578f5f0..5b44d23 100644
--- a/skills/dgx-spark-multi-modal-inference/SKILL.md
+++ b/skills/dgx-spark-multi-modal-inference/SKILL.md
@@ -19,5 +19,5 @@ FP8, FP4).
 
 Duration: 45-90 minutes depending on model downloads and optimization steps
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/multi-modal-inference/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/multi-modal-inference/README.md`
 
diff --git a/skills/dgx-spark-multi-sparks-through-switch/SKILL.md b/skills/dgx-spark-multi-sparks-through-switch/SKILL.md
index 82f7bc5..7da9d0c 100644
--- a/skills/dgx-spark-multi-sparks-through-switch/SKILL.md
+++ b/skills/dgx-spark-multi-sparks-through-switch/SKILL.md
@@ -10,5 +10,5 @@ description: Set up a cluster of DGX Spark devices that are connected through Sw
 
 Configure four DGX Spark systems for high-speed inter-node communication using 200Gbps QSFP connections through a QSFP switch. This setup enables distributed workloads across multiple DGX Spark nodes by establishing network connectivity and configuring SSH authentication.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/multi-sparks-through-switch/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/multi-sparks-through-switch/README.md`
 
diff --git a/skills/dgx-spark-nccl/SKILL.md b/skills/dgx-spark-nccl/SKILL.md
index a35dd27..f322a81 100644
--- a/skills/dgx-spark-nccl/SKILL.md
+++ b/skills/dgx-spark-nccl/SKILL.md
@@ -19,5 +19,5 @@ and proper GPU topology detection.
 
 Duration: 30 minutes for setup and validation · Risk: Medium - involves network configuration changes
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nccl/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nccl/README.md`
 
diff --git a/skills/dgx-spark-nemo-fine-tune/SKILL.md b/skills/dgx-spark-nemo-fine-tune/SKILL.md
index 0b7770d..8789fc6 100644
--- a/skills/dgx-spark-nemo-fine-tune/SKILL.md
+++ b/skills/dgx-spark-nemo-fine-tune/SKILL.md
@@ -12,5 +12,5 @@ This playbook guides you through setting up and using NVIDIA NeMo AutoModel for
 
 **Outcome**: You'll establish a complete fine-tuning environment for large language models (1-70B parameters) and vision-language models using NeMo AutoModel on your NVIDIA Spark device. By the end, you'll have a working installation that supports parameter-efficient fine-tuning (PEFT), supervised fine-tuning (SFT), and distributed training capabilities with FP8 precision optimizations, all while maintaining compatibility with the Hugging Face ecosystem.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nemo-fine-tune/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nemo-fine-tune/README.md`
 
diff --git a/skills/dgx-spark-nemoclaw/SKILL.md b/skills/dgx-spark-nemoclaw/SKILL.md
index 1880e14..e478d01 100644
--- a/skills/dgx-spark-nemoclaw/SKILL.md
+++ b/skills/dgx-spark-nemoclaw/SKILL.md
@@ -26,5 +26,5 @@ By the end of this playbook you will have a working AI agent inside an OpenShell
 
 ### Notice and disclaimers
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nemoclaw/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nemoclaw/README.md`
 
diff --git a/skills/dgx-spark-nemotron/SKILL.md b/skills/dgx-spark-nemotron/SKILL.md
index ad18941..e930c1a 100644
--- a/skills/dgx-spark-nemotron/SKILL.md
+++ b/skills/dgx-spark-nemotron/SKILL.md
@@ -18,5 +18,5 @@ This playbook demonstrates how to run Nemotron-3-Nano using llama.cpp, which com
 - OpenAI-compatible API endpoint for easy integration with existing tools
 - Built-in reasoning and tool calling capabilities
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nemotron/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nemotron/README.md`
 
diff --git a/skills/dgx-spark-nim-llm/SKILL.md b/skills/dgx-spark-nim-llm/SKILL.md
index 913b187..0ee3e00 100644
--- a/skills/dgx-spark-nim-llm/SKILL.md
+++ b/skills/dgx-spark-nim-llm/SKILL.md
@@ -25,5 +25,5 @@ You'll launch a NIM container on your DGX Spark device to expose a GPU-accelerat
 - Basic familiarity with REST APIs and curl commands
 - Understanding of NVIDIA GPU environments and CUDA
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nim-llm/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nim-llm/README.md`
 
diff --git a/skills/dgx-spark-nvfp4-quantization/SKILL.md b/skills/dgx-spark-nvfp4-quantization/SKILL.md
index 7219f52..9ac327a 100644
--- a/skills/dgx-spark-nvfp4-quantization/SKILL.md
+++ b/skills/dgx-spark-nvfp4-quantization/SKILL.md
@@ -23,5 +23,5 @@ inside a TensorRT-LLM container, producing an NVFP4 quantized model for deployme
 
 The examples use NVIDIA FP4 quantized models which help reduce model size by approximately 2x by reducing the precision of model layers. This quantization approach aims to preserve accuracy while providing significant throughput improvements. However, it's important to note that quantization can potentially impact model accuracy - we recommend running evaluations to verify if the quantized model maintains acceptable performance for your use case.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/nvfp4-quantization/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/nvfp4-quantization/README.md`
 
diff --git a/skills/dgx-spark-ollama/SKILL.md b/skills/dgx-spark-ollama/SKILL.md
index 991283f..b16c8fe 100644
--- a/skills/dgx-spark-ollama/SKILL.md
+++ b/skills/dgx-spark-ollama/SKILL.md
@@ -21,7 +21,7 @@ the powerful GPU capabilities of your Spark device without complex network confi
 
 Duration: 10-15 minutes for initial setup, 2-3 minutes for model download (varies by model size) · Risk: Low - No system-level changes, easily reversible by stopping the custom app
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/ollama/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/ollama/README.md`
 
 ## When to use this skill
 
diff --git a/skills/dgx-spark-open-webui/SKILL.md b/skills/dgx-spark-open-webui/SKILL.md
index ec8b6ae..40c993e 100644
--- a/skills/dgx-spark-open-webui/SKILL.md
+++ b/skills/dgx-spark-open-webui/SKILL.md
@@ -15,7 +15,7 @@ This playbook shows you how to deploy Open WebUI with an integrated Ollama serve
 
 Duration: 15-20 minutes for initial setup, plus model download time (varies by model size)
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/open-webui/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/open-webui/README.md`
 
 ## When to use this skill
 
diff --git a/skills/dgx-spark-openclaw/SKILL.md b/skills/dgx-spark-openclaw/SKILL.md
index 2f8171d..2410ba2 100644
--- a/skills/dgx-spark-openclaw/SKILL.md
+++ b/skills/dgx-spark-openclaw/SKILL.md
@@ -16,5 +16,5 @@ Running OpenClaw and its LLMs **fully on your DGX Spark** keeps your data privat
 
 Duration: About 30 minutes for install and first-time model setup; model download time depends on size and network (gpt-oss-120b is ~65GB and may take longer on slower connections). · Risk: **Medium to High**—the agent has access to whatever files, tools, and channels you configure. Risk increases significantly if you enable terminal/command execution skills or connect external accounts. Without proper isolation, this setup could expose sensitive data or allow code execution. **Always follow the security measures above.**
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/openclaw/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/openclaw/README.md`
 
diff --git a/skills/dgx-spark-openshell/SKILL.md b/skills/dgx-spark-openshell/SKILL.md
index 2bd09b6..e98e49e 100644
--- a/skills/dgx-spark-openshell/SKILL.md
+++ b/skills/dgx-spark-openshell/SKILL.md
@@ -19,5 +19,5 @@ By combining OpenClaw with OpenShell on DGX Spark, you get the full power of a l
 **Outcome**: You will install the OpenShell CLI (`openshell`), deploy a gateway on your DGX Spark, and launch OpenClaw inside a sandboxed environment using the pre-built OpenClaw community sandbox. The sandbox enforces filesystem, network, and process isolation by default.
 You will also configure local inference routing so OpenClaw uses a model running on your Spark without needing external API keys.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/openshell/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/openshell/README.md`
 
diff --git a/skills/dgx-spark-portfolio-optimization/SKILL.md b/skills/dgx-spark-portfolio-optimization/SKILL.md
index d6ba4a9..7afa8f7 100644
--- a/skills/dgx-spark-portfolio-optimization/SKILL.md
+++ b/skills/dgx-spark-portfolio-optimization/SKILL.md
@@ -19,5 +19,5 @@ Portfolio Optimization (PO) involves solving high-dimensional, non-linear numeri
 - **Real-World Constraint Management:** Implementing constraints including concentration limits, leverage constraints, turnover limits, and cardinality constraints.
 - **Comprehensive Backtesting:** Evaluating portfolio performance with specific tools for testing rebalancing strategies.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/portfolio-optimization/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/portfolio-optimization/README.md`
 
diff --git a/skills/dgx-spark-pytorch-fine-tune/SKILL.md b/skills/dgx-spark-pytorch-fine-tune/SKILL.md
index e2045e2..247169d 100644
--- a/skills/dgx-spark-pytorch-fine-tune/SKILL.md
+++ b/skills/dgx-spark-pytorch-fine-tune/SKILL.md
@@ -13,5 +13,5 @@ This playbook guides you through setting up and using Pytorch for fine-tuning la
 
 **Outcome**: You'll establish a complete fine-tuning environment for large language models (1-70B parameters) on your NVIDIA Spark device. By the end, you'll have a working installation that supports parameter-efficient fine-tuning (PEFT) and supervised fine-tuning (SFT).
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/pytorch-fine-tune/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/pytorch-fine-tune/README.md`
 
diff --git a/skills/dgx-spark-rag-ai-workbench/SKILL.md b/skills/dgx-spark-rag-ai-workbench/SKILL.md
index ceb56f4..4047813 100644
--- a/skills/dgx-spark-rag-ai-workbench/SKILL.md
+++ b/skills/dgx-spark-rag-ai-workbench/SKILL.md
@@ -20,5 +20,5 @@ advanced RAG capabilities including query routing, response evaluation, and iter
 giving you hands-on experience with both AI Workbench's development environment
 and sophisticated RAG architectures.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/rag-ai-workbench/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/rag-ai-workbench/README.md`
 
diff --git a/skills/dgx-spark-sglang/SKILL.md b/skills/dgx-spark-sglang/SKILL.md
index 9b95043..4103528 100644
--- a/skills/dgx-spark-sglang/SKILL.md
+++ b/skills/dgx-spark-sglang/SKILL.md
@@ -18,5 +18,5 @@ pre-installed.
 enabling high-performance LLM serving with support for text generation, chat
 completion, and vision-language tasks using models like DeepSeek-V2-Lite.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/sglang/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/sglang/README.md`
 
diff --git a/skills/dgx-spark-single-cell/SKILL.md b/skills/dgx-spark-single-cell/SKILL.md
index ab9bbed..4257837 100644
--- a/skills/dgx-spark-single-cell/SKILL.md
+++ b/skills/dgx-spark-single-cell/SKILL.md
@@ -20,5 +20,5 @@ This playbook shows an end-to-end GPU-powered workflow for scRNA-seq using [RAPI
 6. Batch Correction and analysis using Harmony, k-nearest neighbors, UMAP, and tSNE
 7. Explore the biological information from the data with differential expression analysis and trajectory analysis
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/single-cell/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/single-cell/README.md`
 
diff --git a/skills/dgx-spark-spark-reachy-photo-booth/SKILL.md b/skills/dgx-spark-spark-reachy-photo-booth/SKILL.md
index a55ff73..fe77163 100644
--- a/skills/dgx-spark-spark-reachy-photo-booth/SKILL.md
+++ b/skills/dgx-spark-spark-reachy-photo-booth/SKILL.md
@@ -19,5 +19,5 @@ Spark & Reachy Photo Booth is an interactive and event-driven photo booth demo t
 
 **Outcome**: You'll deploy a complete photo booth system on DGX Spark running multiple inference models locally — LLM, image generation, speech recognition, speech generation, and computer vision — all without cloud dependencies. The Reachy robot interacts with users through natural conversation, captures photos, and generates custom images based on prompts, demonstrating real-time multimodal AI processing on edge hardware.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/spark-reachy-photo-booth/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/spark-reachy-photo-booth/README.md`
 
diff --git a/skills/dgx-spark-speculative-decoding/SKILL.md b/skills/dgx-spark-speculative-decoding/SKILL.md
index 4a77b20..4743eff 100644
--- a/skills/dgx-spark-speculative-decoding/SKILL.md
+++ b/skills/dgx-spark-speculative-decoding/SKILL.md
@@ -14,5 +14,5 @@ This way, the big model doesn't need to predict every token step-by-step, reduci
 
 **Outcome**: You'll explore speculative decoding using TensorRT-LLM on NVIDIA Spark using two approaches: EAGLE-3 and Draft-Target. These examples demonstrate how to accelerate large language model inference while maintaining output quality.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/speculative-decoding/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/speculative-decoding/README.md`
 
diff --git a/skills/dgx-spark-tailscale/SKILL.md b/skills/dgx-spark-tailscale/SKILL.md
index 4838c5d..4f59a52 100644
--- a/skills/dgx-spark-tailscale/SKILL.md
+++ b/skills/dgx-spark-tailscale/SKILL.md
@@ -22,5 +22,5 @@ all traffic automatically encrypted and NAT traversal handled transparently.
 
 Duration: 15-30 minutes for initial setup, 5 minutes per additional device
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/tailscale/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/tailscale/README.md`
 
diff --git a/skills/dgx-spark-trt-llm/SKILL.md b/skills/dgx-spark-trt-llm/SKILL.md
index bbf52d0..c015abd 100644
--- a/skills/dgx-spark-trt-llm/SKILL.md
+++ b/skills/dgx-spark-trt-llm/SKILL.md
@@ -19,5 +19,5 @@ inference through kernel-level optimizations, efficient memory layouts, and adva
 
 Duration: 45-60 minutes for setup and API server deployment · Risk: Medium - container pulls and model downloads may fail due to network issues
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/trt-llm/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/trt-llm/README.md`
 
diff --git a/skills/dgx-spark-txt2kg/SKILL.md b/skills/dgx-spark-txt2kg/SKILL.md
index 39a3f8a..bc5c77d 100644
--- a/skills/dgx-spark-txt2kg/SKILL.md
+++ b/skills/dgx-spark-txt2kg/SKILL.md
@@ -26,5 +26,5 @@ The setup includes:
 Duration:
 - 2-3 minutes for initial setup and container deployment
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/txt2kg/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/txt2kg/README.md`
 
diff --git a/skills/dgx-spark-unsloth/SKILL.md b/skills/dgx-spark-unsloth/SKILL.md
index eb9bce9..3a34c3a 100644
--- a/skills/dgx-spark-unsloth/SKILL.md
+++ b/skills/dgx-spark-unsloth/SKILL.md
@@ -20,5 +20,5 @@ parameter-efficient fine-tuning methods like LoRA and QLoRA.
 
 Duration: 30-60 minutes for initial setup and test run
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/unsloth/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/unsloth/README.md`
 
diff --git a/skills/dgx-spark-vibe-coding/SKILL.md b/skills/dgx-spark-vibe-coding/SKILL.md
index a234770..c9e8c38 100644
--- a/skills/dgx-spark-vibe-coding/SKILL.md
+++ b/skills/dgx-spark-vibe-coding/SKILL.md
@@ -26,5 +26,5 @@ You'll have a fully configured DGX Spark system capable of:
 
 - DGX Spark (128GB unified memory recommended)
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/vibe-coding/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/vibe-coding/README.md`
 
diff --git a/skills/dgx-spark-vllm/SKILL.md b/skills/dgx-spark-vllm/SKILL.md
index b0a8f69..2fbb96f 100644
--- a/skills/dgx-spark-vllm/SKILL.md
+++ b/skills/dgx-spark-vllm/SKILL.md
@@ -18,7 +18,7 @@ vLLM is an inference engine designed to run large language models efficiently. T
 
 either using a pre-built Docker container or building from source with custom LLVM/Triton support for ARM64.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/vllm/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/vllm/README.md`
 
 ## When to use this skill
diff --git a/skills/dgx-spark-vscode/SKILL.md b/skills/dgx-spark-vscode/SKILL.md
index 94dbf7c..3a2a3bb 100644
--- a/skills/dgx-spark-vscode/SKILL.md
+++ b/skills/dgx-spark-vscode/SKILL.md
@@ -16,5 +16,5 @@ This walkthrough will help you set up Visual Studio Code, a full-featured IDE wi
 
 **Outcome**: You will have VS Code set up for development on your DGX Spark device with access to the system's ARM64 architecture and GPU resources. This setup enables direct code development, debugging, and execution.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/vscode/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/vscode/README.md`
 
diff --git a/skills/dgx-spark-vss/SKILL.md b/skills/dgx-spark-vss/SKILL.md
index 34fc85c..7e1580d 100644
--- a/skills/dgx-spark-vss/SKILL.md
+++ b/skills/dgx-spark-vss/SKILL.md
@@ -12,5 +12,5 @@ Deploy NVIDIA's Video Search and Summarization (VSS) AI Blueprint to build intel
 
 **Outcome**: You will deploy NVIDIA's VSS AI Blueprint on NVIDIA Spark hardware with Blackwell architecture, choosing between two deployment scenarios: VSS Event Reviewer (completely local with VLM pipeline) or Standard VSS (hybrid deployment with remote LLM/embedding endpoints). This includes setting up Alert Bridge, VLM Pipeline, Alert Inspector UI, Video Storage Toolkit, and optional DeepStream CV pipeline for automated video analysis and event review.
 
-**Full playbook**: `/Users/jkneen/Documents/GitHub/dgx-spark-playbooks/nvidia/vss/README.md`
+**Full playbook**: `/home/runner/work/dgx-spark-playbooks/dgx-spark-playbooks/nvidia/vss/README.md`