diff --git a/nvidia/comfy-ui/README.md b/nvidia/comfy-ui/README.md index 68c5cd0..1b6fde9 100644 --- a/nvidia/comfy-ui/README.md +++ b/nvidia/comfy-ui/README.md @@ -14,13 +14,11 @@ ## Basic idea -ComfyUI is an open-source web server application for AI image generation using diffusion-based models like SDXL, Flux and others. -It has a browser-based UI that lets you create, edit and run image generation and editing workflows with multiple steps. -Generation and editing steps (e.g. loading a model, adding text or sampling) are configurable in the UI as a node, and you connect nodes with wires to form a workflow. +ComfyUI is an open-source web server application for AI image generation using diffusion-based models like SDXL, Flux, and others. It has a browser-based UI that lets you create, edit, and run image generation and editing workflows with multiple steps. These generation and editing steps (e.g., loading a model, adding text or sampling) are configurable in the UI as a node, and you connect nodes with wires to form a workflow. -ComfyUI uses the host's GPU for inference, so you can install it on your Spark and do all of your image generation and editing directly on device. +ComfyUI uses the host's GPU for inference, so you can install it on your DGX Spark and do all of your image generation and editing directly on your device. -Workflows are saved as JSON files, so you can version them for future work, collaboration and reproducibility. +Workflows are saved as JSON files, so you can version them for future work, collaboration, and reproducibility. 
## What you'll accomplish @@ -37,7 +35,7 @@ You'll install and configure ComfyUI on your NVIDIA DGX Spark device so you can ## Prerequisites **Hardware Requirements:** -- NVIDIA Spark device with Blackwell architecture +- NVIDIA Grace Blackwell GB10 Superchip System - Minimum 8GB GPU memory for Stable Diffusion models - At least 20GB available storage space @@ -71,7 +69,7 @@ All required assets can be found [in the ComfyUI repository on GitHub](https://g ## Step 1. Verify system prerequisites -Check that your NVIDIA Spark device meets the requirements before proceeding with installation. +Check that your NVIDIA DGX Spark device meets the requirements before proceeding with installation. ```bash python3 --version @@ -80,7 +78,7 @@ nvcc --version nvidia-smi ``` -Expected output should show Python 3.8+, pip available, CUDA toolkit and GPU detection. +Expected output should show Python 3.8+, pip available, CUDA toolkit, and GPU detection. ## Step 2. Create Python virtual environment @@ -146,7 +144,7 @@ The server will bind to all network interfaces on port 8188, making it accessibl ## Step 8. Validate installation -Check that ComfyUI is running correctly and accessible via web browser. +Check that ComfyUI is running correctly and accessible via your web browser. ```bash curl -I http://localhost:8188 @@ -198,3 +196,6 @@ The image generation should complete within 30-60 seconds depending on your hard ```bash sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches' ``` + + +For latest known issues, please review the [DGX Spark User Guide](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html). 
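The Step 1 checks in the ComfyUI guide can be combined into a single gate script. A minimal sketch, assuming GNU coreutils (`sort -V`) and `python3` on PATH; the 3.8 minimum comes from the guide, everything else here is illustrative:

```bash
#!/bin/sh
## Step 1 prerequisite gate for ComfyUI, sketched as one script.
min=3.8
ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
lowest=$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
  echo "python $ver: OK (>= $min)"
else
  echo "python $ver: too old (need >= $min)"
fi
## The CUDA tools only exist on the Spark itself, so report rather than abort.
for tool in pip3 nvcc nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

The `sort -V` trick avoids fragile string comparison of version numbers ("3.10" sorts after "3.8" numerically, not lexically).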
diff --git a/nvidia/dgx-dashboard/README.md b/nvidia/dgx-dashboard/README.md index f45db66..456ae79 100644 --- a/nvidia/dgx-dashboard/README.md +++ b/nvidia/dgx-dashboard/README.md @@ -14,11 +14,11 @@ ## Basic idea -The DGX Dashboard is a web application that runs locally on DGX Spark devices, providing a graphical interface for system updates, resource monitoring and an integrated JupyterLab environment. Users can access the dashboard locally from the app launcher or remotely through NVIDIA Sync or SSH tunneling. The dashboard is the easiest way to update system packages and firmware when working remotely. +The DGX Dashboard is a web application that runs locally on DGX Spark devices, providing a graphical interface for system updates, resource monitoring, and an integrated JupyterLab environment. Users can access the dashboard locally from the app launcher or remotely through NVIDIA Sync or SSH tunneling. The dashboard is the easiest way to update system packages and firmware when working remotely. ## What you'll accomplish -You will learn how to access and use the DGX Dashboard on your DGX Spark device. By the end of this walkthrough, you will be able to launch JupyterLab instances with pre-configured Python environments, monitor GPU performance, manage system updates and run a sample AI workload using Stable Diffusion. You'll understand multiple access methods including desktop shortcuts, NVIDIA Sync and manual SSH tunneling. +You will learn how to access and use the DGX Dashboard on your DGX Spark device. By the end of this walkthrough, you will be able to launch JupyterLab instances with pre-configured Python environments, monitor GPU performance, manage system updates, and run a sample AI workload using Stable Diffusion. You'll understand multiple access methods including desktop shortcuts, NVIDIA Sync, and manual SSH tunneling. ## What to know before starting @@ -27,7 +27,11 @@ You will learn how to access and use the DGX Dashboard on your DGX Spark device. 
## Prerequisites -- DGX Spark device with Ubuntu Desktop environment +**Hardware Requirements:** +- NVIDIA Grace Blackwell GB10 Superchip System + +**Software Requirements:** +- NVIDIA DGX OS - NVIDIA Sync installed (for remote access method) or SSH client configured ## Ancillary files @@ -40,6 +44,8 @@ You will learn how to access and use the DGX Dashboard on your DGX Spark device. * **Duration:** 15-30 minutes for complete walkthrough including sample AI workload * **Risk level:** Low - Web interface operations with minimal system impact * **Rollback:** Stop JupyterLab instances through dashboard interface; no permanent system changes made during normal usage. +* **Last Updated:** 11/21/2025 + * Minor copyedits ## Instructions @@ -49,9 +55,9 @@ Choose one of the following methods to access the DGX Dashboard web interface: **Option A: Desktop shortcut (local access)** -If you have physical or remote desktop access to the Spark device: +If you have local access to your DGX Spark device: -1. Log into the Ubuntu Desktop environment on your Spark device +1. Log into the Ubuntu Desktop environment on your DGX Spark device 2. Open the Ubuntu app launcher by clicking on the bottom left corner of the screen 3. Click on the DGX Dashboard shortcut in the app launcher 4. The dashboard will open in your default web browser at `http://localhost:11000` @@ -61,7 +67,7 @@ If you have physical or remote desktop access to the Spark device: If you have NVIDIA Sync installed on your local machine: 1. Click the NVIDIA Sync icon in your system tray -2. Select your Spark device from the device list +2. Select your DGX Spark device from the device list 3. Click "Connect" 4. Click "DGX Dashboard" to launch the dashboard 5. The dashboard will open in your default web browser at `http://localhost:11000` using an automatic SSH tunnel @@ -70,23 +76,23 @@ Don't have NVIDIA Sync? 
[Install it here](/spark/connect-to-your-spark/sync) **Option C: Manual SSH tunnels** -For manual remote access without NVIDIA Sync you must first manually configure an SSH tunnel. +For manual remote access without NVIDIA Sync you must first [manually configure an SSH tunnel](/spark/connect-to-your-spark/manual-ssh). You must open a tunnel for the Dashboard server (port 11000) and for JupyterLab if you want to access it remotely. Each user account will have a different assigned port number for JupyterLab. -1. Check your assigned JupyterLab port by SSH-ing into the Spark device and running the following command: +1. Check your assigned JupyterLab port by SSH-ing into your DGX Spark and running the following command: ```bash cat /opt/nvidia/dgx-dashboard-service/jupyterlab_ports.yaml ``` -2. Look for your username and note the assigned port number +2. Look for your username and note the assigned port number. 3. Create a new SSH tunnel including both ports: ```bash ssh -L 11000:localhost:11000 -L <port>:localhost:<port> <username>@<ip-address> ``` -Replace `<username>` with your Spark device username and `<ip-address>` with the device's IP address. +Replace `<username>` with your DGX Spark device username and `<ip-address>` with the device's IP address. Replace `<port>` with the port number from the YAML file. @@ -97,7 +103,7 @@ Open your web browser and navigate to `http://localhost:11000`. Once the dashboard loads in your browser: -1. Enter your Spark device system username in the username field +1. Enter your DGX Spark system username in the username field 2. Enter your system password in the password field 3. Click "Login" to access the dashboard interface @@ -109,8 +115,8 @@ Create and start a JupyterLab environment: 1. Click the "Start" button in the right panel 2. Monitor the status as it transitions through: Starting → Preparing → Running -3. Wait for the status to show "Running" (this may take several minutes on first launch) -4. 
Once "Running", if Jupyterlab does not automatically open in your browser (a pop-up was blocked), you can click the "Open In Browser" button +3. Wait for the status to show "Running" (this may take several minutes on the first launch) +4. Once "Running", if JupyterLab does not automatically open in your browser (a pop-up was blocked), you can click the "Open In Browser" button When starting, a default working directory (/home/<username>/jupyterlab) is created and a virtual environment is set up automatically. You can review the packages installed by looking at the `requirements.txt` file that is created in the working directory. @@ -204,11 +210,11 @@ From the Settings page, under the "Updates" tab: 3. Wait for the update to complete and your device to reboot > [!WARNING] -> System updates will upgrade packages, firmware if available, and trigger a reboot. Save your work before proceeding. +> System updates will upgrade packages, firmware (if available), and trigger a reboot. Save your work before proceeding. ## Step 7. Cleanup and rollback -To clean up resources and return system to original state: +To clean up resources and return your system to its original state: 1. Stop any running JupyterLab instances via dashboard 2. Delete the JupyterLab working directory @@ -233,3 +239,6 @@ Now that you have DGX Dashboard configured, you can: | JupyterLab won't start | Issue with current virtual environment | Change the working directory in the JupyterLab panel and start a new instance | | SSH tunnel connection refused | Incorrect IP or port | Verify Spark device IP and ensure SSH service is running | | GPU not visible in monitoring | Driver issues | Check GPU status with `nvidia-smi` | + + +For latest known issues, please review the [DGX Spark User Guide](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html). 
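The Option C port lookup in the dashboard guide can be scripted. A sketch, with a hypothetical sample of the ports file: the `user: port` layout below is an assumption, so check the real `/opt/nvidia/dgx-dashboard-service/jupyterlab_ports.yaml` on your device and adjust the `awk` pattern if it differs:

```bash
#!/bin/sh
## Build the two-port SSH tunnel command from the JupyterLab ports file.
## The sample file and the user "alice" are hypothetical; on a real DGX
## Spark, point PORTS_FILE at the dashboard service's YAML and use "$USER".
PORTS_FILE=$(mktemp)
cat > "$PORTS_FILE" <<'EOF'
alice: 11002
bob: 11003
EOF
TUNNEL_USER=alice
## Pull the port assigned to the user (assumes one "user: port" line each).
port=$(awk -F': *' -v u="$TUNNEL_USER" '$1 == u { print $2 }' "$PORTS_FILE")
echo "ssh -L 11000:localhost:11000 -L ${port}:localhost:${port} ${TUNNEL_USER}@<ip-address>"
rm -f "$PORTS_FILE"
```

For the sample data this prints `ssh -L 11000:localhost:11000 -L 11002:localhost:11002 alice@<ip-address>`, which is the shape of the tunnel command from Step 1, Option C.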
diff --git a/nvidia/rag-ai-workbench/README.md b/nvidia/rag-ai-workbench/README.md index f9ced93..b9f9200 100644 --- a/nvidia/rag-ai-workbench/README.md +++ b/nvidia/rag-ai-workbench/README.md @@ -37,7 +37,11 @@ architectures. ## Prerequisites -- DGX Spark system with NVIDIA AI Workbench installed or ready to install +**Hardware Requirements:** +- NVIDIA Grace Blackwell GB10 Superchip System + +**Software Requirements:** +- NVIDIA AI Workbench installed or ready to install - Free NVIDIA API key: Generate at [NGC API Keys](https://org.ngc.nvidia.com/setup/api-keys) - Free Tavily API key: Generate at [Tavily](https://tavily.com/) - Internet connection for cloning repositories and accessing APIs @@ -54,30 +58,28 @@ architectures. * **Estimated time:** 30-45 minutes (including AI Workbench installation if needed) * **Risk level:** Low - Uses pre-built containers and established APIs * **Rollback:** Simply delete the cloned project from AI Workbench to remove all components. No system changes are made outside the AI Workbench environment. +* **Last Updated:** 11/21/2025 + * Minor copyedits ## Instructions ## Step 1. Install NVIDIA AI Workbench -This step installs AI Workbench on your DGX Spark system and completes the initial setup wizard. +Install AI Workbench on your DGX Spark system and complete the initial setup wizard. -On your DGX Spark system, open the **NVIDIA AI Workbench** application and click **Begin Installation**. +On your DGX Spark, open the **NVIDIA AI Workbench** application and click "Begin Installation". 1. The installation wizard will prompt for authentication 2. Wait for the automated install to complete (several minutes) 3. Click "Let's Get Started" when installation finishes -**Troubleshooting installation issues** - -If you encounter the following error message, reboot your DGX system and then reopen NVIDIA AI Workbench: - -"An error occurred ... container tool failed to reach ready state. 
try again: docker is not running" +> [!NOTE] +> If you encounter the following error message, reboot your DGX Spark and then reopen NVIDIA AI Workbench: +> "An error occurred ... container tool failed to reach ready state. try again: docker is not running" ## Step 2. Verify API key requirements -This step ensures you have the required API keys before proceeding with the project setup. - -Verify you have both required API keys. Keep these keys safe! +Next, you should ensure you have both required API keys before proceeding with the project setup. Keep these keys safe! * Tavily API Key: https://tavily.com/ * NVIDIA API Key: https://org.ngc.nvidia.com/setup/api-keys @@ -87,21 +89,21 @@ Keep both keys available for the next step. ## Step 3. Clone the agentic RAG project -This step clones the pre-built agentic RAG project from GitHub into your AI Workbench environment. +You'll then clone the pre-built agentic RAG project from GitHub into your AI Workbench environment. -From the AI Workbench landing page, select the **Local** location if not done so already, then click **Clone Project** from the top right corner. +From the AI Workbench landing page, select the **Local** location, if not done so already, then click "Clone Project" from the top right corner. Paste this Git repository URL in the clone dialog: https://github.com/NVIDIA/workbench-example-agentic-rag -Click **Clone** to begin the clone and build process. +Click "Clone" to begin the clone and build process. ## Step 4. Configure project secrets -This step configures the API keys required for the agentic RAG application to function properly. +You can then configure the API keys required for the agentic RAG application to function properly. While the project builds, configure the API keys using the yellow warning banner that appears: -1. Click **Configure** in the yellow banner +1. Click "Configure" in the yellow banner 2. Enter your ``NVIDIA_API_KEY`` 3. Enter your ``TAVILY_API_KEY`` 4. 
Save the configuration @@ -110,7 +112,7 @@ Wait for the project build to complete before proceeding. ## Step 5. Launch the chat application -This step starts the web-based chat interface where you can interact with the agentic RAG system. +You can now start the web-based chat interface where you can interact with the agentic RAG system. Navigate to **Environment** > **Project Container** > **Apps** > **Chat** and start the web application. @@ -118,7 +120,7 @@ A browser window will open automatically and load with the Gradio chat interface ## Step 6. Test the basic functionality -This step verifies the agentic RAG system is working by submitting a sample query. +Verify the agentic RAG system is working by submitting a sample query. In the chat application, click on or type a sample query such as: `How do I add an integration in the CLI?` @@ -126,20 +128,18 @@ Wait for the agentic system to process and respond. The response, while general, ## Step 7. Validate project -This step confirms the complete setup is working correctly by testing the core features. +Confirm your setup is working correctly by testing the core features. Verify the following components are functioning: -```bash -✓ Web application loads without errors -✓ Sample queries return responses -✓ No API authentication errors appear -✓ The agentic reasoning process is visible in the interface under "Monitor" -``` +* Web application loads without errors +* Sample queries return responses +* No API authentication errors appear +* The agentic reasoning process is visible in the interface under "Monitor" ## Step 8. Complete optional quickstart -This step demonstrates advanced features by uploading data, retrieving context, and testing custom queries. +You can evaluate advanced features by uploading data, retrieving context, and testing custom queries. **Substep A: Upload sample dataset** Complete the in-app quickstart instructions to upload the sample dataset and test improved RAG-based responses. 
@@ -149,7 +149,7 @@ Upload a custom dataset, adjust the Router prompt, and submit custom queries to ## Step 10. Cleanup and rollback -This step explains how to remove the project if needed and what changes were made to your system. +You can remove the project if needed. > [!WARNING] > This will permanently delete the project and all associated data. @@ -160,13 +160,12 @@ To remove the project completely: 2. Select "Delete Project" 3. Confirm deletion when prompted -**Rollback notes:** All changes are contained within AI Workbench. No system-level modifications were made outside the AI Workbench environment. +> [!NOTE] +> All changes are contained within AI Workbench. No system-level modifications were made outside the AI Workbench environment. ## Step 11. Next steps -This step provides guidance on further exploration and development with the agentic RAG system. - -Explore advanced features: +You can also explore further advanced features and development options with the agentic RAG system: * Modify component prompts in the project code * Upload different documents to test routing and customization @@ -183,3 +182,6 @@ Consider customizing the Gradio UI or integrating the agentic RAG components int | 401 Unauthorized | Wrong or malformed API key | Replace key in Project Secrets and restart | | 403 Unauthorized | API key lacks permissions | Generate new key with proper access | | Agentic loop timeout | Complex query exceeding time limit | Try simpler query or retry | + + +For latest known issues, please review the [DGX Spark User Guide](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html). diff --git a/nvidia/tailscale/README.md b/nvidia/tailscale/README.md index f36ac6d..386761d 100644 --- a/nvidia/tailscale/README.md +++ b/nvidia/tailscale/README.md @@ -9,9 +9,9 @@ - [Instructions](#instructions) - [Step 1. Verify system requirements](#step-1-verify-system-requirements) - [Step 2. Install SSH server (if needed)](#step-2-install-ssh-server-if-needed) - - [Step 3. 
Install Tailscale on NVIDIA Spark](#step-3-install-tailscale-on-nvidia-spark) + - [Step 3. Install Tailscale on NVIDIA DGX Spark](#step-3-install-tailscale-on-nvidia-dgx-spark) - [Step 4. Verify Tailscale installation](#step-4-verify-tailscale-installation) - - [Step 5. Connect Spark device to Tailscale network](#step-5-connect-spark-device-to-tailscale-network) + - [Step 5. Connect your DGX Spark to Tailscale network](#step-5-connect-your-dgx-spark-to-tailscale-network) - [Step 6. Install Tailscale on client devices](#step-6-install-tailscale-on-client-devices) - [Step 7. Connect client devices to tailnet](#step-7-connect-client-devices-to-tailnet) - [Step 8. Verify network connectivity](#step-8-verify-network-connectivity) @@ -29,17 +29,17 @@ ## Basic idea Tailscale creates an encrypted peer-to-peer mesh network that allows secure access -to your NVIDIA Spark device from anywhere without complex firewall configurations -or port forwarding. By installing Tailscale on both your Spark and client devices, +to your NVIDIA DGX Spark device from anywhere without complex firewall configurations +or port forwarding. By installing Tailscale on both your DGX Spark and client devices, you establish a private "tailnet" where each device gets a stable private IP address and hostname, enabling seamless SSH access whether you're at home, work, or a coffee shop. ## What you'll accomplish -You will set up Tailscale on your NVIDIA Spark device and client machines to +You will set up Tailscale on your DGX Spark device and client machines to create secure remote access. After completion, you'll be able to SSH into your -Spark from anywhere using simple commands like `ssh user@spark-hostname`, with +DGX Spark from anywhere using simple commands like `ssh user@spark-hostname`, with all traffic automatically encrypted and NAT traversal handled transparently. ## What to know before starting @@ -52,30 +52,36 @@ all traffic automatically encrypted and NAT traversal handled transparently. 
## Prerequisites -- NVIDIA Spark device running DGX OS (ARM64/AArch64) +**Hardware Requirements:** +- NVIDIA Grace Blackwell GB10 Superchip System + +**Software Requirements:** +- NVIDIA DGX OS - Client device (Mac, Windows, or Linux) for remote access - Client device and DGX Spark not on the same network when testing connectivity - Internet connectivity on both devices - Valid email account for Tailscale authentication (Google, GitHub, Microsoft) - SSH server availability check: `systemctl status ssh` - Package manager working: `sudo apt update` -- User account with sudo privileges on Spark device +- User account with sudo privileges on your DGX Spark device ## Time & risk * **Duration**: 15-30 minutes for initial setup, 5 minutes per additional device -* **Risks**: +* **Risks**: Medium * Potential SSH service configuration conflicts * Network connectivity issues during initial setup * Authentication provider service dependencies * **Rollback**: Tailscale can be completely removed with `sudo apt remove tailscale` and all network routing automatically reverts to default settings. +* **Last Updated:** 11/21/2025 + * Minor copyedits ## Instructions ### Step 1. Verify system requirements -Check that your NVIDIA Spark device is running a supported Ubuntu version and -has internet connectivity. This step runs on the Spark device to confirm +Check that your NVIDIA DGX Spark device is running a supported Ubuntu version and +has internet connectivity. This step runs on the DGX Spark device to confirm prerequisites. ```bash @@ -91,9 +97,9 @@ sudo whoami ### Step 2. Install SSH server (if needed) -Ensure SSH server is running on your Spark device since Tailscale provides +Ensure SSH server is running on your DGX Spark device since Tailscale provides network connectivity but requires SSH for remote access. This step runs on -the Spark device. +the DGX Spark device. 
```bash ## Check if SSH is running @@ -114,9 +120,9 @@ sudo systemctl enable ssh --now --no-pager systemctl status ssh --no-pager ``` -### Step 3. Install Tailscale on NVIDIA Spark +### Step 3. Install Tailscale on NVIDIA DGX Spark -Install Tailscale on your ARM64 Spark device using the official Ubuntu +Install Tailscale on your DGX Spark using the official Ubuntu repository. This step adds the Tailscale package repository and installs the client. @@ -144,7 +150,7 @@ sudo apt install -y tailscale ### Step 4. Verify Tailscale installation -Confirm Tailscale installed correctly on your Spark device before proceeding +Confirm Tailscale installed correctly on your DGX Spark device before proceeding with authentication. ```bash @@ -155,24 +161,24 @@ tailscale version sudo systemctl status tailscaled --no-pager ``` -### Step 5. Connect Spark device to Tailscale network +### Step 5. Connect your DGX Spark to Tailscale network -Authenticate your Spark device with Tailscale using your chosen identity +Authenticate your DGX Spark device with Tailscale using your chosen identity provider. This creates your private tailnet and assigns a stable IP address. ```bash ## Start Tailscale and begin authentication sudo tailscale up -## Follow the URL displayed to complete login in browser +## Follow the URL displayed to complete login in your browser ## Choose from: Google, GitHub, Microsoft, or other supported providers ``` ### Step 6. Install Tailscale on client devices -Install Tailscale on the devices you'll use to connect to your Spark remotely. +Install Tailscale on the devices you'll use to connect to your DGX Spark remotely. -Choose the appropriate method for your client operating system. +Choose the appropriate method for your client operating system: **On macOS:** - Option 1: Install from Mac App Store by searching for "Tailscale" and then clicking Get → Install @@ -187,7 +193,7 @@ Choose the appropriate method for your client operating system. 
**On Linux:** -Use the same instructions as were done for installing on your DGX Spark. +Follow the same instructions used for the DGX Spark installation. ```bash ## Update package list @@ -214,12 +220,12 @@ sudo apt install -y tailscale ### Step 7. Connect client devices to tailnet Log in to Tailscale on each client device using the same identity provider -account you used for the Spark device. +account you used for your DGX Spark. **On macOS/Windows (GUI):** - Launch Tailscale app - Click "Log in" button -- Sign in with same account used on Spark +- Sign in with same account used on DGX Spark **On Linux (CLI):** @@ -247,8 +253,8 @@ tailscale ping ### Step 9. Configure SSH authentication -Set up SSH key authentication for secure access to your Spark device. This -step runs on your client device and Spark device. +Set up SSH key authentication for secure access to your DGX Spark. This +step runs on your client device and DGX Spark device. **Generate SSH key on client (if not already done):** @@ -260,7 +266,7 @@ ssh-keygen -t ed25519 -f ~/.ssh/tailscale_spark cat ~/.ssh/tailscale_spark.pub ``` -**Add public key to Spark device:** +**Add public key to DGX Spark:** ```bash ## On Spark device, add client's public key @@ -273,7 +279,7 @@ chmod 700 ~/.ssh ### Step 10. Test SSH connection -Connect to your Spark device using SSH over the Tailscale network to verify +Connect to your DGX Spark using SSH over the Tailscale network to verify the complete setup works. ```bash @@ -316,7 +322,7 @@ Remove Tailscale completely if needed. This will disconnect devices from the tailnet and remove all network configurations. > [!WARNING] -> his will permanently remove the device from your Tailscale network and require re-authentication to rejoin. +> This will permanently remove the device from your Tailscale network and require re-authentication to rejoin. ```bash ## Stop Tailscale service @@ -339,7 +345,7 @@ To restore: Re-run installation steps 3-5. Your Tailscale setup is complete. 
You can now: -- Access your Spark device from any network with: `ssh <username>@<hostname>` +- Access your DGX Spark device from any network with: `ssh <username>@<hostname>` - Transfer files securely: `scp file.txt <username>@<hostname>:~/` - Open the DGX Dashboard and start JupyterLab, then connect with: `ssh -L 8888:localhost:1102 <username>@<hostname>` @@ -353,3 +359,6 @@ Your Tailscale setup is complete. You can now: | SSH auth failure | Wrong SSH keys | Check public key in `~/.ssh/authorized_keys` | | Cannot ping hostname | DNS issues | Use IP from `tailscale status` instead | | Devices missing | Different accounts | Use same identity provider for all devices | + + +For latest known issues, please review the [DGX Spark User Guide](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html). diff --git a/nvidia/trt-llm/assets/docker-compose.yml b/nvidia/trt-llm/assets/docker-compose.yml index e6239ed..56a29c0 100644 --- a/nvidia/trt-llm/assets/docker-compose.yml +++ b/nvidia/trt-llm/assets/docker-compose.yml @@ -32,6 +32,8 @@ services: ulimits: memlock: -1 stack: 67108864 + devices: + - /dev/infiniband:/dev/infiniband networks: - host healthcheck: diff --git a/nvidia/vscode/README.md b/nvidia/vscode/README.md index 61bb401..353eada 100644 --- a/nvidia/vscode/README.md +++ b/nvidia/vscode/README.md @@ -5,7 +5,7 @@ ## Table of Contents - [Overview](#overview) -- [Instructions](#instructions) +- [Direct Installation](#direct-installation) - [Access with NVIDIA Sync](#access-with-nvidia-sync) - [Troubleshooting](#troubleshooting) @@ -14,44 +14,44 @@ ## Overview ## Basic idea -This walkthrough establishes a local Visual Studio Code development environment directly on DGX Spark devices. By installing VS Code natively on the ARM64-based Spark system, you gain access to a full-featured IDE with extensions, integrated terminal, and Git integration while leveraging the specialized hardware for development and testing. 
+This walkthrough will help you set up Visual Studio Code, a full-featured IDE with extensions, an integrated terminal, and Git integration, while leveraging your DGX Spark device for development and testing. There are two different approaches for using VS Code: + + * **Direct Installation**: Install the VS Code development environment directly on your ARM64-based DGX Spark system for local development on the target hardware without remote development overhead. + + * **Access with NVIDIA Sync**: Set up NVIDIA Sync to remotely connect to your DGX Spark over SSH and configure VS Code as one of your development tools. ## What you'll accomplish -You will have Visual Studio Code running natively on your DGX Spark device with access to the system's ARM64 architecture and GPU resources. This setup enables direct code development, debugging, and execution on the target hardware without remote development overhead. +You will have VS Code set up for development on your DGX Spark device with access to the system's ARM64 architecture and GPU resources. This setup enables direct code development, debugging, and execution. ## What to know before starting +You should have basic experience working with the VS Code interface and features; the approach you choose will require some additional understanding: -• Basic experience working with Visual Studio Code interface and features +* **Direct Installation**: + * Familiarity with package management on Linux systems + * Understanding of file permissions and authentication on Linux -• Familiarity with package management on Linux systems - -• Understanding of file permissions and authentication on Linux +* **Access with NVIDIA Sync**: + * Familiarity with SSH concepts ## Prerequisites +Your DGX Spark [device is set up](https://docs.nvidia.com/dgx/dgx-spark/first-boot.html). 
You will also need the following: -• DGX Spark device with administrative privileges - -• Active internet connection for downloading the VS Code installer - -• Verify ARM64 architecture: - ```bash - uname -m -# # Expected output: aarch64 - ``` -• Verify GUI desktop environment available: - ```bash - echo $DISPLAY -# # Should return display information like :0 or :10.0 - ``` +* **Direct Installation**: + * DGX Spark set up with administrative privileges + * Active internet connection for downloading the VS Code installer +* **Access with NVIDIA Sync**: + * VS Code installed on your laptop, downloaded from https://code.visualstudio.com/download. ## Time & risk * **Duration:** 10-15 minutes * **Risk level:** Low - installation uses official packages with standard rollback * **Rollback:** Standard package removal via system package manager +* **Last Updated:** 11/21/2025 + * Clarify options and minor copyedits -## Instructions +## Direct Installation ## Step 1. Verify system requirements @@ -60,12 +60,17 @@ Before installing VS Code, confirm your DGX Spark system meets the requirements ```bash ## Verify ARM64 architecture uname -m +## Expected output: aarch64 ## Check available disk space (VS Code requires ~200MB) df -h / ## Verify desktop environment is running ps aux | grep -E "(gnome|kde|xfce)" + +## Verify GUI desktop environment is available + echo $DISPLAY +## Should return display information like :0 or :10.0 ``` ## Step 2. 
Download VS Code ARM64 installer @@ -180,14 +185,14 @@ NVIDIA Sync will automatically configure SSH key-based authentication for secure - Click the NVIDIA Sync icon in your system tray/taskbar - Ensure your device is connected (click "Connect" if needed) -- Click on "VS Code" to launch it with an automatic SSH connection to your Spark -- Wait for the remote connection to be established (may ask your local machine for a password or to authorize the connection) -- It may prompt you to "trust the authors of the files in this folder" when you first land in the home directory after a successful SSH connection +- Click on "VS Code" to launch it with an automatic SSH connection to your DGX Spark +- Wait for the remote connection to be established (your local machine may ask for a password or to authorize the connection) +- You may be prompted to "trust the authors of the files in this folder" when you first land in the home directory after a successful SSH connection ## Step 3. Validation and follow-ups -- Verify that you can access your Spark's filesystem with VS Code as a text editor -- Open the integrated terminal in VS Code and run test commands like `hostnamectl` and `whoami` to ensure you are remotely accessing your Spark +- Verify that you can access your DGX Spark's filesystem with VS Code as a text editor +- Open the integrated terminal in VS Code and run test commands like `hostnamectl` and `whoami` to ensure you are remotely accessing your DGX Spark - Navigate to a specific file path or directory and start editing/writing files - Install VS Code extensions for your development workflow (Python, Docker, GitLens, etc.) 
- Clone repositories from GitHub or other version control systems @@ -200,3 +205,6 @@ NVIDIA Sync will automatically configure SSH key-based authentication for secure | `dpkg: dependency problems` during install | Missing dependencies | Run `sudo apt-get install -f` | | VS Code won't launch with GUI error | No display server/X11 | Verify GUI desktop is running: `echo $DISPLAY` | | Extensions fail to install | Network connectivity or ARM64 compatibility | Check internet connection, verify extension ARM64 support | + + +For latest known issues, please review the [DGX Spark User Guide](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html).
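The VS Code Step 1 checks can likewise be run as a single report. A sketch that prints PASS/FAIL per requirement instead of exiting on the first failure; the ~200MB disk figure comes from the guide, and treating it as exactly 200MB free on `/` is an assumption:

```bash
#!/bin/sh
## Report the Direct Installation prerequisites in one pass.
arch=$(uname -m)
if [ "$arch" = "aarch64" ]; then arch_ok=PASS; else arch_ok=FAIL; fi
## VS Code needs roughly 200MB; compare free space on / in KB.
free_kb=$(df -Pk / | awk 'NR==2 { print $4 }')
if [ "$free_kb" -ge 204800 ]; then disk_ok=PASS; else disk_ok=FAIL; fi
## A set DISPLAY suggests a GUI session (e.g. :0 or :10.0).
if [ -n "$DISPLAY" ]; then gui_ok=PASS; else gui_ok=FAIL; fi
printf 'arch (%s): %s\nfree disk: %s\nDISPLAY: %s\n' "$arch" "$arch_ok" "$disk_ok" "$gui_ok"
```

Reporting rather than aborting shows every unmet requirement at once, which is more useful on a first run than stopping at the architecture check.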