chore: Regenerate all playbooks

GitLab CI 2025-10-08 14:29:01 +00:00
parent db03c60817
commit 81b4535418


@@ -25,7 +25,7 @@
 ## Overview
-## Basic Idea
+## Basic idea
 Tailscale creates an encrypted peer-to-peer mesh network that allows secure access
 to your NVIDIA Spark device from anywhere without complex firewall configurations
@@ -51,8 +51,9 @@ all traffic automatically encrypted and NAT traversal handled transparently.
 ## Prerequisites
-- NVIDIA Spark device running Ubuntu (ARM64/AArch64)
+- NVIDIA Spark device running DGX OS (ARM64/AArch64)
 - Client device (Mac, Windows, or Linux) for remote access
+- Client device and DGX Spark not on the same network when testing connectivity
 - Internet connectivity on both devices
 - Valid email account for Tailscale authentication (Google, GitHub, Microsoft)
 - SSH server availability check: `systemctl status ssh`
@@ -61,7 +62,7 @@ all traffic automatically encrypted and NAT traversal handled transparently.
 ## Time & risk
-**Time estimate**: 15-30 minutes for initial setup, 5 minutes per additional device
+**Duration**: 15-30 minutes for initial setup, 5 minutes per additional device
 **Risks**:
 - Potential SSH service configuration conflicts
@@ -98,10 +99,10 @@ the Spark device.
 ```bash
 ## Check if SSH is running
-systemctl status ssh
+systemctl status ssh --no-pager
 ```
-#### If SSH is not installed or running
+**If SSH is not installed or running:**
 ```bash
 ## Install OpenSSH server
@@ -109,10 +110,10 @@ sudo apt update
 sudo apt install -y openssh-server
 ## Enable and start SSH service
-sudo systemctl enable ssh --now
+sudo systemctl enable ssh --now --no-pager
 ## Verify SSH is running
-systemctl status ssh
+systemctl status ssh --no-pager
 ```
 ### Step 3. Install Tailscale on NVIDIA Spark
@@ -153,7 +154,7 @@ with authentication.
 tailscale version
 ## Check Tailscale service status
-sudo systemctl status tailscaled
+sudo systemctl status tailscaled --no-pager
 ```
 ### Step 5. Connect Spark device to Tailscale network
@@ -172,9 +173,10 @@ sudo tailscale up
 ### Step 6. Install Tailscale on client devices
 Install Tailscale on the devices you'll use to connect to your Spark remotely.
 Choose the appropriate method for your client operating system.
-#### On macOS
+**On macOS:**
 ```bash
 ## Option 1: Install from Mac App Store
@@ -184,7 +186,7 @@ Choose the appropriate method for your client operating system.
 ## Visit https://tailscale.com/download and download .pkg installer
 ```
-#### On Windows
+**On Windows:**
 ```bash
 ## Download installer from https://tailscale.com/download
@@ -192,7 +194,7 @@ Choose the appropriate method for your client operating system.
 ## Launch Tailscale from Start Menu or system tray
 ```
-#### On Linux
+**On Linux:**
 ```bash
 ## Use same installation steps as Spark device (Steps 3-4)
@@ -204,12 +206,13 @@ Choose the appropriate method for your client operating system.
 Log in to Tailscale on each client device using the same identity provider
 account you used for the Spark device.
-#### On macOS/Windows (GUI)
+**On macOS/Windows (GUI):**
 - Launch Tailscale app
 - Click "Log in" button
 - Sign in with same account used on Spark
-#### On Linux (CLI)
+**On Linux (CLI):**
 ```bash
 ## Start Tailscale on client
 sudo tailscale up
@@ -237,7 +240,7 @@ tailscale ping <SPARK_HOSTNAME>
 Set up SSH key authentication for secure access to your Spark device. This
 step runs on your client device and Spark device.
-#### Generate SSH key on client (if not already done)
+**Generate SSH key on client (if not already done):**
 ```bash
 ## Generate new SSH key pair
@@ -247,7 +250,7 @@ ssh-keygen -t ed25519 -f ~/.ssh/tailscale_spark
 cat ~/.ssh/tailscale_spark.pub
 ```
-#### Add public key to Spark device
+**Add public key to Spark device:**
 ```bash
 ## On Spark device, add client's public key
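The "add public key to Spark device" step touched by this hunk can be sketched end to end. A minimal sketch, assuming the `tailscale_spark` key name from the playbook; the `PUBKEY` value below is a placeholder, not a real key:

```shell
## On the Spark device: append the client's public key (sketch).
## Paste the real output of `cat ~/.ssh/tailscale_spark.pub` from the
## client in place of the placeholder string below.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
PUBKEY='ssh-ed25519 AAAAC3NzaC1lZDI1NTE5PLACEHOLDER client@laptop'
## Skip the append if the key is already present, then lock down
## permissions the way sshd expects
grep -qxF "$PUBKEY" ~/.ssh/authorized_keys 2>/dev/null || echo "$PUBKEY" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The duplicate check keeps repeated runs of the playbook from growing `authorized_keys` with identical entries.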
@@ -282,6 +285,9 @@ Verify that Tailscale is working correctly and your SSH connection is stable.
 ## From client device, check connection status
 tailscale status
+## Create a test file on the client device
+echo "test file for the spark" > test.txt
 ## Test file transfer over SSH
 scp -i ~/.ssh/tailscale_spark test.txt <USERNAME>@<SPARK_HOSTNAME>:~/
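The new file-transfer test this hunk adds can be strengthened by comparing checksums on both ends. A sketch; the remote half is left commented out because it assumes the `scp` from the playbook has already run against a real `<SPARK_HOSTNAME>`:

```shell
## On the client: create the test file and record its checksum
echo "test file for the spark" > test.txt
sha256sum test.txt | awk '{print $1}' > test.txt.sha256
cat test.txt.sha256
## After the scp in the playbook, compare on the Spark (assumes the
## transfer succeeded and the placeholders are filled in):
## ssh -i ~/.ssh/tailscale_spark <USERNAME>@<SPARK_HOSTNAME> 'sha256sum ~/test.txt'
```

Matching digests confirm the file arrived intact rather than merely that `scp` exited without error.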
@@ -289,7 +295,7 @@ scp -i ~/.ssh/tailscale_spark test.txt <USERNAME>@<SPARK_HOSTNAME>:~/
 ssh -i ~/.ssh/tailscale_spark <USERNAME>@<SPARK_HOSTNAME> 'nvidia-smi'
 ```
-Expected output should show:
+Expected output:
 - Tailscale status displaying both devices as "active"
 - Successful file transfers
 - Remote command execution working
@@ -301,7 +307,7 @@ Common issues and their solutions:
 | Symptom | Cause | Fix |
 |---------|-------|-----|
 | `tailscale up` auth fails | Network issues | Check internet, try `curl -I login.tailscale.com` |
-| SSH connection refused | SSH not running | Run `sudo systemctl start ssh` on Spark |
+| SSH connection refused | SSH not running | Run `sudo systemctl start ssh --no-pager` on Spark |
 | SSH auth failure | Wrong SSH keys | Check public key in `~/.ssh/authorized_keys` |
 | Cannot ping hostname | DNS issues | Use IP from `tailscale status` instead |
 | Devices missing | Different accounts | Use same identity provider for all devices |
@@ -337,5 +343,5 @@ Your Tailscale setup is complete. You can now:
 - Access your Spark device from any network with: `ssh <USERNAME>@<SPARK_HOSTNAME>`
 - Transfer files securely: `scp file.txt <USERNAME>@<SPARK_HOSTNAME>:~/`
-- Run Jupyter notebooks remotely by SSH tunneling:
-  `ssh -L 8888:localhost:8888 <USERNAME>@<SPARK_HOSTNAME>`
+- Open the DGX Dashboard and start JupyterLab, then connect with:
+  `ssh -L 8888:localhost:1102 <USERNAME>@<SPARK_HOSTNAME>`
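The updated tunnel command in this hunk can be made reusable with an `~/.ssh/config` entry. A minimal sketch: the `spark` alias is an example name, the `<USERNAME>`/`<SPARK_HOSTNAME>` placeholders and the port-1102 JupyterLab backend are taken from the playbook:

```shell
## Append a host alias so `ssh spark` opens the JupyterLab tunnel
## automatically (sketch; fill in the placeholders for real use)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host spark
    HostName <SPARK_HOSTNAME>
    User <USERNAME>
    IdentityFile ~/.ssh/tailscale_spark
    LocalForward 8888 localhost:1102
EOF
chmod 600 ~/.ssh/config
```

With this in place, `ssh spark` forwards local port 8888 to the JupyterLab instance started from the DGX Dashboard, replacing the long `-L`/`-i` invocation.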