Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git (synced 2026-04-28 12:43:52 +00:00)
chore: Regenerate all playbooks

commit 90fe8c7cae (parent 2022e2b24b)
@@ -28,6 +28,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [CUDA-X Data Science](nvidia/cuda-x-data-science/)
 - [DGX Dashboard](nvidia/dgx-dashboard/)
 - [FLUX.1 Dreambooth LoRA Fine-tuning](nvidia/flux-finetuning/)
+- [Develop and Deploy Healthcare Robots with Isaac For Healthcare](nvidia/i4h-so-arm/)
 - [Install and Use Isaac Sim and Isaac Lab](nvidia/isaac/)
 - [Optimized JAX](nvidia/jax/)
 - [Live VLM WebUI](nvidia/live-vlm-webui/)
nvidia/i4h-so-arm/README.md (new file, 488 lines):
# Develop and Deploy Healthcare Robots with Isaac For Healthcare

> End-to-end development and deployment of healthcare robots on DGX Spark

## Table of Contents

- [Overview](#overview)
- [Part 1: Preparation](#part-1-preparation)
  - [Set Up Conda Environment](#set-up-conda-environment)
  - [Set Up Docker Environment](#set-up-docker-environment)
  - [Set Up the Scene](#set-up-the-scene)
  - [Calibrate the Robot](#calibrate-the-robot)
  - [Test Teleoperation](#test-teleoperation)
- [Part 2: Synthetic Data Generation](#part-2-synthetic-data-generation)
- [Part 3: Real-World Data Collection](#part-3-real-world-data-collection)
- [Part 4: GR00T N1.5 Fine-Tuning](#part-4-gr00t-n15-fine-tuning)
- [Part 5: Deploying Trained Robotic Policy](#part-5-deploying-trained-robotic-policy)

---

## Overview

## Basic idea

Robotics and physical AI are driving the next wave of AI breakthroughs. Developing physical AI requires [3 computers](https://blogs.nvidia.com/blog/three-computers-robotics/):

1. A simulation computer to generate synthetic data and digital twins, bridging the data gap.
2. A training computer to build the necessary foundation and world models.
3. A runtime computer to handle real-time robotic inference and intelligent interactions.

This tutorial demonstrates the development and deployment of an autonomous healthcare robot using [NVIDIA Isaac For Healthcare](https://developer.nvidia.com/blog/introducing-nvidia-isaac-for-healthcare-an-ai-powered-medical-robotics-development-platform/) on a single [DGX Spark](https://www.nvidia.com/en-us/products/workstations/dgx-spark/), consolidating the 3-computers developer workflow onto one hardware platform. The example focuses on the [SO-101 robot](https://github.com/TheRobotStudio/SO-ARM100?tab=readme-ov-file) acting as a scrub nurse (a specialized nursing professional working directly in the sterile field during surgical procedures) performing a crucial pick-and-place task: autonomously picking up a pair of surgical scissors and placing them into a surgical tray.

## What you'll accomplish

You'll complete the full development lifecycle of an autonomous healthcare robot on DGX Spark, covering the following stages:

- **Part 1: Preparation.** Set up the hardware, software environments, and task environment.
- **Part 2: Generating synthetic data with Isaac Sim.** Collect synthetic pick-and-place demonstrations using teleoperation in a simulated environment.
- **Part 3: Collecting real-world data.** Collect real-world teleoperation data with the physical SO-101 robot.
- **Part 4: Fine-tuning the GR00T N1.5 model.** Fine-tune a pretrained GR00T N1.5 model using the collected data.
- **Part 5: Deploying the trained robotic policy.** Deploy the fine-tuned model in both simulated and real-world environments.

## What to know before starting

- Experience with the Linux command line
- Basic understanding of Docker containers
- Familiarity with Python and conda environments
- Basic knowledge of robotics concepts (teleoperation, calibration)
- Familiarity with machine learning concepts (helpful but not required)

## Prerequisites

**Hardware Requirements:**

- [NVIDIA DGX Spark](https://www.nvidia.com/en-us/products/workstations/dgx-spark/) with FastOS version 1.91.+ (verify with `cat /etc/fastos-release`; upgrade if necessary following the [steps here](https://docs.nvidia.com/dgx/dgx-spark/system-recovery.html#recovery-process-steps))
- [SO-101 Robot](https://github.com/TheRobotStudio/SO-ARM100?tab=readme-ov-file) with both leader and follower arms and a wrist camera module (ensure mounting/fixation tools are included or acquired separately)
- USB-C splitter (needed since 4 USB connections are required and DGX Spark has only 3 available USB-C ports; use a high-quality splitter to minimize latency)
- OpenCV-compatible USB web camera (for the room camera)
- Surgical tray (dimensions 24cm x 16cm x 5cm)
- Surgical scissors (length 18cm)
- Scene setup accessories: table, table cloth, and a camera stand/holder for the room camera

**Software Requirements:**

- NVIDIA DGX OS
- Miniconda: [installation guidelines](https://www.anaconda.com/docs/getting-started/miniconda/install#aws-graviton2%2Farm64)
- Docker (pre-installed on DGX OS)

## Ancillary files

All required assets can be found in the [NVIDIA Isaac-For-Healthcare-Workflows repository](https://github.com/isaac-for-healthcare/i4h-workflows).

- `workflows/so_arm_starter/` - Source code for the robotic scrub nurse example workflow
- `tools/env_setup_so_arm_starter.sh` - Environment setup script for the conda environment
- `workflows/so_arm_starter/docker/dgx.Dockerfile` - Dockerfile for the Docker environment

## Time & risk

* **Estimated time:** Approximately 2 days (GR00T N1.5 fine-tuning at 30,000 steps takes around 24 hours on DGX Spark; data collection and other setup steps require several additional hours)
* **Risk level:** Medium
  * Robot calibration must remain consistent throughout the tutorial; re-calibrating after data collection or training may require restarting the entire process
  * Large downloads and Docker builds may take significant time
  * Leader and follower arm power cords have different voltages; do not mix them up
* **Rollback:** Conda environment and Docker image can be removed to revert software changes. Collected datasets can be deleted from `~/.cache/huggingface/lerobot/` (see the sketch below).
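A minimal rollback sketch, assuming the environment name, image tag, and dataset location used in this playbook (adjust `<repo_id>` to the datasets you actually created):

```shell
# Remove the conda environment created in Part 1
conda env remove -n so_arm_starter

# Remove the Docker image built in Part 1
docker rmi soarm-dgx

# Delete a collected LeRobot dataset, e.g. spark/scrub-nurse-sim
rm -rf ~/.cache/huggingface/lerobot/<repo_id>
```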
## Part 1: Preparation

## Step 1. Prepare Hardware and Accessories

Required components:

* [**NVIDIA DGX Spark**](https://www.nvidia.com/en-us/products/workstations/dgx-spark/) — Verify that the FastOS version is 1.91.+ with `cat /etc/fastos-release`; upgrade if necessary following the [steps here](https://docs.nvidia.com/dgx/dgx-spark/system-recovery.html#recovery-process-steps).
* [**SO-101 Robot**](https://github.com/TheRobotStudio/SO-ARM100?tab=readme-ov-file) — Requires both leader and follower arms with the wrist camera module. Ensure mounting/fixation tools are included or acquired separately.
* **USB-C Splitter** — Needed since 4 USB connections (2 USB-C for the arms, 2 USB-A for the cameras) are required and DGX Spark has only 3 available USB-C ports. Use a high-quality splitter to minimize latency.
* **OpenCV-compatible USB web camera** — For the room camera.
* **Surgical Tray** — Dimensions 24cm x 16cm x 5cm.
* **Surgical Scissors** — Length 18cm.
* **Scene Setup Accessories** — Table, table cloth, and a camera stand/holder for the room camera.

## Step 2. Set Up Software Environments

Power on DGX Spark and open a terminal window.

Create a folder named `workspace` under your home directory, and clone the NVIDIA Isaac-For-Healthcare-Workflows repository `i4h-workflows` from GitHub:

```shell
mkdir ~/workspace
cd ~/workspace && git clone https://github.com/isaac-for-healthcare/i4h-workflows.git
```

The source code for several Isaac For Healthcare example workflows is in this repository, including the robotic scrub nurse example at `<path-to-i4h-workflows>/workflows/so_arm_starter`.

This tutorial requires two separate software environments on DGX Spark:

1. A conda environment for most of the tasks.
2. A Docker environment for all tasks that require Isaac-GR00T.

A separate Docker environment is needed primarily because of the difficulty of installing certain Isaac-GR00T dependencies, such as `flash_attn`, on DGX Spark's native arm64 OS.

### Set Up Conda Environment

First, ensure Miniconda is installed on DGX Spark. If not, follow the [installation guidelines here](https://www.anaconda.com/docs/getting-started/miniconda/install#aws-graviton2%2Farm64). Then, create a new conda environment and install the necessary dependencies for this tutorial:

```shell
conda create -n so_arm_starter python=3.11 -y
conda activate so_arm_starter
cd <path-to-i4h-workflows> && bash tools/env_setup_so_arm_starter.sh
```

Installation takes about 20 minutes and, when complete, prints a success message to the terminal:

```shell
==========================================
Environment setup script finished.
==========================================
```

After installation, **deactivate and reactivate the `so_arm_starter` environment** to apply the configuration:

```shell
conda deactivate
conda activate so_arm_starter
```

After reactivating the conda environment, set the following environment variable:

```shell
export PYTHONPATH=<path-to-i4h-workflows>/workflows/so_arm_starter/scripts
```

To avoid manually setting the environment variable each time you activate `so_arm_starter`, you can optionally add the export command to `~/.bashrc`. Source the file immediately after adding it so the change takes effect in the current session.
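For example, a minimal sketch assuming the same placeholder path as above (substitute your actual clone location):

```shell
# Persist PYTHONPATH for future shells, then apply it to the current one
echo 'export PYTHONPATH=<path-to-i4h-workflows>/workflows/so_arm_starter/scripts' >> ~/.bashrc
source ~/.bashrc
```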
### Set Up Docker Environment

To set up the Docker environment, build a Docker image using the `dgx.Dockerfile` provided under `<path-to-i4h-workflows>/workflows/so_arm_starter/docker`:

```shell
cd <path-to-i4h-workflows>/workflows/so_arm_starter/docker
docker build -t soarm-dgx -f dgx.Dockerfile .
```

The build takes about 20 minutes and creates a Docker image named `soarm-dgx`.

## Step 3. Set Up the Task Environment

### Set Up the Scene

To set up the scrub nurse pick-and-place scene:

1. **Mount Arms:** Firmly mount the follower arm on the table and the leader arm nearby for comfortable teleoperation.
2. **Set Scene:** Place the table cloth, surgical tray, and scissors on the table. Use a non-reflective, dark table cloth to minimize reflections and maintain a consistent background color. Fixate the table cloth to the table so it does not move when the follower's gripper touches it. Ensure the tray and scissors are within easy reach of the follower arm's gripper.
3. **Mount Camera:** Mount the room camera above the table for a top-down view. While other positions (such as a side view) might offer better object localization, the top-down view minimizes environmental elements, focusing only on task-relevant objects for a more robust setup.

Finally, to adjust the table and room camera stand for optimal wrist and room camera views, power on the robot and cameras. Connect the following to DGX Spark:

* Leader and follower arms (2x USB-C)
* Wrist camera (1x USB-A)
* Room camera (1x USB-A or USB-C)

Due to the limited number of USB-C ports on DGX Spark, a USB-C splitter (and optional USB-A/C converters) is needed. Power the leader and follower arms, **taking care not to mix up the power cords, as the voltages differ.** Use a camera tool (e.g., Cheese on DGX Spark) to check the live feeds and finalize positioning.
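As a concrete example, a minimal sketch assuming Cheese is not yet installed (it is typically available from the Ubuntu repositories used by DGX OS):

```shell
# Install and launch the Cheese camera viewer to preview the wrist and room camera feeds
sudo apt-get install -y cheese
cheese
```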
### Calibrate the Robot

First, identify the device IDs for the two robot arms and the two cameras.

Open a new terminal on DGX Spark and activate the `so_arm_starter` conda environment:

```shell
conda activate so_arm_starter
```

Execute the following command and follow the on-screen instructions to identify the device IDs of the leader arm and the follower arm:

```shell
python -m lerobot.find_port
```

On a Linux-based system, the device IDs are usually `/dev/ttyACM0` and `/dev/ttyACM1`.

Execute the following command to identify the wrist and room camera indices:

```shell
python -m lerobot.find_cameras
```

The console should list 2 cameras with their indices (e.g., `/dev/video0` and `/dev/video2`). This command also captures and saves the current camera frames as distinct PNG images in `outputs/captured_images/`, using the camera indices in the filenames for easy identification and verification of the feeds.

Set access permissions for the robot arms before calibration by running:

```shell
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

Adjust the device IDs as needed. **Execute these commands every time the robot disconnects from and reconnects to DGX Spark.**
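Optionally (this is not part of the playbook), a udev rule can make these permissions persistent across reconnects. A minimal sketch, assuming a hypothetical rule file name and that both arms enumerate as `ttyACM` serial devices:

```shell
# Hypothetical rule file: grant read/write on all ttyACM serial devices to all users
sudo tee /etc/udev/rules.d/99-so101-arms.rules > /dev/null <<'EOF'
KERNEL=="ttyACM[0-9]*", MODE="0666"
EOF
sudo udevadm control --reload-rules && sudo udevadm trigger
```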
Run the following commands in the terminal to calibrate the leader arm and the follower arm:

```shell
## Leader arm:
python -m lerobot.calibrate --teleop.type=so101_leader --teleop.port=/dev/ttyACM0 --teleop.id=so101_leader

## Follower arm:
python -m lerobot.calibrate --robot.type=so101_follower --robot.port=/dev/ttyACM1 --robot.id=so101_follower
```

Adjust the device IDs, and customize `--teleop.id` and `--robot.id` to set different device names if needed. Then, follow the on-screen instructions and refer to the [video here](https://huggingface.co/docs/lerobot/so101#calibration-video) for proper calibration.

> [!WARNING]
> Maintain *one* single follower arm calibration for this tutorial. Re-calibrating after collecting data or training the GR00T model risks needing to restart everything, as subsequent steps rely on the initial calibration.

### Test Teleoperation

To complete the preparation, teleoperate the follower arm using the leader arm.

Run the following command to teleoperate without camera feeds:

```shell
python -m lerobot.teleoperate \
    --robot.type=so101_follower \
    --robot.port=/dev/ttyACM1 \
    --robot.id=so101_follower \
    --teleop.type=so101_leader \
    --teleop.port=/dev/ttyACM0 \
    --teleop.id=so101_leader
```

Adjust the `--robot.port`, `--teleop.port`, `--robot.id` and `--teleop.id` arguments if needed.

Run the following command to teleoperate with camera feeds:

```shell
python -m lerobot.teleoperate \
    --robot.type=so101_follower \
    --robot.port=/dev/ttyACM1 \
    --robot.id=so101_follower \
    --robot.cameras="{wrist: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}, room: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
    --teleop.type=so101_leader \
    --teleop.port=/dev/ttyACM0 \
    --teleop.id=so101_leader \
    --display_data=true
```

Adjust the device IDs, names and camera indices if needed.

During teleoperation with camera feeds, the [Rerun viewer](https://rerun.io/) UI appears, showing real-time views from both cameras and the robot's motor action data.
## Part 2: Synthetic Data Generation

## Step 1. Launch Isaac Sim for Data Collection

Ensure the leader arm is powered on and connected to DGX Spark. Open a new terminal on DGX Spark, activate the `so_arm_starter` conda environment and set the `PYTHONPATH`:

```shell
conda activate so_arm_starter
export PYTHONPATH=<path-to-i4h-workflows>/workflows/so_arm_starter/scripts
```

Then, run the following command in the terminal:

```shell
python -m simulation.environments.teleoperation_record \
    --port=/dev/ttyACM0 \
    --enable_cameras \
    --record \
    --dataset_path=./data-collection-sim/dataset.hdf5
```

If needed, adjust the leader arm device ID and modify the `--dataset_path` argument to save the data elsewhere.

The command launches [Isaac Sim](https://developer.nvidia.com/isaac/sim), loading a scene with a follower arm, table, surgical scissors, and a tray. The initial load may take about 2 minutes; if Isaac Sim seems unresponsive, do not force quit. Wait for it to load fully.

To change the simulated follower arm's color to match your physical robot, go to the `Stage` panel (right side of Isaac Sim) → `World` → `envs` → `env_0` → `robot` → `Looks` → `material_a_3d_printed`, then under the `Property` tab, adjust the `Albedo Color`.

The first run of this command requires leader arm calibration, even if you have already calibrated, because it uses a separate program-specific calibration file. Your existing calibration remains unchanged.

## Step 2. Collect Synthetic Pick-and-Place Demonstrations

To teleoperate the robot in Isaac Sim and collect synthetic pick-and-place demonstrations:

* Press "B" to begin teleoperation; the robot moves to the initial position.
* Use the physical leader arm to control the virtual follower arm for the pick-and-place task.
* Press "N" to save a successful episode.
* Press "R" to restart without saving.
* The scissors' position and angle are slightly randomized for each new episode.
* Press Ctrl + C to quit.

Use these shortcuts for Isaac Sim viewport navigation:

* "F" key after clicking the robot to auto-focus.
* Middle mouse wheel to zoom.
* "ALT" + left mouse drag to change the view angle.
* Middle mouse wheel click + drag to move in the viewport.

Collecting around 70 synthetic episodes is sufficient for this tutorial.

## Step 3. Convert Data to LeRobot Format

After collecting the synthetic data, convert it to the Hugging Face [LeRobot](https://github.com/huggingface/lerobot) dataset format for fine-tuning the Isaac GR00T model:

```shell
python -m training.hdf5_to_lerobot \
    --repo_id=spark/scrub-nurse-sim \
    --hdf5_path=./data-collection-sim/dataset.hdf5 \
    --task_description="Grip the scissors and put them into the tray."
```

Modify `--repo_id` and `--task_description` as needed, but ensure a meaningful task description. The resulting dataset, containing motor actions, wrist camera, and room camera recordings, is stored under `/home/$USER/.cache/huggingface/lerobot/<repo_id>`.
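To confirm the conversion succeeded, you can list the dataset directory. A minimal check, assuming the default `--repo_id` above:

```shell
ls /home/$USER/.cache/huggingface/lerobot/spark/scrub-nurse-sim
```

You should see the dataset contents, including the `meta` folder referenced in Part 3.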
## Part 3: Real-World Data Collection

## Step 1. Set Up for Real-World Data Collection

Ensure the leader arm, follower arm, wrist camera, and room camera are connected to DGX Spark. On DGX Spark, open a new terminal and activate the `so_arm_starter` conda environment:

```shell
conda activate so_arm_starter
```

## Step 2. Collect Real-World Data Episodes

Run the following command to collect real-world data episodes as a LeRobot dataset:

```shell
python -m lerobot.record \
    --robot.type=so101_follower \
    --robot.port=/dev/ttyACM1 \
    --robot.cameras="{wrist: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}, room: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
    --robot.id=so101_follower \
    --teleop.type=so101_leader \
    --teleop.port=/dev/ttyACM0 \
    --teleop.id=so101_leader \
    --display_data=true \
    --dataset.repo_id="spark/scrub-nurse-real" \
    --dataset.num_episodes=20 \
    --dataset.single_task="Grip the scissors and put them into the tray." \
    --dataset.push_to_hub=false
```

Modify the robot device IDs, names and camera indices to match yours. Ensure `--dataset.single_task` matches the task description used for synthetic data collection. You can change `--dataset.repo_id` to alter the LeRobot dataset name. The dataset will be saved under `/home/$USER/.cache/huggingface/lerobot/<repo_id>`.

The command starts the Rerun viewer and teleoperation for both arms. Keep the following in mind while recording pick-and-place demonstrations:

* Recording of the current episode starts immediately upon command execution; be prepared or you will need to re-record.
* Each episode's recording has three sequential states:
  1. **Demonstration recording** (60s) — Record the task.
  2. **Scene Reset** (60s) — Perform randomization and robot/object resets. Rerun displays signals, but no recording occurs.
  3. **Data Saving** (approx. 5s) — Saves the recording to a LeRobot dataset. Rerun temporarily freezes; no recording occurs.
* Right Arrow (→) skips to the next state. You cannot skip State 3 (the saving stage); pressing it then could corrupt the episode.
* Left Arrow (←) during State 1 cancels the current recording, giving 60 seconds to reset the scene before recording restarts. Use this if you make a mistake.
* **ESC** stops recording and saves all currently recorded content. Use it after a completed, successful episode to avoid including unwanted "garbage" data.
* Collecting multiple small, separate LeRobot datasets might be easier; they can be combined for GR00T training later.

## Step 3. Prepare Datasets for Training

After creating the datasets, copy the `modality.json` file generated during synthetic data creation (e.g., `/home/$USER/.cache/huggingface/lerobot/spark/scrub-nurse-sim/meta/modality.json`) to each dataset's `meta` folder. This file is essential for GR00T model training.
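For example, a minimal sketch assuming the default repo IDs used in this playbook (`spark/scrub-nurse-sim` for the synthetic dataset and `spark/scrub-nurse-real` for the real-world dataset); repeat for every real-world dataset you collected:

```shell
# Copy the synthetic dataset's modality.json into the real-world dataset's meta folder
cp /home/$USER/.cache/huggingface/lerobot/spark/scrub-nurse-sim/meta/modality.json \
   /home/$USER/.cache/huggingface/lerobot/spark/scrub-nurse-real/meta/
```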
Collecting 20 real-world episodes should be sufficient for this tutorial.

## Part 4: GR00T N1.5 Fine-Tuning

## Step 1. Launch Docker Container

Run the following command on DGX Spark to start a Docker container:

```shell
docker run -it --gpus all --privileged --rm \
    --ipc=host \
    --network=host \
    --ulimit memlock=-1 \
    --ulimit stack=67108864 \
    --entrypoint=bash \
    -e "NVIDIA_VISIBLE_DEVICES=all" \
    -e "PYTHONPATH=<path-to-i4h-workflows>/workflows/so_arm_starter/scripts" \
    -v /dev:/dev \
    -v /home/"$USER"/.cache/huggingface/lerobot:/root/.cache/huggingface/lerobot \
    -v $(pwd):/workspace \
    -w /workspace \
    soarm-dgx
```

We mount `/home/"$USER"/.cache/huggingface/lerobot` into the container so the previous calibration files and datasets are accessible.

## Step 2. Download Pretrained Model

Download our pretrained GR00T N1.5 model [here](https://github.com/isaac-for-healthcare/i4h-workflows/blob/main/workflows/so_arm_starter/README.md#-running-workflows). The model was trained on 70 simulated and 5 real episodes. This model will likely require fine-tuning due to variations in your robot hardware, calibration, and task setup.

## Step 3. Run GR00T N1.5 Fine-Tuning

Run the following command to start GR00T N1.5 fine-tuning:

```shell
PYTHONWARNINGS="ignore::UserWarning" python -m training.gr00t_n1_5.train \
    --dataset_path <dataset-1> <dataset-2> ... \
    --output_dir /workspace/training-output/ \
    --data_config so100_dualcam \
    --base-model-path <pretrained-gr00t-model> \
    --max-steps 30000 \
    --save-steps 2000
```

Change `--base-model-path` to the pretrained model path. Experiment with `--max-steps` and `--save-steps`; we found 30,000 steps typically sufficient for convergence. On DGX Spark, 30,000 steps should take around 24 hours.

You can use TensorBoard to monitor training progress.
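A minimal sketch, assuming the trainer writes TensorBoard event files under the `--output_dir` used above and that you run this inside the training container (or any environment where TensorBoard is installed):

```shell
# Install TensorBoard if needed, then point it at the training output directory
pip install tensorboard
tensorboard --logdir /workspace/training-output --bind_all
```

Then open http://localhost:6006 in a browser on DGX Spark.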
## Part 5: Deploying Trained Robotic Policy

## Step 1. Convert Model to TensorRT Format

To get optimal inference performance, convert the fine-tuned GR00T N1.5 model to [TensorRT](https://developer.nvidia.com/tensorrt) format.

Open a terminal window and create the same Docker container as in Part 4. Then, run the following commands:

```shell
python -m policy_runner.gr00tn1_5.trt.export_onnx --ckpt_path <fine-tuned-gr00t-model-path>
bash <path-to-i4h-workflows>/workflows/so_arm_starter/scripts/policy_runner/gr00tn1_5/trt/build_engine.sh
```

This generates a `gr00t_engine` folder that contains the converted TensorRT model. Avoid running heavy compute or graphics tasks on DGX Spark during the conversion.

## Step 2. Deploy in Isaac Sim

To deploy the trained policy model in Isaac Sim, an [RTI DDS](https://www.rti.com/products/dds-standard) license file is required for communication between the different modules. Get a professional or evaluation license from [here](https://www.rti.com/get-connext).

Open a new terminal window and create the same Docker container as in Part 4. First, set the `RTI_LICENSE_FILE` environment variable:

```shell
export RTI_LICENSE_FILE=<path-to-rti-license-file>
```

Then, run the following command:

```shell
python -m policy_runner.run_policy \
    --ckpt_path=<fine-tuned-gr00t-model-path> \
    --task_description="Grip the scissors and put them into the tray." \
    --trt \
    --trt_engine_path=<fine-tuned-gr00t-tensorrt-model>
```

This loads the GR00T model for inference in the background.

Open another terminal window. Activate the `so_arm_starter` conda environment and set `PYTHONPATH` and `RTI_LICENSE_FILE`:

```shell
conda activate so_arm_starter
export PYTHONPATH=<path-to-i4h-workflows>/workflows/so_arm_starter/scripts
export RTI_LICENSE_FILE=<path-to-rti-license-file>
```

Then, run the following command in the terminal:

```shell
python -m simulation.environments.sim_with_dds --enable_cameras
```

Isaac Sim opens and loads the pick-and-place scene, and the simulated robot then executes the task autonomously, driven by the GR00T N1.5 policy model.

## Step 3. Deploy in Real World

Ensure the follower arm, wrist camera, and room camera are connected to DGX Spark.

Launch the same Docker container as in Part 4. Find and modify the configuration file under `<path-to-i4h-workflows>/workflows/so_arm_starter/scripts/holoscan_apps/soarm_robot_config.yaml` to update the follower arm's device ID, name, camera indices, and the fine-tuned GR00T model path. Then, run the following command:

```shell
python -m holoscan_apps.gr00t_inference_app \
    --config <path-to-i4h-workflows>/workflows/so_arm_starter/scripts/holoscan_apps/soarm_robot_config.yaml
```

This command launches an efficient GR00T N1.5 inference application using the [NVIDIA Holoscan SDK](https://github.com/nvidia-holoscan/holoscan-sdk). The follower arm will execute the task autonomously shortly after.

## Conclusion

This tutorial demonstrated the end-to-end workflow of developing and deploying an autonomous healthcare robot on a single **NVIDIA DGX Spark**. Leveraging **NVIDIA Isaac For Healthcare**, we consolidated the 3-computers workflow of synthetic data generation, GR00T N1.5 training, and robotic policy deployment onto one powerful hardware platform. This workflow highlights the efficiency of DGX Spark for accelerating the physical AI development pipeline, making the creation and deployment of intelligent healthcare robots more streamlined and accessible.
@@ -1,6 +1,6 @@
 # Run models with llama.cpp on DGX Spark
 
-> Build llama.cpp with CUDA and serve models via an OpenAI-compatible API (Gemma 4 31B IT as example)
+> Build llama.cpp with CUDA and serve models via an OpenAI-compatible API (Qwen3.6 as example)
 
 
 ## Table of Contents
@@ -17,15 +17,15 @@
 
 [llama.cpp](https://github.com/ggml-org/llama.cpp) is a lightweight C/C++ inference stack for large language models. You build it with CUDA so tensor work runs on the DGX Spark GB10 GPU, then load GGUF weights and expose chat through `llama-server`’s OpenAI-compatible HTTP API.
 
-This playbook walks through that stack end to end. As the model example, it uses **Gemma 4 31B IT** - a frontier reasoning model built by Google DeepMind that llama.cpp supports, with strengths in coding, agentic workflows, and fine-tuning. The instructions download its **F16** GGUF from Hugging Face. The same build and server steps apply to other GGUFs (including other sizes in the support matrix below).
+This playbook walks through that stack end to end using **Qwen3.6** as the hands-on example: a current-generation family that runs well from quantized GGUF on Spark. Checkpoint choices and paths for all supported models are summarized in the matrix below; commands are in the instructions.
 
 ## What you'll accomplish
 
-You will build llama.cpp with CUDA for GB10, download a Gemma 4 31B IT model checkpoint, and run **`llama-server`** with GPU offload. You get:
+You will build llama.cpp with CUDA for GB10, download a **Qwen3.6** example checkpoint, and run **`llama-server`** with GPU offload. You get:
 
 - Local inference through llama.cpp (no separate Python inference framework required)
 - An OpenAI-compatible `/v1/chat/completions` endpoint for tools and apps
-- A concrete validation that **Gemma 4 31B IT** runs on this stack on DGX Spark
+- A concrete validation that the **Qwen3.6** example runs on this stack on DGX Spark
 
 ## What to know before starting
 
@@ -39,8 +39,8 @@ You will build llama.cpp with CUDA for GB10, download a Gemma 4 31B IT model che
 **Hardware requirements**
 
 - NVIDIA DGX Spark with GB10 GPU
-- Sufficient unified memory for the F16 checkpoint (on the order of **~62GB** for weights alone; more when KV cache and runtime overhead are included)
-- At least **~70GB** free disk for the F16 download plus build artifacts (use a smaller quant from the same repo if you need less disk and VRAM)
+- Sufficient unified memory for the example **UD-Q4_K_M** MoE checkpoint (weights on the order of **~20GB**, plus KV cache and runtime overhead—scale up if you pick a larger quant or longer context)
+- At least **~30GB** free disk for the example download plus build artifacts (more if you keep multiple GGUFs)
 
 **Software requirements**
 
@@ -50,12 +50,14 @@ You will build llama.cpp with CUDA for GB10, download a Gemma 4 31B IT model che
 - CUDA Toolkit: `nvcc --version`
 - Network access to GitHub and Hugging Face
 
-## Model Support Matrix
+## Model support matrix
 
-The following models are supported with llama.cpp on Spark. All listed models are available and ready to use:
+The following models are supported with llama.cpp on Spark. The instructions use the **Qwen3.6** example row by default.
 
 | Model | Support Status | HF Handle |
 |-------|----------------|-----------|
+| **Qwen3.6-35B-A3B** (example walkthrough) | ✅ | `unsloth/Qwen3.6-35B-A3B-GGUF/Qwen3.6-35B-A3B-UD-Q4_K_M.gguf` |
+| **Qwen3.6-27B** | ✅ | `unsloth/Qwen3.6-27B-GGUF/Qwen3.6-27B-Q4_K_M.gguf` |
 | **Gemma 4 31B IT** | ✅ | `ggml-org/gemma-4-31B-it-GGUF` |
 | **Gemma 4 26B A4B IT** | ✅ | `ggml-org/gemma-4-26B-A4B-it-GGUF` |
 | **Gemma 4 E4B IT** | ✅ | `ggml-org/gemma-4-E4B-it-GGUF` |
@@ -64,17 +66,17 @@ The following models are supported with llama.cpp on Spark. All listed models ar
 
 ## Time & risk
 
-* **Estimated time:** About 30 minutes, plus downloading the ~62GB example
+* **Estimated time:** About 30 minutes, plus downloading the example GGUF (~20GB order of magnitude for the default quant)
 * **Risk level:** Low — build is local to your clone; no system-wide installs required for the steps below
 * **Rollback:** Remove the `llama.cpp` clone and the model directory under `~/models/` to reclaim disk space
-* **Last updated:** 04/02/2026
-  * First Publication
+* **Last updated:** 04/27/2026
+  * We now walk you through Qwen3.6 first; other models remain in the list
 
 ## Instructions
 
 ## Step 1. Verify prerequisites
 
-This walkthrough uses **Gemma 4 31B IT** (`gemma-4-31B-it-f16.gguf`) as the example checkpoint. You can substitute another GGUF from [`ggml-org/gemma-4-31B-it-GGUF`](https://huggingface.co/ggml-org/gemma-4-31B-it-GGUF) (for example `Q4_K_M` or `Q8_0`) by changing the `hf download` filename and `--model` path in later steps.
+The **example** checkpoint is **`Qwen3.6-35B-A3B-UD-Q4_K_M.gguf`** from Hugging Face repo **`unsloth/Qwen3.6-35B-A3B-GGUF`** (full handle: `unsloth/Qwen3.6-35B-A3B-GGUF/Qwen3.6-35B-A3B-UD-Q4_K_M.gguf`). The other supported file is **`Qwen3.6-27B-Q4_K_M.gguf`** from **`unsloth/Qwen3.6-27B-GGUF`**—use the same build and server steps, changing `hf download` and `--model` paths (see the [overview model matrix](overview.md)).
 
 Ensure the required tools are installed:
 
@@ -121,25 +123,25 @@ make -j8
 
 The build usually takes on the order of 5–10 minutes. When it finishes, binaries such as `llama-server` appear under `build/bin/`.
 
-## Step 4. Download Gemma 4 31B IT GGUF (supported model example)
+## Step 4. Download example Qwen3.6-35B-A3B GGUF
 
-llama.cpp loads models in **GGUF** format. **gemma-4-31B-it** is available in GGUF from Hugging Face; this playbook uses a F16 variant that balances quality and memory on GB10-class hardware.
+llama.cpp loads models in **GGUF** format. This playbook uses the **UD-Q4_K_M** quantized MoE checkpoint from Unsloth, which fits comfortably on DGX Spark GB10 unified memory while keeping strong quality.
 
 ```bash
-hf download ggml-org/gemma-4-31B-it-GGUF \
-  gemma-4-31B-it-f16.gguf \
-  --local-dir ~/models/gemma-4-31B-it-GGUF
+hf download unsloth/Qwen3.6-35B-A3B-GGUF \
+  Qwen3.6-35B-A3B-UD-Q4_K_M.gguf \
+  --local-dir ~/models/Qwen3.6-35B-A3B-GGUF
 ```
 
-The F16 file is large (**~62GB**). The download can be resumed if interrupted.
+The file is on the order of **~20GB** (exact size may vary). The download can be resumed if interrupted.
 
-## Step 5. Start llama-server with Gemma 4 31B IT
+## Step 5. Start llama-server with Qwen3.6-35B-A3B
 
 From your `llama.cpp/build` directory, launch the OpenAI-compatible server with GPU offload:
 
 ```bash
 ./bin/llama-server \
-  --model ~/models/gemma-4-31B-it-GGUF/gemma-4-31B-it-f16.gguf \
+  --model ~/models/Qwen3.6-35B-A3B-GGUF/Qwen3.6-35B-A3B-UD-Q4_K_M.gguf \
   --host 0.0.0.0 \
   --port 30000 \
   --n-gpu-layers 99 \
@@ -162,7 +164,7 @@ llama_new_context_with_model: n_ctx = 8192
 main: server is listening on 0.0.0.0:30000
 ```
 
-**Keep this terminal open** while testing. Large GGUFs can take several minutes to load; until you see `server is listening`, nothing accepts connections on port 30000 (see Troubleshooting if `curl` reports connection refused).
+**Keep this terminal open** while testing. Large GGUFs can take a minute or more to load; until you see `server is listening`, nothing accepts connections on port 30000 (see Troubleshooting if `curl` reports connection refused).
 
 ## Step 6. Test the API
 
@@ -195,7 +197,7 @@ Example shape of the response (fields vary by llama.cpp version; `message` may i
     }
   ],
   "created": 1765916539,
-  "model": "gemma-4-31B-it-f16.gguf",
+  "model": "Qwen3.6-35B-A3B-UD-Q4_K_M.gguf",
   "object": "chat.completion",
   "usage": {
     "completion_tokens": 100,
@@ -209,15 +211,15 @@
 }
 ```
 
-## Step 7. Longer completion (with example model)
+## Step 7. Longer completion (with Qwen3.6)
 
-Try a slightly longer prompt to confirm stable generation with **Gemma 4 31B IT**:
+Try a slightly longer prompt to confirm stable generation with **Qwen3.6-35B-A3B**:
 
 ```bash
 curl -X POST http://127.0.0.1:30000/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "gemma4",
+    "model": "qwen3",
     "messages": [{"role": "user", "content": "Solve this step by step: If a train travels 120 miles in 2 hours, what is its average speed?"}],
     "max_tokens": 500
   }'
@@ -231,7 +233,7 @@ To remove this tutorial’s artifacts:
 
 ```bash
 rm -rf ~/llama.cpp
-rm -rf ~/models/gemma-4-31B-it-GGUF
+rm -rf ~/models/Qwen3.6-35B-A3B-GGUF
 ```
 
 Deactivate the Python venv if you no longer need `hf`:
@@ -54,6 +54,9 @@ You'll deploy LM Studio on an NVIDIA DGX Spark device to run gpt-oss 120B, and u
 - Laptop and DGX Spark must be on the same local network
 - Network access to download packages and models
 
+## Model support matrix
+
+To explore supported models in LM Studio, check out [LM Studio model catalog](https://lmstudio.ai/models) page.
 
 ## LM Link (optional)
 
 [LM Link](https://lmstudio.ai/link) lets you **use your local models remotely**. You link machines (e.g. your DGX Spark and your laptop), then load models on the Spark and use them from the laptop as if they were local.
@@ -80,8 +83,8 @@ All required assets can be found below. These sample scripts can be used in Step
 * **Rollback:**
   * Downloaded models can be removed manually from the models directory.
   * Uninstall LM Studio or llmster
-* **Last Updated:** 03/12/2026
-  * Add instructions for LM Link features
+* **Last Updated:** 04/27/2026
+  * Introduce Qwen3.6 35B as example
 
 ## Instructions
 
@@ -153,7 +156,7 @@ LM Link is in **Preview** and is free for up to 2 users, 5 devices each. For det
 As an example, let's download and run gpt-oss 120B, one of the best open source models from OpenAI. This model is too large for many laptops due to memory limitations, which makes this a fantastic use case for the Spark.
 
 ```bash
-lms get openai/gpt-oss-120b
+lms get qwen/qwen3.6-35b-a3b
 ```
 
 This download will take a while due to its large size. Verify that the model has been successfully downloaded by listing your models:
@@ -167,7 +170,7 @@ lms ls
 Load the model on your Spark so that it is ready to respond to requests from your laptop.
 
 ```bash
-lms load openai/gpt-oss-120b
+lms load qwen/qwen3.6-35b-a3b
 ```
 
 ## Step 6. Set up a simple program that uses LM Studio SDK on the laptop
@@ -297,7 +297,7 @@ Expected: JSON listing `nemotron-3-super:120b`.
 Still inside the sandbox, send a test message:
 
 ```bash
-openclaw agent --agent main --local -m "hello" --session-id test
+openclaw agent --agent main -m "hello" --session-id test
 ```
 
 The agent will respond using Nemotron 3 Super. First responses may take 30--90 seconds for a 120B parameter model running locally.
@@ -326,7 +326,7 @@ exit
 http://127.0.0.1:18789/#token=<long-token-here>
 ```
 
-**If accessing the Web UI from a remote machine**, you need to set up port forwarding.
+**If accessing the Web UI from a remote machine**, you need to set up an SSH tunnel. The NemoClaw onboard wizard already created the port 18789 forward on the Spark, so you only need to tunnel from your remote machine.
 
 First, find your Spark's IP address. On the Spark, run:
 
@@ -336,13 +336,7 @@ hostname -I | awk '{print $1}'
 
 This prints the primary IP address (e.g. `192.168.1.42`). You can also find it in **Settings > Wi-Fi** or **Settings > Network** on the Spark's desktop, or check your router's connected-devices list.
 
-Start the port forward on the Spark host:
-
-```bash
-openshell forward start 18789 my-assistant --background
-```
-
-Then from your remote machine, create an SSH tunnel to the Spark (replace `<your-spark-ip>` with the IP address from above):
+From your remote machine, create an SSH tunnel to the Spark (replace `<your-spark-ip>` with the IP address from above):
 
 ```bash
 ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>
@@ -357,6 +351,13 @@ http://127.0.0.1:18789/#token=<long-token-here>
 > [!IMPORTANT]
 > Use `127.0.0.1`, not `localhost` -- the gateway origin check requires an exact match.
 
+> [!NOTE]
+> If the Web UI fails to load and the port forward may be stale, reset it on the Spark host:
+> ```bash
+> openshell forward stop 18789 my-assistant || true
+> openshell forward start 18789 my-assistant --background
+> ```
+
 ---
 
 ## Phase 3: Telegram Bot
@@ -372,15 +373,7 @@ Open Telegram, find [@BotFather](https://t.me/BotFather), send `/newbot`, and fo
 Make sure you are on the **host** (not inside the sandbox). If you are inside the sandbox, run `exit` first.
 
-Set the required environment variables. Replace the placeholders with your actual values. `SANDBOX_NAME` must match the sandbox name you chose during the onboard wizard:
-
-```bash
-export TELEGRAM_BOT_TOKEN=<your-bot-token>
-export SANDBOX_NAME=my-assistant
-export NVIDIA_API_KEY=<your-nvidia-api-key>
-```
-
-Add the Telegram network policy to the sandbox:
+Add the Telegram network policy to the sandbox so it can reach the Telegram API:
 
 ```bash
 nemoclaw my-assistant policy-add
@@ -388,31 +381,42 @@ nemoclaw my-assistant policy-add
 
 When prompted, select `telegram` and hit **Y** to confirm.
 
-Start the Telegram bridge.
+The Telegram bridge uses cloudflared to expose a public webhook URL. Install cloudflared on the Spark host (arm64):
+
+```bash
+curl -L --output cloudflared.deb \
+  https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
+sudo dpkg -i cloudflared.deb
+```
+
+Set the bot token and start auxiliary services:
 
 ```bash
 export TELEGRAM_BOT_TOKEN=<your-bot-token>
 nemoclaw start
 ```
 
-The Telegram bridge starts only when the `TELEGRAM_BOT_TOKEN` environment variable is set. Verify the services are running:
+The Telegram bridge starts only when the `TELEGRAM_BOT_TOKEN` environment variable is set. Verify the services are running and note the public URL:
 
 ```bash
 nemoclaw status
 ```
 
+You should see `● cloudflared` with a `trycloudflare.com` public URL (e.g. `https://assembled-peer-persian-kitty.trycloudflare.com`).
+
 Open Telegram, find your bot, and send it a message. The bot forwards it to the agent and replies.
 
+> [!NOTE]
+> If `nemoclaw start` prints `cloudflared not found — no public URL`, the cloudflared install above did not complete successfully. Re-run the install, then restart services:
+> ```bash
+> nemoclaw stop && nemoclaw start
+> ```
+
 > [!NOTE]
 > The first response may take 30--90 seconds for a 120B parameter model running locally.
 
 > [!NOTE]
-> If the bridge does not appear in `nemoclaw status`, make sure `TELEGRAM_BOT_TOKEN` is exported in the same shell session where you run `nemoclaw start`. You can also try stopping and restarting:
-> ```bash
-> nemoclaw stop
-> export TELEGRAM_BOT_TOKEN=<your-bot-token>
-> nemoclaw start
-> ```
+> If the bridge does not appear in `nemoclaw status`, make sure `TELEGRAM_BOT_TOKEN` is exported in the same shell session where you run `nemoclaw start`.
 
 > [!NOTE]
 > For details on restricting which Telegram chats can interact with the agent, see the [NemoClaw Telegram bridge documentation](https://docs.nvidia.com/nemoclaw/latest/deployment/set-up-telegram-bridge.html).