Compare commits


1 Commit

Author SHA1 Message Date
TharunGaneshram
09fada6242
Merge e542e522c5 into cfbe0f9631 2026-04-02 15:48:02 +02:00
9 changed files with 23 additions and 328 deletions

View File

@ -31,7 +31,6 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
- [Install and Use Isaac Sim and Isaac Lab](nvidia/isaac/) - [Install and Use Isaac Sim and Isaac Lab](nvidia/isaac/)
- [Optimized JAX](nvidia/jax/) - [Optimized JAX](nvidia/jax/)
- [Live VLM WebUI](nvidia/live-vlm-webui/) - [Live VLM WebUI](nvidia/live-vlm-webui/)
- [Run models with llama.cpp on DGX Spark](nvidia/llama-cpp/)
- [LLaMA Factory](nvidia/llama-factory/) - [LLaMA Factory](nvidia/llama-factory/)
- [LM Studio on DGX Spark](nvidia/lm-studio/) - [LM Studio on DGX Spark](nvidia/lm-studio/)
- [Build and Deploy a Multi-Agent Chatbot](nvidia/multi-agent-chatbot/) - [Build and Deploy a Multi-Agent Chatbot](nvidia/multi-agent-chatbot/)

View File

@ -1,269 +0,0 @@
# Run models with llama.cpp on DGX Spark
> Build llama.cpp with CUDA and serve models via an OpenAI-compatible API (Gemma 4 31B IT as example)
## Table of Contents
- [Overview](#overview)
- [Instructions](#instructions)
- [Troubleshooting](#troubleshooting)
---
## Overview
## Basic idea
[llama.cpp](https://github.com/ggml-org/llama.cpp) is a lightweight C/C++ inference stack for large language models. You build it with CUDA so tensor work runs on the DGX Spark GB10 GPU, then load GGUF weights and expose chat through `llama-server`'s OpenAI-compatible HTTP API.
This playbook walks through that stack end to end. As the model example, it uses **Gemma 4 31B IT** - a frontier reasoning model built by Google DeepMind that llama.cpp supports, with strengths in coding, agentic workflows, and fine-tuning. The instructions download its **F16** GGUF from Hugging Face. The same build and server steps apply to other GGUFs (including other sizes in the support matrix below).
## What you'll accomplish
You will build llama.cpp with CUDA for GB10, download a Gemma 4 31B IT model checkpoint, and run **`llama-server`** with GPU offload. You get:
- Local inference through llama.cpp (no separate Python inference framework required)
- An OpenAI-compatible `/v1/chat/completions` endpoint for tools and apps
- A concrete validation that **Gemma 4 31B IT** runs on this stack on DGX Spark
## What to know before starting
- Basic familiarity with Linux command line and terminal commands
- Understanding of git and building from source with CMake
- Basic knowledge of REST APIs and cURL for testing
- Familiarity with Hugging Face Hub for downloading GGUF files
## Prerequisites
**Hardware requirements**
- NVIDIA DGX Spark with GB10 GPU
- Sufficient unified memory for the F16 checkpoint (on the order of **~62GB** for weights alone; more when KV cache and runtime overhead are included)
- At least **~70GB** free disk for the F16 download plus build artifacts (use a smaller quant from the same repo if you need less disk and VRAM)
**Software requirements**
- NVIDIA DGX OS
- Git: `git --version`
- CMake (3.14+): `cmake --version`
- CUDA Toolkit: `nvcc --version`
- Network access to GitHub and Hugging Face
## Model Support Matrix
The following models are supported with llama.cpp on Spark. All listed models are available and ready to use:
| Model | Support Status | HF Handle |
|-------|----------------|-----------|
| **Gemma 4 31B IT** | ✅ | `ggml-org/gemma-4-31B-it-GGUF` |
| **Gemma 4 26B A4B IT** | ✅ | `ggml-org/gemma-4-26B-A4B-it-GGUF` |
| **Gemma 4 E4B IT** | ✅ | `ggml-org/gemma-4-E4B-it-GGUF` |
| **Gemma 4 E2B IT** | ✅ | `ggml-org/gemma-4-E2B-it-GGUF` |
| **Nemotron-3-Nano** | ✅ | `unsloth/Nemotron-3-Nano-30B-A3B-GGUF` |
## Time & risk
* **Estimated time:** About 30 minutes, plus downloading the ~62GB example
* **Risk level:** Low — build is local to your clone; no system-wide installs required for the steps below
* **Rollback:** Remove the `llama.cpp` clone and the model directory under `~/models/` to reclaim disk space
* **Last updated:** 04/02/2026
* First Publication
## Instructions
## Step 1. Verify prerequisites
This walkthrough uses **Gemma 4 31B IT** (`gemma-4-31B-it-f16.gguf`) as the example checkpoint. You can substitute another GGUF from [`ggml-org/gemma-4-31B-it-GGUF`](https://huggingface.co/ggml-org/gemma-4-31B-it-GGUF) (for example `Q4_K_M` or `Q8_0`) by changing the `hf download` filename and `--model` path in later steps.
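For example, Step 4's download would change only in the filename. A minimal sketch, assuming the repo's `Q4_K_M` file follows the same naming pattern as the F16 file used later (check the repo's file list on Hugging Face first):
```bash
# Hypothetical filename for the smaller quant; verify it exists in the repo
hf download ggml-org/gemma-4-31B-it-GGUF \
    gemma-4-31B-it-Q4_K_M.gguf \
    --local-dir ~/models/gemma-4-31B-it-GGUF
```
You would then pass that file to `--model` in Step 5; everything else stays the same.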
Ensure the required tools are installed:
```bash
git --version
cmake --version
nvcc --version
```
All commands should return version information. If any are missing, install them before continuing.
Install the Hugging Face CLI:
```bash
python3 -m venv llama-cpp-venv
source llama-cpp-venv/bin/activate
pip install -U "huggingface_hub[cli]"
```
Verify installation:
```bash
hf version
```
## Step 2. Clone the llama.cpp repository
Clone upstream llama.cpp—the framework you are building:
```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
```
## Step 3. Build llama.cpp with CUDA
Configure CMake with CUDA and the GB10's **sm_121** architecture so GGML's CUDA backend matches your GPU:
```bash
mkdir build && cd build
cmake .. -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="121" -DLLAMA_CURL=OFF
make -j8
```
The build usually takes on the order of 5–10 minutes. When it finishes, binaries such as `llama-server` appear under `build/bin/`.
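As a quick sanity check before moving on, you can confirm the server binary was produced (run from the `build` directory):
```bash
# Confirm the server binary exists and responds to --help
ls -lh bin/llama-server
./bin/llama-server --help | head -n 5
```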
## Step 4. Download Gemma 4 31B IT GGUF (supported model example)
llama.cpp loads models in **GGUF** format. **gemma-4-31B-it** is available in GGUF from Hugging Face; this playbook uses an F16 variant that balances quality and memory on GB10-class hardware.
```bash
hf download ggml-org/gemma-4-31B-it-GGUF \
gemma-4-31B-it-f16.gguf \
--local-dir ~/models/gemma-4-31B-it-GGUF
```
The F16 file is large (**~62GB**). The download can be resumed if interrupted.
## Step 5. Start llama-server with Gemma 4 31B IT
From your `llama.cpp/build` directory, launch the OpenAI-compatible server with GPU offload:
```bash
./bin/llama-server \
--model ~/models/gemma-4-31B-it-GGUF/gemma-4-31B-it-f16.gguf \
--host 0.0.0.0 \
--port 30000 \
--n-gpu-layers 99 \
--ctx-size 8192 \
--threads 8
```
**Parameters (short):**
- `--host` / `--port`: bind address and port for the HTTP API
- `--n-gpu-layers 99`: offload layers to the GPU (adjust if you use a different model)
- `--ctx-size`: context length (can be increased up to model/server limits; uses more memory)
- `--threads`: CPU threads for non-GPU work
You should see log lines similar to:
```
llama_new_context_with_model: n_ctx = 8192
...
main: server is listening on 0.0.0.0:30000
```
**Keep this terminal open** while testing. Large GGUFs can take several minutes to load; until you see `server is listening`, nothing accepts connections on port 30000 (see Troubleshooting if `curl` reports connection refused).
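If you would rather script the wait, llama.cpp's server exposes a `/health` endpoint on the same port that returns success once the model has loaded (behavior can vary across versions, so treat this as a sketch):
```bash
# Poll /health until llama-server reports ready (non-2xx while still loading)
until curl -sf http://127.0.0.1:30000/health > /dev/null; do
    echo "waiting for llama-server to finish loading..."
    sleep 10
done
echo "llama-server is ready"
```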
## Step 6. Test the API
Use a **second terminal on the same machine** that runs `llama-server` (for example another SSH session into DGX Spark). If you run `curl` on your laptop while the server runs only on Spark, use the Spark hostname or IP instead of `localhost`.
```bash
curl -X POST http://127.0.0.1:30000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma4",
"messages": [{"role": "user", "content": "New York is a great city because..."}],
"max_tokens": 100
}'
```
If you see `curl: (7) Failed to connect`, the server is still loading, the process exited (check the server log for OOM or path errors), or you are not curling the host that runs `llama-server`.
Example shape of the response (fields vary by llama.cpp version; `message` may include extra keys):
```json
{
"choices": [
{
"finish_reason": "length",
"index": 0,
"message": {
"role": "assistant",
"content": "New York is a great city because it's a living, breathing collage of cultures, ideas, and possibilities—all stacked into one vibrant, neversleeping metropolis. Here are just a few reasons that many people ("
}
}
],
"created": 1765916539,
"model": "gemma-4-31B-it-f16.gguf",
"object": "chat.completion",
"usage": {
"completion_tokens": 100,
"prompt_tokens": 25,
"total_tokens": 125
},
"id": "chatcmpl-...",
"timings": {
...
}
}
```
## Step 7. Longer completion (with example model)
Try a slightly longer prompt to confirm stable generation with **Gemma 4 31B IT**:
```bash
curl -X POST http://127.0.0.1:30000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gemma4",
"messages": [{"role": "user", "content": "Solve this step by step: If a train travels 120 miles in 2 hours, what is its average speed?"}],
"max_tokens": 500
}'
```
## Step 8. Cleanup
Stop the server with `Ctrl+C` in the terminal where it is running.
To remove this tutorial's artifacts:
```bash
rm -rf ~/llama.cpp
rm -rf ~/models/gemma-4-31B-it-GGUF
```
Deactivate the Python venv if you no longer need `hf`:
```bash
deactivate
```
## Step 9. Next steps
1. **Context length:** Increase `--ctx-size` for longer chats (watch memory; 1M-token class contexts are possible only when the build, model, and hardware allow).
2. **Other models:** Point `--model` at any compatible GGUF; the llama.cpp server API stays the same.
3. **Integrations:** Point Open WebUI, Continue.dev, or custom clients at `http://<spark-host>:30000/v1` using the OpenAI client pattern.
The server implements the usual OpenAI-style chat features your llama.cpp build enables (including streaming and tool-related flows where supported).
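For example, streaming uses the same endpoint with `"stream": true`; the response then arrives as incremental `data:` chunks rather than a single JSON body (field names vary slightly by llama.cpp version):
```bash
curl -N -X POST http://127.0.0.1:30000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "gemma4",
        "messages": [{"role": "user", "content": "Name three boroughs of New York."}],
        "max_tokens": 120,
        "stream": true
    }'
```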
## Troubleshooting
| Symptom | Cause | Fix |
|---------|-------|-----|
| `cmake` fails with "CUDA not found" | CUDA toolkit not in PATH | Run `export PATH=/usr/local/cuda/bin:$PATH` and re-run CMake from a clean build directory |
| Build errors mentioning wrong GPU arch | CMake `CMAKE_CUDA_ARCHITECTURES` does not match GB10 | Use `-DCMAKE_CUDA_ARCHITECTURES="121"` for DGX Spark GB10 as in the instructions |
| GGUF download fails or stalls | Network or Hugging Face availability | Re-run `hf download`; it resumes partial files |
| "CUDA out of memory" when starting `llama-server` | Model too large for current context or VRAM | Lower `--ctx-size` (e.g. 4096) or use a smaller quantization from the same repo |
| Server runs but latency is high | Layers not on GPU | Confirm `--n-gpu-layers` is high enough for your model; check `nvidia-smi` during a request |
| `curl: (7) Failed to connect` on port 30000 | No listener yet, wrong host, or crash | Wait for `server is listening`; run `curl` on the same host as `llama-server` (or the Spark's IP); run `ss -tln` and confirm `:30000`; read server stderr for OOM or bad `--model` path |
| Chat API errors or empty replies | Wrong `--model` path or incompatible GGUF | Verify the path to the `.gguf` file; update llama.cpp if the GGUF requires a newer format |
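For the out-of-memory row specifically, relaunching with a reduced context and a smaller quant usually resolves it. A sketch, assuming you downloaded a `Q4_K_M` file as noted in Step 1 (the filename is an assumption; use whichever quant you actually have):
```bash
./bin/llama-server \
    --model ~/models/gemma-4-31B-it-GGUF/gemma-4-31B-it-Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 30000 \
    --n-gpu-layers 99 \
    --ctx-size 4096 \
    --threads 8
```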
> [!NOTE]
> DGX Spark uses Unified Memory Architecture (UMA), which allows flexible sharing between GPU and CPU memory. Some software is still catching up to UMA behavior. If you hit memory pressure unexpectedly, you can try flushing the page cache (use with care on shared systems):
```bash
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```
For the latest platform issues, see the [DGX Spark known issues](https://docs.nvidia.com/dgx/dgx-spark/known-issues.html) documentation.

View File

@ -48,8 +48,8 @@ All necessary files for the playbook can be found [here on GitHub](https://githu
* **Duration:** 45-90 minutes for complete setup and initial model fine-tuning * **Duration:** 45-90 minutes for complete setup and initial model fine-tuning
* **Risks:** Model downloads can be large (several GB), ARM64 package compatibility issues may require troubleshooting, distributed training setup complexity increases with multi-node configurations * **Risks:** Model downloads can be large (several GB), ARM64 package compatibility issues may require troubleshooting, distributed training setup complexity increases with multi-node configurations
* **Rollback:** Virtual environments can be completely removed; no system-level changes are made to the host system beyond package installations. * **Rollback:** Virtual environments can be completely removed; no system-level changes are made to the host system beyond package installations.
* **Last Updated:** 03/04/2026 * **Last Updated:** 01/15/2026
* Recommend running Nemo finetune workflow via Docker * Fix qLoRA fine-tuning workflow
## Instructions ## Instructions

View File

@ -172,15 +172,12 @@ Verify the NVIDIA runtime works:
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
``` ```
If you get a permission denied error on `docker`, add your user to the Docker group and activate the new group in your current session: If you get a permission denied error on `docker`, add your user to the Docker group and log out/in:
```bash ```bash
sudo usermod -aG docker $USER sudo usermod -aG docker $USER
newgrp docker
``` ```
This applies the group change immediately. Alternatively, you can log out and back in instead of running `newgrp docker`.
> [!NOTE] > [!NOTE]
> DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without `default-cgroupns-mode: host`, the gateway can fail with "Failed to start ContainerManager" errors. > DGX Spark uses cgroup v2. OpenShell's gateway embeds k3s inside Docker and needs host cgroup namespace access. Without `default-cgroupns-mode: host`, the gateway can fail with "Failed to start ContainerManager" errors.
@ -240,7 +237,7 @@ You should see `nemotron-3-super:120b` in the output.
This single command handles everything: installs Node.js (if needed), installs OpenShell, clones NemoClaw at the pinned stable release (`v0.0.1`), builds the CLI, and runs the onboard wizard to create a sandbox. This single command handles everything: installs Node.js (if needed), installs OpenShell, clones NemoClaw at the pinned stable release (`v0.0.1`), builds the CLI, and runs the onboard wizard to create a sandbox.
```bash ```bash
curl -fsSL https://www.nvidia.com/nemoclaw.sh | NEMOCLAW_INSTALL_TAG=v0.0.4 bash curl -fsSL https://www.nvidia.com/nemoclaw.sh | NEMOCLAW_INSTALL_TAG=v0.0.1 bash
``` ```
The onboard wizard walks you through setup: The onboard wizard walks you through setup:
@ -325,21 +322,13 @@ http://127.0.0.1:18789/#token=<long-token-here>
**If accessing the Web UI from a remote machine**, you need to set up port forwarding. **If accessing the Web UI from a remote machine**, you need to set up port forwarding.
First, find your Spark's IP address. On the Spark, run:
```bash
hostname -I | awk '{print $1}'
```
This prints the primary IP address (e.g. `192.168.1.42`). You can also find it in **Settings > Wi-Fi** or **Settings > Network** on the Spark's desktop, or check your router's connected-devices list.
Start the port forward on the Spark host: Start the port forward on the Spark host:
```bash ```bash
openshell forward start 18789 my-assistant --background openshell forward start 18789 my-assistant --background
``` ```
Then from your remote machine, create an SSH tunnel to the Spark (replace `<your-spark-ip>` with the IP address from above): Then from your remote machine, create an SSH tunnel to the Spark:
```bash ```bash
ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip> ssh -L 18789:127.0.0.1:18789 <your-user>@<your-spark-ip>

View File

@ -31,14 +31,12 @@ Spark & Reachy Photo Booth is an interactive and event-driven photo booth demo t
- **User position tracking** built with `facebookresearch/detectron2` and `FoundationVision/ByteTrack` - **User position tracking** built with `facebookresearch/detectron2` and `FoundationVision/ByteTrack`
- **MinIO** for storing captured/generated images as well as sharing them via QR-code - **MinIO** for storing captured/generated images as well as sharing them via QR-code
The demo is based on several services that communicate through a message bus. The demo is based on a several services that communicate through a message bus.
![Architecture diagram](assets/architecture-diagram.png) ![Architecture diagram](assets/architecture-diagram.png)
See also the walk-through video for this playbook: [Video](https://www.youtube.com/watch?v=6f1x8ReGLjc)
> [!NOTE] > [!NOTE]
> This playbook applies to Reachy Mini Lite. Reachy Mini (with on-board Raspberry Pi) might require minor adaptations. For simplicity, we'll refer to the robot as Reachy throughout this playbook. > This playbook applies to both the Reachy Mini and Reachy Mini Lite robots. For simplicity, we'll refer to the robot as Reachy throughout this playbook.
## What you'll accomplish ## What you'll accomplish
@ -59,7 +57,7 @@ You'll deploy a complete photo booth system on DGX Spark running multiple infere
> [!TIP] > [!TIP]
> Make sure your Reachy robot firmware is up to date. You can find instructions to update it [here](https://huggingface.co/spaces/pollen-robotics/Reachy_Mini). > Make sure your Reachy robot firmware is up to date. You can find instructions to update it [here](https://huggingface.co/spaces/pollen-robotics/Reachy_Mini).
**Software Requirements:** **Software Requirements:**
- The official [DGX Spark OS](https://docs.nvidia.com/dgx/dgx-spark/dgx-os.html) image including all required utilities such as Git, Docker, NVIDIA drivers, and the NVIDIA Container Toolkit - The official DGX Spark OS image including all required utilities such as Git, Docker, NVIDIA drivers, and the NVIDIA Container Toolkit
- An internet connection for the DGX Spark - An internet connection for the DGX Spark
- NVIDIA NGC Personal API Key (**`NVIDIA_API_KEY`**). [Create a key](https://org.ngc.nvidia.com/setup/api-keys) if necessary. Make sure to enable the `NGC Catalog` scope when creating the key. - NVIDIA NGC Personal API Key (**`NVIDIA_API_KEY`**). [Create a key](https://org.ngc.nvidia.com/setup/api-keys) if necessary. Make sure to enable the `NGC Catalog` scope when creating the key.
- Hugging Face access token (**`HF_TOKEN`**). [Create a token](https://huggingface.co/settings/tokens) if necessary. Make sure to create a token with _Read access to contents of all public gated repos you can access_ permission. - Hugging Face access token (**`HF_TOKEN`**). [Create a token](https://huggingface.co/settings/tokens) if necessary. Make sure to create a token with _Read access to contents of all public gated repos you can access_ permission.
@ -79,9 +77,8 @@ All required assets can be found in the [Spark & Reachy Photo Booth repository](
* **Estimated time:** 2 hours including hardware setup, container building, and model downloads * **Estimated time:** 2 hours including hardware setup, container building, and model downloads
* **Risk level:** Medium * **Risk level:** Medium
* **Rollback:** Docker containers can be stopped and removed to free resources. Downloaded models can be deleted from cache directories. Robot and peripheral connections can be safely disconnected. Network configurations can be reverted by removing custom settings. * **Rollback:** Docker containers can be stopped and removed to free resources. Downloaded models can be deleted from cache directories. Robot and peripheral connections can be safely disconnected. Network configurations can be reverted by removing custom settings.
* **Last Updated:** 04/01/2026 * **Last Updated:** 01/27/2026
* 1.0.0 First publication * 1.0.0 First Publication
* 1.0.1 Documentation improvements
## Governing terms ## Governing terms
Your use of the Spark Playbook scripts is governed by [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) and enables use of separate open source and proprietary software governed by their respective licenses: [Flux.1-Kontext NIM](https://catalog.ngc.nvidia.com/orgs/nim/teams/black-forest-labs/containers/flux.1-kontext-dev?version=1.1), [Parakeet 1.1b CTC en-US ASR NIM](https://catalog.ngc.nvidia.com/orgs/nim/teams/nvidia/containers/parakeet-1-1b-ctc-en-us?version=1.4), [TensorRT-LLM](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release?version=1.3.0rc1), [minio/minio](https://hub.docker.com/r/minio/minio), [arizephoenix/phoenix](https://hub.docker.com/r/arizephoenix/phoenix), [grafana/otel-lgtm](https://hub.docker.com/r/grafana/otel-lgtm), [Python](https://hub.docker.com/_/python), [Node.js](https://hub.docker.com/_/node), [nginx](https://hub.docker.com/_/nginx), [busybox](https://hub.docker.com/_/busybox), [UV Python Packager](https://docs.astral.sh/uv/), [Redpanda](https://www.redpanda.com/), [Redpanda Console](https://www.redpanda.com/), [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), [FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev), [FLUX.1-Kontext-dev-onnx](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev-onnx). Your use of the Spark Playbook scripts is governed by [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) and enables use of separate open source and proprietary software governed by their respective licenses: [Flux.1-Kontext NIM](https://catalog.ngc.nvidia.com/orgs/nim/teams/black-forest-labs/containers/flux.1-kontext-dev?version=1.1), [Parakeet 1.1b CTC en-US ASR NIM](https://catalog.ngc.nvidia.com/orgs/nim/teams/nvidia/containers/parakeet-1-1b-ctc-en-us?version=1.4), [TensorRT-LLM](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tensorrt-llm/containers/release?version=1.3.0rc1), [minio/minio](https://hub.docker.com/r/minio/minio), [arizephoenix/phoenix](https://hub.docker.com/r/arizephoenix/phoenix), [grafana/otel-lgtm](https://hub.docker.com/r/grafana/otel-lgtm), [Python](https://hub.docker.com/_/python), [Node.js](https://hub.docker.com/_/node), [nginx](https://hub.docker.com/_/nginx), [busybox](https://hub.docker.com/_/busybox), [UV Python Packager](https://docs.astral.sh/uv/), [Redpanda](https://www.redpanda.com/), [Redpanda Console](https://www.redpanda.com/), [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), [FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev), [FLUX.1-Kontext-dev-onnx](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev-onnx).
@ -280,7 +277,7 @@ uv sync --all-packages
Every folder suffixed by `-service` is a standalone Python program that runs in its own container. You must always start the services by interacting with the `docker-compose.yaml` at the root of the repository. You can enable code hot reloading for all the Python services by running: Every folder suffixed by `-service` is a standalone Python program that runs in its own container. You must always start the services by interacting with the `docker-compose.yaml` at the root of the repository. You can enable code hot reloading for all the Python services by running:
```bash ```bash
docker compose up --build --watch docker compose up -d --build --watch
``` ```
Whenever you change some Python code in the repository the associated container will be updated and automatically restarted. Whenever you change some Python code in the repository the associated container will be updated and automatically restarted.
@ -318,7 +315,6 @@ The [Writing Your First Service](https://github.com/NVIDIA/spark-reachy-photo-bo
|---------|-------|-----| |---------|-------|-----|
| No audio from robot (low volume) | Reachy speaker volume set too low by default | Increase Reachy speaker volume to maximum | | No audio from robot (low volume) | Reachy speaker volume set too low by default | Increase Reachy speaker volume to maximum |
| No audio from robot (device conflict) | Another application capturing Reachy speaker | Check `animation-compositor` logs for "Error querying device (-1)", verify Reachy speaker is not set as system default in Ubuntu sound settings, ensure no other apps are capturing the speaker, then restart the demo | | No audio from robot (device conflict) | Another application capturing Reachy speaker | Check `animation-compositor` logs for "Error querying device (-1)", verify Reachy speaker is not set as system default in Ubuntu sound settings, ensure no other apps are capturing the speaker, then restart the demo |
| Image-generation fails on first start | Transient initialization issue | Rerun `docker compose up --build -d` to resolve the issue |
If you have any issues with Reachy that are not covered by this guide, please read [Hugging Face's official troubleshooting guide](https://huggingface.co/docs/reachy_mini/troubleshooting). If you have any issues with Reachy that are not covered by this guide, please read [Hugging Face's official troubleshooting guide](https://huggingface.co/docs/reachy_mini/troubleshooting).

View File

@ -442,7 +442,7 @@ Replace the IP addresses with your actual node IPs.
On **each node** (primary and worker), run the following command to start the TRT-LLM container: On **each node** (primary and worker), run the following command to start the TRT-LLM container:
```bash ```bash
docker run -d --rm \ docker run -d --rm \
--name trtllm-multinode \ --name trtllm-multinode \
--gpus '"device=all"' \ --gpus '"device=all"' \
--network host \ --network host \
@ -456,11 +456,9 @@ On **each node** (primary and worker), run the following command to start the TR
-e OMPI_MCA_rmaps_ppr_n_pernode="1" \ -e OMPI_MCA_rmaps_ppr_n_pernode="1" \
-e OMPI_ALLOW_RUN_AS_ROOT="1" \ -e OMPI_ALLOW_RUN_AS_ROOT="1" \
-e OMPI_ALLOW_RUN_AS_ROOT_CONFIRM="1" \ -e OMPI_ALLOW_RUN_AS_ROOT_CONFIRM="1" \
-e CPATH=/usr/local/cuda/include \
-e TRITON_PTXAS_PATH=/usr/local/cuda/bin/ptxas \
-v ~/.cache/huggingface/:/root/.cache/huggingface/ \ -v ~/.cache/huggingface/:/root/.cache/huggingface/ \
-v ~/.ssh:/tmp/.ssh:ro \ -v ~/.ssh:/tmp/.ssh:ro \
nvcr.io/nvidia/tensorrt-llm/release:1.3.0rc5 \ nvcr.io/nvidia/tensorrt-llm/release:1.2.0rc6 \
sh -c "curl https://raw.githubusercontent.com/NVIDIA/dgx-spark-playbooks/refs/heads/main/nvidia/trt-llm/assets/trtllm-mn-entrypoint.sh | sh" sh -c "curl https://raw.githubusercontent.com/NVIDIA/dgx-spark-playbooks/refs/heads/main/nvidia/trt-llm/assets/trtllm-mn-entrypoint.sh | sh"
``` ```
@ -479,7 +477,7 @@ You should see output similar to:
``` ```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abc123def456 nvcr.io/nvidia/tensorrt-llm/release:1.3.0rc5 "sh -c 'curl https:…" 10 seconds ago Up 8 seconds trtllm-multinode abc123def456 nvcr.io/nvidia/tensorrt-llm/release:1.2.0rc6 "sh -c 'curl https:…" 10 seconds ago Up 8 seconds trtllm-multinode
``` ```
### Step 6. Copy hostfile to primary container ### Step 6. Copy hostfile to primary container

View File

@ -27,8 +27,8 @@ services:
# Ollama configuration # Ollama configuration
- OLLAMA_BASE_URL=http://ollama:11434/v1 - OLLAMA_BASE_URL=http://ollama:11434/v1
- OLLAMA_MODEL=llama3.1:8b - OLLAMA_MODEL=llama3.1:8b
# vLLM disabled in default Ollama mode # Disable vLLM
# - VLLM_BASE_URL=http://localhost:8001/v1 - VLLM_BASE_URL=http://localhost:8001/v1
- VLLM_MODEL=disabled - VLLM_MODEL=disabled
# Vector DB configuration # Vector DB configuration
- QDRANT_URL=http://qdrant:6333 - QDRANT_URL=http://qdrant:6333

View File

@ -108,7 +108,7 @@ export class TextProcessor {
// Determine which LLM provider to use based on configuration // Determine which LLM provider to use based on configuration
// Priority: vLLM > NVIDIA > Ollama // Priority: vLLM > NVIDIA > Ollama
if (process.env.VLLM_BASE_URL && process.env.VLLM_MODEL && process.env.VLLM_MODEL !== 'disabled') { if (process.env.VLLM_BASE_URL) {
this.selectedLLMProvider = 'vllm'; this.selectedLLMProvider = 'vllm';
} else if (process.env.NVIDIA_API_KEY) { } else if (process.env.NVIDIA_API_KEY) {
this.selectedLLMProvider = 'nvidia'; this.selectedLLMProvider = 'nvidia';

View File

@ -54,11 +54,6 @@ The following models are supported with vLLM on Spark. All listed models are ava
| Model | Quantization | Support Status | HF Handle | | Model | Quantization | Support Status | HF Handle |
|-------|-------------|----------------|-----------| |-------|-------------|----------------|-----------|
| **Gemma 4 31B IT** | Base | ✅ | [`google/gemma-4-31B-it`](https://huggingface.co/google/gemma-4-31B-it) |
| **Gemma 4 31B IT** | NVFP4 | ✅ | [`nvidia/Gemma-4-31B-IT-NVFP4`](https://huggingface.co/nvidia/Gemma-4-31B-IT-NVFP4) |
| **Gemma 4 26B A4B IT** | Base | ✅ | [`google/gemma-4-26B-A4B-it`](https://huggingface.co/google/gemma-4-26B-A4B-it) |
| **Gemma 4 E4B IT** | Base | ✅ | [`google/gemma-4-E4B-it`](https://huggingface.co/google/gemma-4-E4B-it) |
| **Gemma 4 E2B IT** | Base | ✅ | [`google/gemma-4-E2B-it`](https://huggingface.co/google/gemma-4-E2B-it) |
| **Nemotron-3-Super-120B** | NVFP4 | ✅ | [`nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4`](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4) | | **Nemotron-3-Super-120B** | NVFP4 | ✅ | [`nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4`](https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4) |
| **GPT-OSS-20B** | MXFP4 | ✅ | [`openai/gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) | | **GPT-OSS-20B** | MXFP4 | ✅ | [`openai/gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) |
| **GPT-OSS-120B** | MXFP4 | ✅ | [`openai/gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) | | **GPT-OSS-120B** | MXFP4 | ✅ | [`openai/gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) |
@ -94,8 +89,9 @@ Reminder: not all model architectures are supported for NVFP4 quantization.
* **Duration:** 30 minutes for Docker approach * **Duration:** 30 minutes for Docker approach
* **Risks:** Container registry access requires internal credentials * **Risks:** Container registry access requires internal credentials
* **Rollback:** Container approach is non-destructive. * **Rollback:** Container approach is non-destructive.
* **Last Updated:** 04/02/2026 * **Last Updated:** 03/12/2026
* Add support for Gemma 4 model family * Added support for Nemotron-3-Super-120B model
* Updated container to Feb 2026 release (26.02-py3)
## Instructions ## Instructions
@ -121,21 +117,13 @@ Find the latest container build from https://catalog.ngc.nvidia.com/orgs/nvidia/
```bash ```bash
export LATEST_VLLM_VERSION=<latest_container_version> export LATEST_VLLM_VERSION=<latest_container_version>
## example ## example
## export LATEST_VLLM_VERSION=26.02-py3 ## export LATEST_VLLM_VERSION=26.02-py3
export HF_MODEL_HANDLE=<HF_HANDLE>
## example
## export HF_MODEL_HANDLE=openai/gpt-oss-20b
docker pull nvcr.io/nvidia/vllm:${LATEST_VLLM_VERSION} docker pull nvcr.io/nvidia/vllm:${LATEST_VLLM_VERSION}
``` ```
For Gemma 4 model family, use vLLM custom containers:
```bash
docker pull vllm/vllm-openai:gemma4-cu130
```
## Step 3. Test vLLM in container ## Step 3. Test vLLM in container
Launch the container and start vLLM server with a test model to verify basic functionality. Launch the container and start vLLM server with a test model to verify basic functionality.
@ -143,13 +131,7 @@ Launch the container and start vLLM server with a test model to verify basic fun
```bash ```bash
docker run -it --gpus all -p 8000:8000 \ docker run -it --gpus all -p 8000:8000 \
nvcr.io/nvidia/vllm:${LATEST_VLLM_VERSION} \ nvcr.io/nvidia/vllm:${LATEST_VLLM_VERSION} \
vllm serve ${HF_MODEL_HANDLE} vllm serve "Qwen/Qwen2.5-Math-1.5B-Instruct"
```
To run models from Gemma 4 model family, (e.g. `google/gemma-4-31B-it`):
```bash
docker run -it --gpus all -p 8000:8000 \
vllm/vllm-openai:gemma4-cu130 ${HF_MODEL_HANDLE}
``` ```
Expected output should include: Expected output should include:
@ -163,7 +145,7 @@ In another terminal, test the server:
curl http://localhost:8000/v1/chat/completions \ curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \ -H "Content-Type: application/json" \
-d '{ -d '{
"model": "'"${HF_MODEL_HANDLE}"'", "model": "Qwen/Qwen2.5-Math-1.5B-Instruct",
"messages": [{"role": "user", "content": "12*17"}], "messages": [{"role": "user", "content": "12*17"}],
"max_tokens": 500 "max_tokens": 500
}' }'