Compare commits: fd1510e368...367f892cf2 (1 commit)

Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git
````diff
@@ -39,7 +39,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [Connect Multiple DGX Spark through a Switch](nvidia/multi-sparks-through-switch/)
 - [NCCL for Two Sparks](nvidia/nccl/)
 - [Fine-tune with NeMo](nvidia/nemo-fine-tune/)
-- [NemoClaw with Nemotron 3 Super and Telegram on DGX Spark](nvidia/nemoclaw/)
+- [NemoClaw with Nemotron-3-Super and Telegram on DGX Spark](nvidia/nemoclaw/)
 - [Nemotron-3-Nano with llama.cpp](nvidia/nemotron/)
 - [NIM on Spark](nvidia/nim-llm/)
 - [NVFP4 Quantization](nvidia/nvfp4-quantization/)
````
````diff
@@ -1,4 +1,4 @@
-# NemoClaw with Nemotron 3 Super and Telegram on DGX Spark
+# NemoClaw with Nemotron-3-Super and Telegram on DGX Spark
 
 > Install NemoClaw on DGX Spark with local Ollama inference and Telegram bot integration
 
````
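The summary line above mentions local Ollama inference. As a sanity check before working through the playbook (not something this diff adds), you can confirm that an Ollama server is listening; the port below is Ollama's default (11434) and `/api/tags` is its standard model-listing endpoint:

```bash
# Quick check, assuming a stock Ollama install on its default port:
# a healthy server returns JSON listing the locally pulled models.
curl -s http://localhost:11434/api/tags
```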
````diff
@@ -372,15 +372,7 @@ Open Telegram, find [@BotFather](https://t.me/BotFather), send `/newbot`, and fo
 
-Make sure you are on the **host** (not inside the sandbox). If you are inside the sandbox, run `exit` first.
 
-Set the required environment variables. Replace the placeholders with your actual values. `SANDBOX_NAME` must match the sandbox name you chose during the onboard wizard:
 
-```bash
-export TELEGRAM_BOT_TOKEN=<your-bot-token>
-export SANDBOX_NAME=my-assistant
-export NVIDIA_API_KEY=<your-nvidia-api-key>
-```
-
-Add the Telegram network policy to the sandbox:
+Add the Telegram network policy to the sandbox so it can reach the Telegram API:
 
 ```bash
 nemoclaw my-assistant policy-add
````
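After `policy-add`, one way to confirm the policy took effect is a reachability probe against `api.telegram.org` from inside the sandbox. This is a hedged sketch, not part of the playbook: it assumes you can open a shell inside the sandbox (the exact command depends on the NemoClaw CLI) and that `curl` is available there:

```bash
# Hypothetical check from inside the sandbox: getting any HTTP status
# line back from api.telegram.org means the network policy is in effect.
curl -sI https://api.telegram.org | head -n 1
```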
````diff
@@ -388,7 +380,7 @@ nemoclaw my-assistant policy-add
 
 When prompted, select `telegram` and hit **Y** to confirm.
 
-Start the Telegram bridge.
+Set the bot token and start auxiliary services:
 
 ```bash
 export TELEGRAM_BOT_TOKEN=<your-bot-token>
````
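The diff moves the `TELEGRAM_BOT_TOKEN` export down to this step. If the services fail to start, a useful first check (not in the playbook) is to validate the token directly against the Telegram Bot API; `getMe` is a documented Bot API method, and a valid token returns `{"ok":true,...}` while an invalid one returns HTTP 401:

```bash
# Validate the bot token with Telegram's getMe method before starting
# the bridge; requires TELEGRAM_BOT_TOKEN to be exported as above.
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"
```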
````diff
@@ -685,7 +685,6 @@ docker rmi ghcr.io/open-webui/open-webui:main
 | "invalid mount config for type 'bind'" | Missing or non-executable entrypoint script | Run `docker inspect <container_id>` to see the full error message. Verify `trtllm-mn-entrypoint.sh` exists on both nodes in your home directory (`ls -la $HOME/trtllm-mn-entrypoint.sh`) and has executable permissions (`chmod +x $HOME/trtllm-mn-entrypoint.sh`) |
 | "task: non-zero exit (255)" | Container exited with error code 255 | Run `docker ps -a --filter "name=trtllm-multinode_trtllm"` to get the container ID, then `docker logs <container_id>` to see detailed error messages |
 | Docker state stuck in "Pending" with "no suitable node (insufficien...)" | Docker daemon not properly configured for GPU access | Verify steps 2-4 were completed successfully and check that `/etc/docker/daemon.json` contains the correct GPU configuration |
 | Serving model fails with `ptxas fatal` errors | Model needs runtime Triton kernel compilation | In Step 10, add `-x TRITON_PTXAS_PATH` to your `mpirun` command |
 
 > [!NOTE]
 > DGX Spark uses a Unified Memory Architecture (UMA), which enables dynamic memory sharing between the GPU and CPU.
````
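The first two table rows funnel into the same Docker triage sequence. Here is a minimal sketch stitching those commands together, using only the names that appear in the table (`trtllm-multinode_trtllm`, `trtllm-mn-entrypoint.sh`); everything else is standard Docker CLI:

```bash
# Locate the (possibly exited) container for the trtllm-multinode service.
CONTAINER_ID=$(docker ps -aq --filter "name=trtllm-multinode_trtllm" | head -n 1)

# "invalid mount config": inspect shows the full mount error.
docker inspect "$CONTAINER_ID"

# "non-zero exit (255)": the container logs carry the real error.
docker logs "$CONTAINER_ID"

# Verify the entrypoint script exists and is executable (repeat on both nodes).
ls -la "$HOME/trtllm-mn-entrypoint.sh" && chmod +x "$HOME/trtllm-mn-entrypoint.sh"
```

The last row's fix relies on `mpirun`'s standard `-x` flag, which exports the named environment variable (`TRITON_PTXAS_PATH` here) to the launched ranks.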