Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git
Synced 2026-04-23 10:33:51 +00:00

chore: Regenerate all playbooks

This commit is contained in:
parent 3c3578c620
commit 8ca84d63e9
@@ -134,10 +134,10 @@ To connect a workstation running VSCode to a remote DGX Spark instance the follo
 - Select `Autodetect` as the Model.
 
 Continue **will** fail to detect the model as it is attempting to connect to a locally hosted Ollama server.
-- Find the `**gear**` icon in the upper right corner of the Continue window and click on it.
+- Find the `gear` icon in the upper right corner of the Continue window and click on it.
 - On the left pane, click **Models**
 - Next to the first dropdown menu under **Chat** click the gear icon.
-- Continue's `**config.yaml**` will open. Take note of your DGX Spark's IP address.
+- Continue's `config.yaml` will open. Take note of your DGX Spark's IP address.
 - Replace the configuration with the following. **YOUR_SPARK_IP** should be replaced with your DGX Spark's IP.
 
 
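The diff above ends at the step that replaces Continue's `config.yaml`, but the YAML itself is not visible in this view. A minimal sketch of what such a configuration could look like, assuming Continue's YAML schema (`name`, `provider`, `model`, `apiBase`) and using the `gpt-oss:20b` model mentioned in the playbook's troubleshooting table; the `apiBase` placeholder is illustrative, not the playbook's exact text:

```yaml
# Sketch of a Continue config.yaml entry for a remotely hosted Ollama model.
# YOUR_SPARK_IP is a placeholder for the DGX Spark's IP address.
models:
  - name: gpt-oss:20b
    provider: ollama
    model: gpt-oss:20b
    apiBase: http://YOUR_SPARK_IP:11434
```

Additional entries under `models:` can be added the same way for any other Ollama models hosted on the Spark, as the playbook notes below the hunk.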
@@ -170,7 +170,7 @@ Add additional model entries for any other Ollama models you wish to host remote
 | Symptom | Cause | Fix |
 |---------|-------|-----|
 |Ollama not starting|GPU drivers may not be installed correctly|Run `nvidia-smi` in the terminal. If the command fails check DGX Dashboard for updates to your DGX Spark.|
 |Continue can't connect over the network|Port 11434 may not be open or accessible|Run command `ss -tuln \| grep 11434`. If the output does not reflect " tcp LISTEN 0 4096 *:11434 *:* ", go back to step 2 and run the ufw command.|
 |Continue can't detect a locally running Ollama model|Configuration not properly set or detected|Check `OLLAMA_HOST` and `OLLAMA_ORIGINS` in `/etc/systemd/system/ollama.service.d/override.conf` file. If `OLLAMA_HOST` and `OLLAMA_ORIGINS` are set correctly, add these lines to your `~/.bashrc` file.|
 |High memory usage|Model size too big|Confirm no other large models or containers are running with `nvidia-smi`. Use smaller models such as `gpt-oss:20b` for lightweight usage.|
 
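The troubleshooting table references `OLLAMA_HOST` and `OLLAMA_ORIGINS` in `/etc/systemd/system/ollama.service.d/override.conf` without showing the file. A sketch of what that drop-in typically contains when exposing Ollama on the network; the values `0.0.0.0` and `*` are the common settings for remote access, not the playbook's verbatim contents:

```ini
# /etc/systemd/system/ollama.service.d/override.conf (sketch; assumed values)
[Service]
# Listen on all interfaces instead of localhost only
Environment="OLLAMA_HOST=0.0.0.0"
# Allow cross-origin requests from remote clients such as Continue
Environment="OLLAMA_ORIGINS=*"
```

After editing a systemd drop-in, run `sudo systemctl daemon-reload` and restart the `ollama` service, then confirm the listener with `ss -tuln | grep 11434` as the table describes.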