diff --git a/nvidia/vibe-coding/README.md b/nvidia/vibe-coding/README.md
index cb5eb5a..474f04d 100644
--- a/nvidia/vibe-coding/README.md
+++ b/nvidia/vibe-coding/README.md
@@ -134,10 +134,10 @@ To connect a workstation running VSCode to a remote DGX Spark instance the follo
 
   - Select `Autodetect` as the Model. Continue **will** fail to detect the model as it is attempting to connect to a locally hosted Ollama server.
 
-  - Find the `**gear**` icon in the upper right corner of the Continue window and click on it.
+  - Find the `gear` icon in the upper right corner of the Continue window and click on it.
   - On the left pane, click **Models**
   - Next to the first dropdown menu under **Chat** click the gear icon.
-  - Continue's `**config.yaml**` will open. Take note of your DGX Spark's IP address.
+  - Continue's `config.yaml` will open. Take note of your DGX Spark's IP address.
   - Replace the configuration with the following. **YOUR_SPARK_IP** should be replaced with your DGX Spark's IP.
 
@@ -170,7 +170,7 @@ Add additional model entries for any other Ollama models you wish to host remote
 
 | Symptom | Cause | Fix |
 |---------|-------|-----|
 |Ollama not starting|GPU drivers may not be installed correctly|Run `nvidia-smi` in the terminal. If the command fails check DGX Dashboard for updates to your DGX Spark.|
-|Continue can't connect over the network|Port 11434 may not be open or accessible|Run command `ss -tuln | grep 11434`. If the output does not reflect " tcp LISTEN 0 4096 *:11434 *:* ", go back to step 2 and run the ufw command.|
+|Continue can't connect over the network|Port 11434 may not be open or accessible|Run command `ss -tuln \| grep 11434`. If the output does not reflect " tcp LISTEN 0 4096 *:11434 *:* ", go back to step 2 and run the ufw command.|
 |Continue can't detect a locally running Ollama model|Configuration not properly set or detected|Check `OLLAMA_HOST` and `OLLAMA_ORIGINS` in `/etc/systemd/system/ollama.service.d/override.conf` file. If `OLLAMA_HOST` and `OLLAMA_ORIGINS` are set correctly, add these lines to your `~/.bashrc` file.|
 |High memory usage|Model size too big|Confirm no other large models or containers are running with `nvidia-smi`. Use smaller models such as `gpt-oss:20b` for lightweight usage.|
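Note for reviewers: the hunk above references a `config.yaml` replacement that falls outside the diff context. A sketch of what that Continue model entry typically looks like for a remotely hosted Ollama server is below — the model name `gpt-oss:20b` is taken from the troubleshooting table, and `YOUR_SPARK_IP` remains the placeholder the README uses; adjust both to your setup.

```yaml
# Continue config.yaml (sketch, not part of this diff)
models:
  - name: GPT-OSS 20B (DGX Spark)
    provider: ollama
    model: gpt-oss:20b
    # Point at the remote Ollama server instead of localhost
    apiBase: http://YOUR_SPARK_IP:11434
```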
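The troubleshooting row above checks `OLLAMA_HOST` and `OLLAMA_ORIGINS` in `/etc/systemd/system/ollama.service.d/override.conf`. For reference, a typical drop-in that exposes Ollama on the network looks like the following (a sketch based on Ollama's documented environment variables, not part of this diff); after editing it, run `sudo systemctl daemon-reload && sudo systemctl restart ollama`.

```ini
# /etc/systemd/system/ollama.service.d/override.conf
[Service]
# Listen on all interfaces (default is 127.0.0.1 only)
Environment="OLLAMA_HOST=0.0.0.0"
# Allow cross-origin requests from remote clients such as Continue
Environment="OLLAMA_ORIGINS=*"
```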