diff --git a/nvidia/cuda-x-data-science/README.md b/nvidia/cuda-x-data-science/README.md
index 62338be..8d521e1 100644
--- a/nvidia/cuda-x-data-science/README.md
+++ b/nvidia/cuda-x-data-science/README.md
@@ -74,7 +74,7 @@ The other goes over an example of machine learning algorithms including UMAP and
 ```
 If you are remotely accessing your DGX-Spark then make sure to forward the necesary port to access the notebook in your local browser. Use the below instruction for port fowarding
 ```bash
- ssh -N -L YYYY:localhost:XXXX username@remote_host
+ ssh -N -L YYYY:localhost:XXXX username@remote_host
 ```
 - `YYYY`: The local port you want to use (e.g. 8888)
 - `XXXX`: The port you specified when starting Jupyter Notebook on the remote machine (e.g. 8888)
diff --git a/nvidia/flux-finetuning/README.md b/nvidia/flux-finetuning/README.md
index 9ec35fb..28350c5 100644
--- a/nvidia/flux-finetuning/README.md
+++ b/nvidia/flux-finetuning/README.md
@@ -46,7 +46,7 @@ The setup includes:

 * **Risks**:
   * Docker permission issues may require user group changes and session restart
   * The recipe would require hyperparameter tuning and a high-quality dataset for the best results
-**Rollback**: Stop and remove Docker containers, delete downloaded models if needed.
+* **Rollback**: Stop and remove Docker containers, delete downloaded models if needed.

 ## Instructions
diff --git a/nvidia/jax/README.md b/nvidia/jax/README.md
index 90e38b0..62cef41 100644
--- a/nvidia/jax/README.md
+++ b/nvidia/jax/README.md
@@ -64,7 +64,7 @@ All required assets can be found [here on GitHub](https://github.com/NVIDIA/dgx-

 * **Risks:**
   * Package dependency conflicts in Python environment
   * Performance validation may require architecture-specific optimizations
-**Rollback:** Container environments provide isolation; remove containers and restart to reset state.
+* **Rollback:** Container environments provide isolation; remove containers and restart to reset state.

 ## Instructions
diff --git a/nvidia/ollama/README.md b/nvidia/ollama/README.md
index 917487c..dfd2939 100644
--- a/nvidia/ollama/README.md
+++ b/nvidia/ollama/README.md
@@ -113,10 +113,10 @@ Ollama server running on port 11434. This configuration runs on your local machi

 1. Click the "Add New" button
 2. Fill out the form with these values:
-   - **Name**: `Ollama Server`
-   - **Port**: `11434`
-   - **Auto open in browser**: Leave unchecked (this is an API, not a web interface)
-   - **Start Script**: Leave empty
+   - **Name**: `Ollama Server`
+   - **Port**: `11434`
+   - **Auto open in browser**: Leave unchecked (this is an API, not a web interface)
+   - **Start Script**: Leave empty
 3. Click "Add"

 The new Ollama Server entry should now appear in your NVIDIA Sync custom apps list.
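
The `ssh -N -L` invocation touched by the first hunk can be sketched concretely as follows. This is a minimal illustration, not part of the patch: `user@dgx-spark` is a hypothetical host, and both ports assume Jupyter's default of 8888, matching the README's own examples.

```shell
# Placeholder values; substitute your own host and ports.
LOCAL_PORT=8888            # YYYY: the port opened on your local machine
REMOTE_PORT=8888           # XXXX: the port Jupyter listens on remotely
REMOTE_HOST=user@dgx-spark # hypothetical username@remote_host

# -N: run no remote command (tunnel only)
# -L: forward LOCAL_PORT on this machine to localhost:REMOTE_PORT on the remote side
# Echoed here instead of executed, since the host is a placeholder.
echo "ssh -N -L ${LOCAL_PORT}:localhost:${REMOTE_PORT} ${REMOTE_HOST}"
```

Once the tunnel is up, the remote notebook is reachable locally at `http://localhost:8888`.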