diff --git a/nvidia/dgx-dashboard/README.md b/nvidia/dgx-dashboard/README.md
index 60971ad..f45db66 100644
--- a/nvidia/dgx-dashboard/README.md
+++ b/nvidia/dgx-dashboard/README.md
@@ -126,6 +126,11 @@ Verify your setup by running a simple Stable Diffusion XL image generation examp
 3. Add a new cell and paste the following code:
 
 ```python
+import warnings
+warnings.filterwarnings('ignore', message='.*cuda capability.*')
+import tqdm.auto
+tqdm.auto.tqdm = tqdm.std.tqdm
+
 from diffusers import DiffusionPipeline
 import torch
 from PIL import Image
diff --git a/nvidia/trt-llm/README.md b/nvidia/trt-llm/README.md
index 806fc5b..8b4a422 100644
--- a/nvidia/trt-llm/README.md
+++ b/nvidia/trt-llm/README.md
@@ -414,7 +414,7 @@ docker rmi nvcr.io/nvidia/tensorrt-llm/release:spark-single-gpu-dev
 
 ### Step 1. Configure network connectivity
 
-Follow the network setup instructions from the [Connect two Sparks](https://build.nvidia.com/spark/stack-sparks/stacked-sparks) playbook to establish connectivity between your DGX Spark nodes.
+Follow the network setup instructions from the [Connect two Sparks](https://build.nvidia.com/spark/connect-two-sparks/stacked-sparks) playbook to establish connectivity between your DGX Spark nodes.
 
 This includes:
 - Physical QSFP cable connection