From 1649b10d924501be2b2d92840e39f090a4b15bd3 Mon Sep 17 00:00:00 2001
From: GitLab CI
Date: Wed, 8 Oct 2025 20:57:55 +0000
Subject: [PATCH] chore: Regenerate all playbooks

---
 nvidia/monai-reasoning/README.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/nvidia/monai-reasoning/README.md b/nvidia/monai-reasoning/README.md
index 83b01e3..1abbb85 100644
--- a/nvidia/monai-reasoning/README.md
+++ b/nvidia/monai-reasoning/README.md
@@ -78,15 +78,14 @@ uname -m
 
 * **Estimated time:** 20-35 minutes (not including model download)
 * **Risk level:** Low. All steps use publicly available containers and models
-* **Rollback:** The entire deployment is containerized. To roll back, you can simply stop
-and remove the Docker containers
-
-## Instructions
-
-> **Note:** DGX Spark uses a Unified Memory Architecture (UMA), which enables dynamic memory sharing between the GPU and CPU. With many applications still updating to take advantage of UMA, you may encounter memory issues even when within the memory capacity of DGX Spark. If that happens, manually flush the buffer cache with:
+* **Rollback:** The entire deployment is containerized. To roll back, you can simply stop and remove the Docker containers
+* DGX Spark uses a Unified Memory Architecture (UMA), which enables dynamic memory sharing between the GPU and CPU. With many applications still updating to take advantage of UMA, you may encounter memory issues even when within the memory capacity of DGX Spark. If that happens, manually flush the buffer cache with:
 ```bash
 sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
 ```
+
+## Instructions
+
 ## Step 1. Create the Project Directory
 
 First, create a dedicated directory to store your model weights and configuration files. This