From 0d492abd3f55576b000494ba79c6f31d6a2dd36b Mon Sep 17 00:00:00 2001
From: GitLab CI
Date: Sat, 4 Oct 2025 20:32:16 +0000
Subject: [PATCH] chore: Regenerate all playbooks

---
 nvidia/comfy-ui/README.md | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/nvidia/comfy-ui/README.md b/nvidia/comfy-ui/README.md
index 70e5362..31d885a 100644
--- a/nvidia/comfy-ui/README.md
+++ b/nvidia/comfy-ui/README.md
@@ -13,13 +13,14 @@
 ## Basic idea
 
-ComfyUI is an open-source web server application for AI image generation using diffusion based models like SDXL, Flux and others.
+ComfyUI is an open-source web server application for AI image generation using diffusion-based models like SDXL, Flux and others.
 It has a browser-based UI that lets you create, edit and run image generation and editing workflows with multiple steps.
 Generation and editing steps (e.g. loading a model, adding text or sampling) are configurable in the UI as a node, and you connect nodes with wires to form a workflow.
-Workflows are saved as JSON files, so you can version them for future work, collaboration and reproducibility. ComfyUI uses the host's GPU for inference, so you can install it on your Spark and do all of your image generation and editing directly on device.
+Workflows are saved as JSON files, so you can version them for future work, collaboration and reproducibility.
+
 ## What you'll accomplish
 
 You'll install and configure ComfyUI on your NVIDIA DGX Spark device so you can use the unified memory to work with large models.
@@ -35,17 +36,17 @@ You'll install and configure ComfyUI on your NVIDIA DGX Spark device so you can
 ## Prerequisites
 
 **Hardware Requirements:**
-- [ ] NVIDIA Spark device with Blackwell architecture
-- [ ] Minimum 8GB GPU memory for Stable Diffusion models
-- [ ] At least 20GB available storage space
+- NVIDIA Spark device with Blackwell architecture
+- Minimum 8GB GPU memory for Stable Diffusion models
+- At least 20GB available storage space
 
 **Software Requirements:**
-- [ ] Python 3.8 or higher installed: `python3 --version`
-- [ ] pip package manager available: `pip3 --version`
-- [ ] CUDA toolkit compatible with Blackwell: `nvcc --version`
-- [ ] Git version control: `git --version`
-- [ ] Network access to download models from Hugging Face
-- [ ] Web browser access to `:8188` port
+- Python 3.8 or higher installed: `python3 --version`
+- pip package manager available: `pip3 --version`
+- CUDA toolkit compatible with Blackwell: `nvcc --version`
+- Git version control: `git --version`
+- Network access to download models from Hugging Face
+- Web browser access to `:8188` port
 
 ## Ancillary files
@@ -80,7 +81,7 @@
 nvcc --version
 nvidia-smi
 ```
 
-Expected output should show Python 3.8+, pip available, CUDA toolkit, and GPU detection.
+Expected output should show Python 3.8+, pip available, CUDA toolkit and GPU detection.
 
 ## Step 2. Create Python virtual environment
@@ -165,6 +166,7 @@ Open a web browser and navigate to `http://:8188` where `` i
 | Web interface inaccessible | Firewall blocking port 8188 | Configure firewall to allow port 8188, check IP address |
 | Out of GPU memory errors | Insufficient VRAM for model | Use smaller models or enable CPU fallback mode |
 
+
 ## Step 10. Optional - Cleanup and rollback
 
 If you need to remove the installation completely, follow these steps: