chore: Regenerate all playbooks

GitLab CI 2025-10-09 15:38:30 +00:00
parent e58d7eeb90
commit 7bc85ebcc9
9 changed files with 12 additions and 12 deletions

@@ -22,7 +22,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
### NVIDIA
- [Comfy UI](nvidia/comfy-ui/)
-- [Connect to Your Spark from Another Computer](nvidia/connect-to-your-spark/)
+- [Set Up Local Network Access](nvidia/connect-to-your-spark/)
- [DGX Dashboard](nvidia/dgx-dashboard/)
- [FLUX.1 Dreambooth LoRA Fine-tuning](nvidia/flux-finetuning/)
- [Optimized JAX](nvidia/jax/)
@@ -33,9 +33,9 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
- [NCCL for Two Sparks](nvidia/nccl/)
- [Fine-tune with NeMo](nvidia/nemo-fine-tune/)
- [Use a NIM on Spark](nvidia/nim-llm/)
-- [Quantize to NVFP4](nvidia/nvfp4-quantization/)
+- [NVFP4 Quantization](nvidia/nvfp4-quantization/)
- [Ollama](nvidia/ollama/)
-- [Use Open WebUI with Ollama](nvidia/open-webui/)
+- [Open WebUI with Ollama](nvidia/open-webui/)
- [Use Open Fold](nvidia/protein-folding/)
- [Fine tune with Pytorch](nvidia/pytorch-fine-tune/)
- [RAG application in AI Workbench](nvidia/rag-ai-workbench/)

@@ -1,6 +1,6 @@
-# Connect to Your Spark from Another Computer
+# Set Up Local Network Access
-> Use NVIDIA Sync or manual SSH to connect to your Spark
+> NVIDIA Sync helps set up and configure SSH access
## Table of Contents

@@ -1,6 +1,6 @@
# DGX Dashboard
-> Manage your DGX system and launch JupyterLab
+> Monitor your DGX system and launch JupyterLab
## Table of Contents

@@ -1,6 +1,6 @@
# FLUX.1 Dreambooth LoRA Fine-tuning
-> Fine-tune FLUX.1-dev 12B model using multi-concept Dreambooth LoRA for custom image generation
+> Fine-tune FLUX.1-dev 12B model using Dreambooth LoRA for custom image generation
## Table of Contents

@@ -1,6 +1,6 @@
# Optimized JAX
-> Develop with Optimized JAX
+> Optimize JAX to Run on Spark
## Table of Contents

@@ -1,4 +1,4 @@
-# Quantize to NVFP4
+# NVFP4 Quantization
> Quantize a model to NVFP4 to run on Spark using TensorRT Model Optimizer

@@ -1,4 +1,4 @@
-# Use Open WebUI with Ollama
+# Open WebUI with Ollama
> Install Open WebUI and use Ollama to chat with models on your Spark

@@ -1,6 +1,6 @@
# Speculative Decoding
-> Learn how to setup speculative decoding for fast inference on Spark
+> Learn how to set up speculative decoding for fast inference on Spark
## Table of Contents

@@ -1,6 +1,6 @@
# Install VS Code
-> Install and use VS Code locally or remotely on Spark
+> Install and use VS Code locally or remotely
## Table of Contents