chore: Regenerate all playbooks

Author: GitLab CI
Date: 2025-10-08 16:21:21 +00:00
Parent: 3c1b873c69
Commit: 2176f83be0

4 changed files with 7 additions and 7 deletions

@@ -26,12 +26,12 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [DGX Dashboard](nvidia/dgx-dashboard/)
 - [FLUX.1 Dreambooth LoRA Fine-tuning](nvidia/flux-finetuning/)
 - [Optimized JAX](nvidia/jax/)
-- [Llama Factory](nvidia/llama-factory/)
+- [LLaMA Factory](nvidia/llama-factory/)
 - [MONAI-Reasoning-CXR-3B Model](nvidia/monai-reasoning/)
 - [Build and Deploy a Multi-Agent Chatbot](nvidia/multi-agent-chatbot/)
 - [Multi-modal Inference](nvidia/multi-modal-inference/)
 - [NCCL for Two Sparks](nvidia/nccl/)
-- [Fine tune with Nemo](nvidia/nemo-fine-tune/)
+- [Fine-tune with NeMo](nvidia/nemo-fine-tune/)
 - [Use a NIM on Spark](nvidia/nim-llm/)
 - [Quantize to NVFP4](nvidia/nvfp4-quantization/)
 - [Ollama](nvidia/ollama/)
@@ -43,7 +43,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [SGLang Inference Server](nvidia/sglang/)
 - [Speculative Decoding](nvidia/speculative-decoding/)
 - [Stack two Sparks](nvidia/stack-sparks/)
-- [Setup Tailscale on your Spark](nvidia/tailscale/)
+- [Set up Tailscale on your Spark](nvidia/tailscale/)
 - [TRT LLM for Inference](nvidia/trt-llm/)
 - [Text to Knowledge Graph](nvidia/txt2kg/)
 - [Unsloth on DGX Spark](nvidia/unsloth/)

@@ -1,6 +1,6 @@
-# Llama Factory
+# LLaMA Factory
-> Install and fine-tune models with LLama Factory
+> Install and fine-tune models with LLaMA Factory
 ## Table of Contents

@@ -1,4 +1,4 @@
-# Fine tune with Nemo
+# Fine-tune with NeMo
 > Use NVIDIA NeMo to fine-tune models locally

@@ -1,4 +1,4 @@
-# Setup Tailscale on your Spark
+# Set up Tailscale on your Spark
 > Use Tailscale to connect to your Spark on your home network no matter where you are