chore: Regenerate all playbooks

GitLab CI 2025-10-28 14:35:31 +00:00
parent bb62d46702
commit af0833dd53
5 changed files with 7 additions and 7 deletions


@@ -37,10 +37,10 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [NVFP4 Quantization](nvidia/nvfp4-quantization/)
 - [Ollama](nvidia/ollama/)
 - [Open WebUI with Ollama](nvidia/open-webui/)
-- [Fine tune with Pytorch](nvidia/pytorch-fine-tune/)
-- [RAG application in AI Workbench](nvidia/rag-ai-workbench/)
+- [Fine-tune with Pytorch](nvidia/pytorch-fine-tune/)
+- [RAG Application in AI Workbench](nvidia/rag-ai-workbench/)
 - [Speculative Decoding](nvidia/speculative-decoding/)
-- [Set up Tailscale on your Spark](nvidia/tailscale/)
+- [Set up Tailscale on Your Spark](nvidia/tailscale/)
 - [TRT LLM for Inference](nvidia/trt-llm/)
 - [Text to Knowledge Graph](nvidia/txt2kg/)
 - [Unsloth on DGX Spark](nvidia/unsloth/)


@@ -1,6 +1,6 @@
 # Optimized JAX
-> Optimize JAX to Run on Spark
+> Optimize JAX to run on Spark
 ## Table of Contents


@@ -1,4 +1,4 @@
-# Fine tune with Pytorch
+# Fine-tune with Pytorch
 > Use Pytorch to fine-tune models locally


@@ -1,4 +1,4 @@
-# RAG application in AI Workbench
+# RAG Application in AI Workbench
 > Install and use AI Workbench to clone and run a reproducible RAG application


@@ -1,4 +1,4 @@
-# Set up Tailscale on your Spark
+# Set up Tailscale on Your Spark
 > Use Tailscale to connect to your Spark on your home network no matter where you are