Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git, synced 2026-04-23 18:33:54 +00:00

chore: Regenerate all playbooks

parent 3c1b873c69
commit 2176f83be0
@@ -26,12 +26,12 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [DGX Dashboard](nvidia/dgx-dashboard/)
 - [FLUX.1 Dreambooth LoRA Fine-tuning](nvidia/flux-finetuning/)
 - [Optimized JAX](nvidia/jax/)
-- [Llama Factory](nvidia/llama-factory/)
+- [LLaMA Factory](nvidia/llama-factory/)
 - [MONAI-Reasoning-CXR-3B Model](nvidia/monai-reasoning/)
 - [Build and Deploy a Multi-Agent Chatbot](nvidia/multi-agent-chatbot/)
 - [Multi-modal Inference](nvidia/multi-modal-inference/)
 - [NCCL for Two Sparks](nvidia/nccl/)
-- [Fine tune with Nemo](nvidia/nemo-fine-tune/)
+- [Fine-tune with NeMo](nvidia/nemo-fine-tune/)
 - [Use a NIM on Spark](nvidia/nim-llm/)
 - [Quantize to NVFP4](nvidia/nvfp4-quantization/)
 - [Ollama](nvidia/ollama/)
@@ -43,7 +43,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [SGLang Inference Server](nvidia/sglang/)
 - [Speculative Decoding](nvidia/speculative-decoding/)
 - [Stack two Sparks](nvidia/stack-sparks/)
-- [Setup Tailscale on your Spark](nvidia/tailscale/)
+- [Set up Tailscale on your Spark](nvidia/tailscale/)
 - [TRT LLM for Inference](nvidia/trt-llm/)
 - [Text to Knowledge Graph](nvidia/txt2kg/)
 - [Unsloth on DGX Spark](nvidia/unsloth/)
@@ -1,6 +1,6 @@
-# Llama Factory
+# LLaMA Factory
 
-> Install and fine-tune models with LLama Factory
+> Install and fine-tune models with LLaMA Factory
 
 ## Table of Contents
 
@@ -1,4 +1,4 @@
-# Fine tune with Nemo
+# Fine-tune with NeMo
 
 > Use NVIDIA NeMo to fine-tune models locally
 
|||||||
@ -1,4 +1,4 @@
|
|||||||
# Setup Tailscale on your Spark
|
# Set up Tailscale on your Spark
|
||||||
|
|
||||||
> Use Tailscale to connect to your Spark on your home network no matter where you are
|
> Use Tailscale to connect to your Spark on your home network no matter where you are
|
||||||
|
|
||||||
|
|||||||