Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git, synced 2026-04-22 01:53:53 +00:00
chore: Regenerate all playbooks
parent e58d7eeb90
commit 7bc85ebcc9
@@ -22,7 +22,7 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 ### NVIDIA
 - [Comfy UI](nvidia/comfy-ui/)
-- [Connect to Your Spark from Another Computer](nvidia/connect-to-your-spark/)
+- [Set Up Local Network Access](nvidia/connect-to-your-spark/)
 - [DGX Dashboard](nvidia/dgx-dashboard/)
 - [FLUX.1 Dreambooth LoRA Fine-tuning](nvidia/flux-finetuning/)
 - [Optimized JAX](nvidia/jax/)
@@ -33,9 +33,9 @@ Each playbook includes prerequisites, step-by-step instructions, troubleshooting
 - [NCCL for Two Sparks](nvidia/nccl/)
 - [Fine-tune with NeMo](nvidia/nemo-fine-tune/)
 - [Use a NIM on Spark](nvidia/nim-llm/)
-- [Quantize to NVFP4](nvidia/nvfp4-quantization/)
+- [NVFP4 Quantization](nvidia/nvfp4-quantization/)
 - [Ollama](nvidia/ollama/)
-- [Use Open WebUI with Ollama](nvidia/open-webui/)
+- [Open WebUI with Ollama](nvidia/open-webui/)
 - [Use Open Fold](nvidia/protein-folding/)
 - [Fine tune with Pytorch](nvidia/pytorch-fine-tune/)
 - [RAG application in AI Workbench](nvidia/rag-ai-workbench/)
@@ -1,6 +1,6 @@
-# Connect to Your Spark from Another Computer
+# Set Up Local Network Access
 
-> Use NVIDIA Sync or manual SSH to connect to your Spark
+> NVIDIA Sync helps set up and configure SSH access
 
 ## Table of Contents
@@ -1,6 +1,6 @@
 # DGX Dashboard
 
-> Manage your DGX system and launch JupyterLab
+> Monitor your DGX system and launch JupyterLab
 
 ## Table of Contents
@@ -1,6 +1,6 @@
 # FLUX.1 Dreambooth LoRA Fine-tuning
 
-> Fine-tune FLUX.1-dev 12B model using multi-concept Dreambooth LoRA for custom image generation
+> Fine-tune FLUX.1-dev 12B model using Dreambooth LoRA for custom image generation
 
 ## Table of Contents
@@ -1,6 +1,6 @@
 # Optimized JAX
 
-> Develop with Optimized JAX
+> Optimize JAX to Run on Spark
 
 ## Table of Contents
@@ -1,4 +1,4 @@
-# Quantize to NVFP4
+# NVFP4 Quantization
 
 > Quantize a model to NVFP4 to run on Spark using TensorRT Model Optimizer
@@ -1,4 +1,4 @@
-# Use Open WebUI with Ollama
+# Open WebUI with Ollama
 
 > Install Open WebUI and use Ollama to chat with models on your Spark
@@ -1,6 +1,6 @@
 # Speculative Decoding
 
-> Learn how to setup speculative decoding for fast inference on Spark
+> Learn how to set up speculative decoding for fast inference on Spark
 
 ## Table of Contents
@@ -1,6 +1,6 @@
 # Install VS Code
 
-> Install and use VS Code locally or remotely on Spark
+> Install and use VS Code locally or remotely
 
 ## Table of Contents