Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git (synced 2026-04-22 18:13:52 +00:00)
| name | description |
|---|---|
| dgx-spark-speculative-decoding | Learn how to set up speculative decoding for fast inference on NVIDIA DGX Spark. Use when setting up speculative decoding on Spark hardware. |
# Speculative Decoding

Learn how to set up speculative decoding for fast inference on NVIDIA DGX Spark.
Speculative decoding speeds up text generation by using a small, fast model to draft several tokens ahead, then having the larger model verify or correct them in a single pass. Because the large model no longer has to predict every token step by step, latency drops while output quality is preserved.
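The draft-then-verify loop described above can be sketched in plain Python. This is a toy illustration, not TensorRT-LLM code: both "models" are stand-in functions over integer token ids, chosen only to show how drafted tokens are accepted while the target agrees and replaced at the first disagreement.

```python
# Toy sketch of the draft-then-verify loop behind speculative decoding.
# The "models" are hypothetical stand-ins over integer token ids, not real LLMs.

def draft_model(prefix, k):
    """Fast drafter: greedily proposes k tokens (here: successor mod 10)."""
    out, last = [], prefix[-1]
    for _ in range(k):
        last = (last + 1) % 10
        out.append(last)
    return out

def target_model(prefix):
    """Slow target: returns its single next token for a prefix.
    It mostly agrees with the drafter but diverges after token 7."""
    last = prefix[-1]
    if last == 7:
        return 0  # deliberate disagreement point for the demo
    return (last + 1) % 10

def speculative_step(prefix, k=4):
    """One decoding step: draft k tokens, then verify them with the target.

    Drafted tokens are accepted while the target agrees; at the first
    mismatch the target's own token is taken instead, so every step
    emits at least one token the target would have produced anyway."""
    draft = draft_model(prefix, k)
    accepted, ctx = [], list(prefix)
    for tok in draft:
        t = target_model(ctx)
        if t == tok:
            accepted.append(tok)   # drafter was right: accept for free
            ctx.append(tok)
        else:
            accepted.append(t)     # target overrides; stop speculating
            return accepted
    return accepted

print(speculative_step([1]))  # target agrees with all drafts -> [2, 3, 4, 5]
print(speculative_step([6]))  # target diverges after 7       -> [7, 0]
```

When the drafter agrees with the target most of the time, each verification pass yields several tokens instead of one, which is where the speedup comes from; real systems also handle sampling (not just greedy agreement) and batch the verification into a single forward pass of the target model.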
Outcome: You'll explore speculative decoding using TensorRT-LLM on NVIDIA DGX Spark via two approaches: EAGLE-3 and Draft-Target. These examples demonstrate how to accelerate large language model inference while maintaining output quality.
Full playbook: nvidia/speculative-decoding/README.md