NVIDIA DGX Spark

DGX Spark Playbooks

Collection of step-by-step playbooks for setting up AI/ML workloads on NVIDIA DGX Spark devices with Blackwell architecture.

About

These playbooks provide detailed instructions for:

  • Installing and configuring popular AI frameworks
  • Running inference with optimized models
  • Setting up development environments
  • Connecting and managing your DGX Spark device

Each playbook includes prerequisites, step-by-step instructions, troubleshooting guidance, and example code.
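As an illustration of the kind of configuration these playbooks cover, the snippet below sketches running an Ollama inference server in Docker with GPU access and increased request parallelism. This is a hypothetical sketch, not taken from a specific playbook: OLLAMA_NUM_PARALLEL is a real Ollama environment variable, but the image name, port, and values shown are illustrative defaults.

```shell
# Hypothetical sketch: run Ollama in a container with GPU access and
# several requests served in parallel. OLLAMA_NUM_PARALLEL controls how
# many requests each loaded model handles concurrently; raising it can
# improve throughput on devices with large unified memory.
docker run -d \
  --gpus all \
  -e OLLAMA_NUM_PARALLEL=4 \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

Note that higher parallelism increases memory pressure per loaded model, so the right value depends on model size and available memory.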

Available Playbooks

NVIDIA

Resources

License

See the LICENSE file in the repository root.