chore: Regenerate all playbooks

GitLab CI 2025-10-08 14:03:31 +00:00
parent ab2ca0fcf1
commit 332aaf1202


@@ -5,7 +5,7 @@
 ## Table of Contents
 - [Overview](#overview)
-- [Basic Idea](#basic-idea)
+- [Basic idea](#basic-idea)
 - [What you'll accomplish](#what-youll-accomplish)
 - [What to know before starting](#what-to-know-before-starting)
 - [Prerequisites](#prerequisites)
@@ -17,7 +17,7 @@
 ## Overview
-### Basic Idea
+### Basic idea
 NVIDIA Inference Microservices (NIMs) provide optimized containers for deploying large language
 models with simplified APIs. This playbook demonstrates how to run LLM NIMs on DGX Spark devices,
@@ -44,11 +44,11 @@ completions.
 ```bash
 nvidia-smi
 ```
-- Docker with NVIDIA Container Toolkit configured, instructions here: https://******.nvidia.com/dgx-docs/review/621/dgx-spark/latest/nvidia-container-runtime-for-docker.html
+- Docker with NVIDIA Container Toolkit configured, instructions [here](https://******.nvidia.com/dgx-docs/review/621/dgx-spark/latest/nvidia-container-runtime-for-docker.html)
 ```bash
 docker run -it --gpus=all nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 nvidia-smi
 ```
-- NGC account with API key from https://ngc.nvidia.com/setup/api-key
+- NGC account with API key from [here](https://ngc.nvidia.com/setup/api-key)
 ```bash
 echo $NGC_API_KEY | grep -E '^[a-zA-Z0-9]{86}=='
 ```
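The key-format check in the prerequisites can be folded into a small guard that runs before logging in to the NGC registry. A minimal sketch — the `check_ngc_key` helper is hypothetical (not part of the playbook), and the login line is left commented; `$oauthtoken` is NGC's documented literal username for API-key logins:

```shell
#!/bin/sh
# Hypothetical helper: succeeds only if the argument looks like an NGC API key
# (assumption from the playbook's grep pattern: 86 base64 characters plus '==').
check_ngc_key() {
  echo "$1" | grep -qE '^[a-zA-Z0-9]{86}==$'
}

if check_ngc_key "$NGC_API_KEY"; then
  echo "NGC API key format OK"
  # Log in to nvcr.io; '$oauthtoken' is the literal username NGC expects:
  # echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
else
  echo "NGC_API_KEY is unset or malformed; see https://ngc.nvidia.com/setup/api-key" >&2
fi
```

Failing fast on a malformed key avoids a confusing authentication error later when pulling NIM containers from nvcr.io.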