From 332aaf1202e1caa499fe4ae138692fefebac6c51 Mon Sep 17 00:00:00 2001
From: GitLab CI
Date: Wed, 8 Oct 2025 14:03:31 +0000
Subject: [PATCH] chore: Regenerate all playbooks

---
 nvidia/nim-llm/README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/nvidia/nim-llm/README.md b/nvidia/nim-llm/README.md
index b482c8b..4e26de8 100644
--- a/nvidia/nim-llm/README.md
+++ b/nvidia/nim-llm/README.md
@@ -5,7 +5,7 @@
 ## Table of Contents
 
 - [Overview](#overview)
-  - [Basic Idea](#basic-idea)
+  - [Basic idea](#basic-idea)
   - [What you'll accomplish](#what-youll-accomplish)
   - [What to know before starting](#what-to-know-before-starting)
 - [Prerequisites](#prerequisites)
@@ -17,7 +17,7 @@
 ## Overview
 
-### Basic Idea
+### Basic idea
 
 NVIDIA Inference Microservices (NIMs) provide optimized containers for deploying
 large language models with simplified APIs. This playbook demonstrates how to run
 LLM NIMs on DGX Spark devices,
@@ -44,11 +44,11 @@ completions.
   ```bash
   nvidia-smi
   ```
-- Docker with NVIDIA Container Toolkit configured, instructions here: https://******.nvidia.com/dgx-docs/review/621/dgx-spark/latest/nvidia-container-runtime-for-docker.html
+- Docker with NVIDIA Container Toolkit configured, instructions [here](https://******.nvidia.com/dgx-docs/review/621/dgx-spark/latest/nvidia-container-runtime-for-docker.html)
   ```bash
   docker run -it --gpus=all nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 nvidia-smi
   ```
-- NGC account with API key from https://ngc.nvidia.com/setup/api-key
+- NGC account with API key from [here](https://ngc.nvidia.com/setup/api-key)
   ```bash
   echo $NGC_API_KEY | grep -E '^[a-zA-Z0-9]{86}=='
   ```