From 7e6f9b332e529a918e0161d009503c248d9cb7e6 Mon Sep 17 00:00:00 2001
From: GitLab CI
Date: Tue, 7 Oct 2025 19:03:50 +0000
Subject: [PATCH] chore: Regenerate all playbooks

---
 nvidia/pytorch-fine-tune/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/nvidia/pytorch-fine-tune/README.md b/nvidia/pytorch-fine-tune/README.md
index e79d6fe..5ec66cf 100644
--- a/nvidia/pytorch-fine-tune/README.md
+++ b/nvidia/pytorch-fine-tune/README.md
@@ -30,7 +30,7 @@ Recipes are specifically for DIGITS SPARK. Please make sure that OS and drivers
 
 ## Ancillary files
 
-ALl files required for fine-tuning are included.
+All files required for fine-tuning are included in the model folder in [the GitLab repository here](https://gitlab.com/nvidia/dgx-spark/temp-external-playbook-assets/dgx-spark-playbook-assets/-/blob/main/${MODEL}).
 
 ## Time & risk
 
@@ -92,7 +92,7 @@ huggingface-cli login
 ##
 ```
 
-## Step6: Clone the git repo with fine-tuning recipes
+## Step 6: Clone the git repo with fine-tuning recipes
 
 ```bash
 git clone https://gitlab.com/nvidia/dgx-spark/temp-external-playbook-assets/dgx-spark-playbook-assets/-/blob/main/${MODEL}
@@ -106,7 +106,7 @@ To run LoRA on Llama3-8B use the following command:
 python Llama3_8B_LoRA_finetuning.py
 ```
 
-To run qLoRA fine-uning on llama3-70B use the following command:
+To run qLoRA fine-tuning on llama3-70B use the following command:
 ```bash
 python Llama3_70B_qLoRA_finetuning.py
 ```