Mirror of https://github.com/NVIDIA/dgx-spark-playbooks.git, synced 2026-04-22 18:13:52 +00:00
chore: Regenerate all playbooks

parent 3fb2a79250
commit 7e6f9b332e
@@ -30,7 +30,7 @@ Recipes are specifically for DIGITS SPARK. Please make sure that OS and drivers
 
 ## Ancillary files
 
-ALl files required for fine-tuning are included.
+ALl files required for fine-tuning are included in the folder in [the GitHub repository here](https://gitlab.com/nvidia/dgx-spark/temp-external-playbook-assets/dgx-spark-playbook-assets/-/blob/main/${MODEL}).
 
 ## Time & risk
 
@@ -92,7 +92,7 @@ huggingface-cli login
 ##<Enter n for git credential>
 ```
 
-## Step6: Clone the git repo with fine-tuning recipes
+## Step 6: Clone the git repo with fine-tuning recipes
 
 ```bash
 git clone https://gitlab.com/nvidia/dgx-spark/temp-external-playbook-assets/dgx-spark-playbook-assets/-/blob/main/${MODEL}
@@ -106,7 +106,7 @@ To run LoRA on Llama3-8B use the following command:
 python Llama3_8B_LoRA_finetuning.py
 ```
 
-To run qLoRA fine-uning on llama3-70B use the following command:
+To run qLoRA fine-tuning on llama3-70B use the following command:
 
 ```bash
 python Llama3_70B_qLoRA_finetuning.py
 ```