## Overview

## Basic idea

This playbook demonstrates how to fine-tune Vision-Language Models (VLMs) for both image and video understanding tasks on DGX Spark.

With 128GB of unified memory and powerful GPU acceleration, DGX Spark provides an ideal environment for training VRAM-intensive multimodal models that can understand and reason about visual content.

```bash
sh launch.sh

## Enter the mounted directory within the container
cd /vlm_finetuning
```
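
Once inside the container, it is worth confirming that the GPU is visible before moving on. This is a quick sanity check rather than a playbook step:

```bash
# Confirm the container can see the DGX Spark GPU and driver
nvidia-smi
```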

**Note**: The same Docker container and launch commands work for both image and video VLM recipes. The container includes all necessary dependencies, including FFmpeg, Decord, and optimized libraries for both workflows.

## Step 5. [Option A] For image VLM fine-tuning (Wildfire Detection)

#### 5.1. Model download

```bash
hf download Qwen/Qwen2.5-VL-7B-Instruct
```

If you already have a fine-tuned checkpoint, place it in the `saved_model/` folder.
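
By default, `hf download` stores the weights in the Hugging Face cache under your home directory. If you would rather keep model weights on a specific volume, here is a sketch using the standard `HF_HOME` cache override; the path below is just an example, not a playbook requirement:

```bash
# Redirect the Hugging Face cache before downloading (path is illustrative)
export HF_HOME=/vlm_finetuning/hf_cache
hf download Qwen/Qwen2.5-VL-7B-Instruct
```
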
#### 5.2. Download the wildfire dataset from Kaggle and place it in the `data` directory

The wildfire dataset can be found here: https://www.kaggle.com/datasets/abdelghaniaaba/wildfire-prediction-dataset.
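
If you have the Kaggle CLI installed and configured with an API token, a minimal sketch for fetching and unpacking the dataset (the CLI and its token setup are assumptions, not part of this playbook):

```bash
# Download the wildfire dataset and extract it into the data/ directory
kaggle datasets download -d abdelghaniaaba/wildfire-prediction-dataset
unzip wildfire-prediction-dataset.zip -d data/
```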

#### 5.3. Base model inference

Before we start fine-tuning, let's spin up the demo UI to evaluate the base model's performance on this task.

```bash
streamlit run Image_VLM.py
```
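
If you are working on the DGX Spark over SSH rather than at the machine itself, you can forward the demo port to your local browser; the hostname below is a placeholder for your own login:

```bash
# Forward Streamlit's default port (8501) to your local machine
ssh -L 8501:localhost:8501 user@dgx-spark
```
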
Access the streamlit demo at http://localhost:8501/.

When you access the streamlit demo for the first time, the backend triggers vLLM servers to spin up for the base model. You will see a spinner on the demo site as vLLM is being brought up for optimized inference. This step can take up to 15 minutes.
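
To check on the server without watching the spinner, you can poll vLLM's health endpoint from another terminal. The port is an assumption here (8000 is vLLM's default); adjust it to match the backend's configuration:

```bash
# Wait until the vLLM OpenAI-compatible server reports healthy
until curl -sf http://localhost:8000/health; do sleep 10; done
echo "vLLM is up"
```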

#### 5.4. GRPO fine-tuning

We will perform GRPO fine-tuning to add reasoning capabilities to our base model and improve its understanding of the underlying domain. Since you already have the streamlit demo running, scroll to the `GRPO Training` section.

After configuring all the parameters, hit `Start Finetuning` to begin the training process. You will need to wait about 15 minutes for the model to load and for metadata to start appearing on the UI. As training progresses, information such as the loss, step, and GRPO rewards is recorded in a live table.

The default loaded configuration should give you reasonable accuracy, taking 100 steps of training over a period of up to 2 hours. We achieved our best accuracy with around 1000 steps of training, taking close to 16 hours.

After the training process reaches the desired number of steps, the script automatically merges the LoRA weights into the base model; this merge can take about 5 minutes.

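The playbook expects fine-tuned checkpoints in the `saved_model/` folder, so the merged weights most likely land there. A quick way to confirm the merge was written (subfolder names depend on your training configuration):

```bash
# Verify the merged fine-tuned checkpoint exists
ls -lh saved_model/
```
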
Once you stop training, the UI will automatically bring up the vLLM servers for the base model and the newly fine-tuned model.

#### 5.5. Fine-tuned model inference

Now we are ready to perform a comparative analysis between the base model and the fine-tuned model.

Regardless of whether you just spun up the demo or just stopped training, please wait about 15 minutes for the vLLM servers to be brought up.

Scroll down to the `Image Inference` section and enter your prompt in the provided chat box. Upon clicking `Generate`, your prompt will first be sent to the base model and then to the fine-tuned model. You can use the following prompt to quickly test inference:

`Identify if this region has been affected by a wildfire`
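
If the demo's backends expose vLLM's standard OpenAI-compatible API, you can also query a model directly from the command line. Everything below (port, served model name, image URL) is illustrative; check the demo's backend for the actual values:

```bash
# Hypothetical direct request to the base model's vLLM server
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen/Qwen2.5-VL-7B-Instruct",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "text", "text": "Identify if this region has been affected by a wildfire"},
            {"type": "image_url", "image_url": {"url": "https://example.com/satellite_tile.jpg"}}
          ]
        }]
      }'
```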

If you trained your model sufficiently, you should see that the fine-tuned model is able to perform reasoning and provide a concise, accurate answer to the prompt. The reasoning steps are provided in markdown format, while the final answer is bolded at the end of the model's response.

## Step 6. [Option B] For video VLM fine-tuning (Driver Behaviour Analysis)

```
dataset/
└── metadata.jsonl
```
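
Before training, it is worth sanity-checking the annotation file. The field names inside each entry are recipe-specific, so treat the second command as a way to inspect rather than validate:

```bash
# One JSON object per line; eyeball the first entry's fields
wc -l dataset/metadata.jsonl
head -n 1 dataset/metadata.jsonl
```
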
#### 6.2. Model download

> **Note**: These instructions assume you are already inside the Docker container. For container setup, refer to the main project README at `vlm-finetuning/assets/README.md`.

```bash
hf download OpenGVLab/InternVL3-8B
```

#### 6.3. Base model inference

Before fine-tuning our video VLM for this task, let's see how the base InternVL3-8B performs.

```bash
streamlit run Video_VLM.py
```

Access the streamlit demo at http://localhost:8501/.

When you access the streamlit demo for the first time, the backend loads the base model via Hugging Face. You will see a spinner on the demo site as the model is being loaded, which can take up to 10 minutes.

First, let's select a video from our dashcam gallery. Upon clicking the green file-open icon next to a video, the video renders and plays automatically for reference.

If you are proceeding to train a fine-tuned model, ensure that the streamlit demo UI is brought down before training. You can bring it down by interrupting the terminal with the `Ctrl+C` keystroke.

> **Note**: To clear out any extra occupied memory on your system, execute the following command outside the container after interrupting the streamlit server.

```bash
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```
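
To see what this actually frees, you can compare memory usage before and after. `sync` flushes dirty pages to disk, and writing `3` to `drop_caches` releases the page cache along with dentries and inodes:

```bash
free -h                                               # memory before
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'  # flush and drop caches
free -h                                               # memory after
```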

#### 6.4. Run the training notebook

```bash
## Enter the correct directory
cd /vlm_finetuning
```

After training, ensure that you shut down the jupyter kernel in the notebook and clear the occupied memory with the following command outside the container.

```bash
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```

#### 6.5. Fine-tuned model inference

Now we are ready to perform a comparative analysis between the base model and the fine-tuned model.

If you haven't spun up the streamlit demo already, execute the following command. If you have just stopped training and are still within the live UI, skip to the next step.

```bash
streamlit run Video_VLM.py
```

Access the streamlit demo at http://localhost:8501/.

If you trained your model sufficiently, you should see that the fine-tuned model is able to identify the salient events from the video and generate a structured output.

Feel free to play around with additional videos available in the gallery.