Portfolio Optimization
GPU-Accelerated portfolio optimization using cuOpt and cuML
Overview
Basic idea
This playbook demonstrates an end-to-end GPU-accelerated workflow using NVIDIA cuOpt and NVIDIA cuML to solve large-scale portfolio optimization problems, using the Mean-CVaR (Conditional Value-at-Risk) model, in near real-time.
Portfolio Optimization (PO) involves solving high-dimensional, non-linear numerical optimization problems to balance risk and return. Modern portfolios often contain thousands of assets, making traditional CPU-based solvers too slow for advanced workflows. By moving the computational heavy lifting to the GPU, this solution dramatically reduces computation time.
What you'll accomplish
You will implement a pipeline that provides tools for performance evaluation, strategy backtesting, benchmarking, and visualization. The workflow includes:
- GPU-Accelerated Optimization: Leveraging NVIDIA cuOpt LP/MILP solvers.
- Data-Driven Risk Modeling: Implementing CVaR as a scenario-based risk measure that models tail risks without making assumptions about asset return distributions.
- Scenario Generation: Using GPU-accelerated Kernel Density Estimation (KDE) via NVIDIA cuML to model return distributions.
- Real-World Constraint Management: Implementing constraints including concentration limits, leverage constraints, turnover limits, and cardinality constraints.
- Comprehensive Backtesting: Evaluating portfolio performance with specific tools for testing rebalancing strategies.
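To make the scenario-generation and CVaR steps above concrete, here is a minimal CPU sketch in plain NumPy. The playbook itself uses cuML's GPU-accelerated KDE and real market data; the returns, bandwidth, and equal weights below are made-up illustrations. The sketch samples synthetic return scenarios from a Gaussian KDE fit to "historical" data, then computes the empirical CVaR of an equal-weight portfolio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for historical daily returns of 4 assets (the playbook uses real data)
hist = rng.normal(0.0005, 0.01, size=(500, 4))

def kde_sample(data, n, bandwidth, rng):
    """Sample from a Gaussian KDE fit to `data`: pick a historical row, add N(0, h^2) noise."""
    idx = rng.integers(0, len(data), size=n)
    return data[idx] + rng.normal(0.0, bandwidth, size=(n, data.shape[1]))

scenarios = kde_sample(hist, 10_000, bandwidth=0.002, rng=rng)

# Losses of an equal-weight portfolio across the generated scenarios
w = np.full(4, 0.25)
losses = -scenarios @ w

def cvar(losses, beta=0.95):
    """Empirical CVaR: mean loss in the worst (1 - beta) fraction of scenarios."""
    var = np.quantile(losses, beta)          # Value-at-Risk at level beta
    return losses[losses >= var].mean()

print(f"95% CVaR: {cvar(losses):.4f}")
```

Because CVaR averages only the tail beyond VaR, it captures scenario-specific stress that a variance-based measure smooths away, which is the motivation for the Mean-CVaR model used here.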
What to know before starting
- Required Skills (you'll get it):
  - Basic familiarity with the Terminal and Linux command line
  - Basic understanding of Docker containers
  - Basic knowledge of Jupyter Notebooks and JupyterLab
  - Basic Python knowledge
  - Basic knowledge of data science and machine learning concepts
  - Basic knowledge of what the stock market and stocks are
- Optional Skills (you'll enjoy it):
  - Background in financial services, especially in quantitative finance and portfolio management
  - Moderate experience programming algorithms and strategies in Python using machine learning concepts
- Terms to know:
  - CVaR vs. Mean-Variance: Unlike traditional mean-variance models, this workflow uses Conditional Value-at-Risk (CVaR) to capture nuances of risk, specifically tail risk or scenario-specific stresses.
  - Linear Programming: CVaR reformulates the risk-return tradeoff as a scenario-based linear program where the problem size scales with the number of scenarios, which is why GPU acceleration is critical.
  - Benchmarking: The pipeline includes built-in tools to streamline the benchmarking process against standard CPU-based libraries to validate performance gains.
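The scenario-based linear program mentioned above can be sketched with an open-source CPU solver. The snippet below builds the standard Rockafellar–Uryasev min-CVaR LP and solves it with SciPy's `linprog` as a CPU stand-in — the playbook uses cuOpt instead, and the asset count, scenario count, and simulated returns are illustrative assumptions. Return targets and the other real-world constraints would be added as extra rows:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scen, beta = 5, 1000, 0.95
returns = rng.normal(0.0005, 0.01, size=(n_scen, n_assets))  # simulated scenarios

# Decision vector x = [w (asset weights), t (VaR estimate), u (per-scenario tail excess)]
n = n_assets + 1 + n_scen
c = np.zeros(n)
c[n_assets] = 1.0                                # t
c[n_assets + 1:] = 1.0 / ((1 - beta) * n_scen)   # average of the u_s

# Tail constraints: loss_s - t <= u_s, i.e. -r_s.w - t - u_s <= 0
A_ub = np.zeros((n_scen, n))
A_ub[:, :n_assets] = -returns
A_ub[:, n_assets] = -1.0
A_ub[:, n_assets + 1:] = -np.eye(n_scen)
b_ub = np.zeros(n_scen)

# Fully invested, long-only; t is free, u >= 0
A_eq = np.zeros((1, n))
A_eq[0, :n_assets] = 1.0
b_eq = [1.0]
bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights = res.x[:n_assets]
print("min 95% CVaR:", res.fun, "weights:", weights.round(3))
```

Note that the constraint matrix has one row and one `u` variable per scenario — with tens of thousands of scenarios the LP grows large quickly, which is exactly where the GPU solver pays off.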
Prerequisites
Hardware Requirements:
- NVIDIA Grace Blackwell GB10 Superchip System (DGX Spark)
- Minimum 40 GB of unified memory free for the Docker container and GPU-accelerated data processing
- At least 30 GB of available storage space for the Docker container and data files
- High-speed internet connection recommended
Software Requirements:
- NVIDIA DGX OS with working NVIDIA and CUDA drivers
- Docker
- Git
Ancillary files
All required assets can be found in the Portfolio Optimization repository. In the running playbook, they will all be found under the playbook folder.
- `cvar_basic.ipynb`: Main playbook notebook.
- `/setup/README.md`: Quick Start Guide to the playbook environment.
- `/setup/start_playbook.sh`: Script that starts the install of the playbook in a Docker container.
- `/setup/setup_playbook.sh`: Configures the Docker container before the user enters the JupyterLab environment.
- `/setup/pyproject.toml`: Lists the libraries that the commands in `setup_playbook.sh` install into the playbook environment.
- `cuDF`, `cuML`, and `cuGraph` folders: More example notebooks to continue your GPU-accelerated data science journey. These are part of the Docker container when you start it.
Time & risk
- Estimated Time: ~20 minutes for the first run
- Total Notebook Processing Time: Approximately 7 minutes for the full pipeline
- Risks: Minimal, as this runs in a Docker container
- Rollback: Stop the Docker container and remove the cloned repository to fully remove the installation
- Last Updated: 01/21/2026
  - Updated the `git clone` command with the correct project path
Instructions
Step 1. Verify your environment
Let's first verify that you have a working GPU, git, and Docker. Open up Terminal, then copy and paste in the below commands:
```shell
nvidia-smi
git --version
docker --version
```
- `nvidia-smi` will output information about your GPU. If it doesn't, your GPU is not properly configured.
- `git --version` will print something like `git version 2.43.0`. If you get an error saying that git is not installed, please reinstall it.
- `docker --version` will print something like `Docker version 28.3.3, build 980b856`. If you get an error saying that Docker is not installed, please reinstall it.
Step 2. Installation
Open up Terminal, then copy and paste in the below commands:
```shell
git clone https://github.com/NVIDIA/dgx-spark-playbooks
cd dgx-spark-playbooks/nvidia/portfolio-optimization/assets
bash ./setup/start_playbook.sh
```
start_playbook.sh will:
- pull the RAPIDS 25.10 Notebooks Docker container
- build all the environments needed for the playbook in the container using `setup_playbook.sh`
- start JupyterLab
Please keep the Terminal window open while using the playbook.
You can access your JupyterLab server in three ways:
- at `http://127.0.0.1:8888` if running locally on the DGX Spark
- at `http://<SPARK_IP>:8888` if using your DGX Spark headless over your network
- by creating an SSH tunnel with `ssh -L 8888:localhost:8888 username@spark-IP` in Terminal, then going to `http://127.0.0.1:8888` in the browser on your host machine
Once in Jupyterlab, you'll be greeted with a directory containing cvar_basic.ipynb, and the folders cudf, cuml and cugraph.
- `cvar_basic.ipynb` is the playbook notebook. Open it by double-clicking the file.
- The `cudf`, `cuml`, and `cugraph` folders contain the standard RAPIDS library example notebooks to help you continue exploring.
- `playbook` contains the playbook files. The contents of this folder are read-only inside the rootless Docker container.
If you want to install any of the playbook notebooks on your own system, check out the READMEs in the folders that accompany the notebooks.
Step 3. Run the notebook
Once in JupyterLab, all you have to do is run cvar_basic.ipynb.
Before you start running the cells in the notebook, please change the kernel to "Portfolio Optimization" as per the instructions in the notebook. Failure to do so will cause errors by the second code cell. If you have already started, set the correct kernel, restart it, and try again.
You can use Shift + Enter to manually run each cell at your own pace, or Run > Run All to run all the cells.
Once you're done with exploring the cvar_basic notebook, you can explore other RAPIDS notebooks by going into the folders, selecting other notebooks, and doing the same thing.
Step 4. Download your work
Since the Docker container is not privileged and cannot write back to the host system, use JupyterLab to download any files you want to keep before the Docker container is shut down.
Simply right-click the file you want in the file browser and click Download in the drop-down menu.
Step 5. Cleanup
Once you have downloaded all your work, go back to the Terminal window where you started running the playbook.
In the Terminal window:
- Press `Ctrl + C`
- At the prompt, either quickly enter `y` and press `Enter`, or press `Ctrl + C` again
- The Docker container will then shut down
Warning
This will delete ALL data that wasn't already downloaded from the Docker container. The browser window may still show cached files if it is still open.
Step 6. Next Steps
Once you're comfortable with this foundational workflow, please explore these advanced portfolio optimization topics, in any order, in the NVIDIA AI Blueprints:
- `efficient_frontier.ipynb`: Efficient Frontier Analysis. This notebook demonstrates how to:
- Generate the efficient frontier by solving multiple optimization problems
- Visualize the risk-return tradeoff across different portfolio configurations
- Compare portfolios along the efficient frontier
- Leverage GPU acceleration to quickly compute multiple optimal portfolios
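As a rough picture of what the efficient-frontier notebook computes, the toy sketch below traces the feasible risk-return region with random long-only portfolios; the upper-left edge of that cloud approximates the frontier. The three-asset expected returns and covariance matrix are invented for illustration (the notebook solves a sequence of optimization problems instead of sampling):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([0.08, 0.12, 0.15])                 # assumed annual expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])              # assumed covariance matrix

# Random long-only, fully invested portfolios
w = rng.dirichlet(np.ones(3), size=20_000)
rets = w @ mu                                          # expected return of each portfolio
risks = np.sqrt(np.einsum("ij,jk,ik->i", w, cov, w))   # volatility of each portfolio

# The frontier portfolio at a given risk budget is the one with the highest return
budget = risks <= 0.25
best = np.argmax(np.where(budget, rets, -np.inf))
print(f"best portfolio under 25% vol: ret={rets[best]:.3f}, vol={risks[best]:.3f}")
```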
- `rebalancing_strategies.ipynb`: Dynamic Portfolio Rebalancing. This notebook introduces dynamic portfolio management techniques:
- Time-series backtesting framework
- Testing various rebalancing strategies (periodic, threshold-based, etc.)
- Evaluating the impact of transaction costs on portfolio performance
- Analyzing strategy performance over different market conditions
- Comparing multiple rebalancing approaches
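A bare-bones version of the periodic-rebalancing backtest described above might look like the following sketch, with synthetic returns, a made-up target mix, and a flat 10 bps transaction cost (the notebook's framework is far more complete):

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0004, 0.01, size=(252, 3))   # one simulated year, 3 assets
target = np.array([0.5, 0.3, 0.2])                  # illustrative target weights
tc = 0.001                                          # 10 bps cost on traded notional

def backtest(returns, target, rebalance_every, tc):
    """Periodic-rebalancing backtest: let weights drift with returns, snap back on schedule."""
    w, value = target.copy(), 1.0
    for t, r in enumerate(returns):
        growth = 1.0 + r
        value *= w @ growth                         # portfolio gross return for the day
        w = w * growth / (w @ growth)               # weights drift with realized returns
        if (t + 1) % rebalance_every == 0:
            turnover = np.abs(target - w).sum()
            value *= 1.0 - tc * turnover            # pay transaction cost to rebalance
            w = target.copy()
    return value

for freq in (5, 21, 63):                            # weekly, monthly, quarterly
    print(f"rebalance every {freq:2d} days -> final value {backtest(returns, target, freq, tc):.4f}")
```

Sweeping `rebalance_every` like this is the simplest form of the strategy comparison the notebook performs; threshold-based variants trigger the snap-back when turnover exceeds a band instead of on a calendar schedule.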
If you'd like to learn more about formulating portfolio optimization problems using similar risk–return frameworks, check out the DLI course: Accelerating Portfolio Optimization
Step 7. Further Support
For questions or issues, please visit:
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| Docker is not found. | Docker may have been uninstalled, as it is preinstalled on your DGX Spark. | Please install Docker using their convenience script: `curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh`. You will be prompted for your password. |
| Docker command unexpectedly exits with a "permissions" error. | Your user is not part of the `docker` group. | Open Terminal and run: `sudo groupadd docker && sudo usermod -aG docker $USER`. You will be prompted for your password. Then close the Terminal, open a new one, and try again. |
| Docker container download, environment build, or data download fails. | There was either a connectivity issue or a resource may be temporarily unavailable. | You may need to try again later. If this persists, please reach out to us! |
Note
DGX Spark uses a Unified Memory Architecture (UMA), which enables dynamic memory sharing between the GPU and CPU. With many applications still updating to take advantage of UMA, you may encounter memory issues even when within the memory capacity of DGX Spark. If that happens, manually flush the buffer cache with:
```shell
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
```
For latest known issues, please review the DGX Spark User Guide.