PyTorch is a widely recognized Deep Learning framework that installs the newest CUDA by default, but what if you want to install PyTorch with CUDA 10.2? (The latest CUDA is 11.0, and PyTorch will soon follow up.) If you haven’t upgraded your NVIDIA driver, or can’t upgrade CUDA for lack of root access, you will need to settle for an older version such as CUDA 10.2. This means that, by default, you cannot use the GPU in your PyTorch scripts. How can you solve this?
This guide assumes that you have installed CUDA 10.2 and that you can run Python and a package manager such as pip or conda. Both Miniconda and Anaconda work well, but Miniconda is lightweight. We wrote an article about how to install Miniconda.
In A Nutshell
- First, check that CUDA 10.2 is installed.
- [For pip] Run pip3 install with the specified versions and -f. Here we install the latest PyTorch, 1.7.0:

pip3 install torch==1.7.0 torchvision==0.8.1

Use pip instead of pip3 if you are using Python 2.

Note: PyTorch only supports CUDA 10.2 in versions 1.5.0, 1.5.1, 1.6.0, and 1.7.0 as of the updated date of this guide.

For older versions of PyTorch, you will need to install older versions of CUDA and install PyTorch against them. See our guides on CUDA 10.0 and 10.1.

- [For conda] Run:

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

- Check that PyTorch is installed, for example by creating a random tensor:

x = torch.rand(3, 5)

- Verify that PyTorch is using CUDA 10.2.
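The first step above, checking the installed CUDA version, can be scripted. Below is a minimal sketch; `cuda_toolkit_version` is a hypothetical helper name, and the check assumes the CUDA compiler `nvcc` is on your PATH when the toolkit is installed.

```python
import shutil
import subprocess


def cuda_toolkit_version():
    """Return nvcc's release line, or None if nvcc is not on PATH."""
    if shutil.which("nvcc") is None:
        return None
    out = subprocess.run(["nvcc", "--version"],
                         capture_output=True, text=True).stdout
    # The relevant line looks like:
    # "Cuda compilation tools, release 10.2, V10.2.89"
    for line in out.splitlines():
        if "release" in line:
            return line.strip()
    return None


print(cuda_toolkit_version())
```

If this prints a release line mentioning 10.2, the pip and conda commands above should give you a working GPU build.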
Verify PyTorch is installed
We’ll verify the installation by running a sample PyTorch script to ensure that PyTorch has been set up properly. Here we create a tensor that is randomly initialized.
import torch
print(torch.rand(5, 3))
The output is printed below. Yours will look similar, though the exact numbers will differ.
tensor([[0.5015, 0.4235, 0.8220],
        [0.7789, 0.9325, 0.2616],
        [0.3410, 0.6764, 0.6274],
        [0.4363, 0.5887, 0.6472],
        [0.5301, 0.0650, 0.5824]])
Verify that PyTorch has CUDA support
To check that PyTorch can access the GPU driver and CUDA, use the following Python code to determine whether the CUDA driver is enabled. It should return True.
import torch
torch.cuda.is_available()
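Beyond `is_available()`, a couple of other attributes help confirm that the wheel you installed was in fact built against CUDA 10.2. This is a small sketch assuming PyTorch is installed; on a machine without a GPU, the first line simply prints False.

```python
import torch

print(torch.cuda.is_available())   # True if a usable GPU and driver are found
print(torch.version.cuda)          # CUDA version the wheel was built with, e.g. "10.2"
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # model name of the first GPU
```

`torch.version.cuda` reflects the toolkit PyTorch was compiled against, which is what the `cudatoolkit=10.2` pin controls.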
The following two sections introduce PyTorch and CUDA for the people who are interested.
What is PyTorch?
PyTorch is an open-source Deep Learning framework that is scalable and versatile for testing, and reliable and supportive for deployment. It allows for quick, modular experimentation via an autograd component designed for fast, Python-like execution. With the introduction of PyTorch 1.0, the framework gained graph-based execution, a hybrid front-end that allows for smooth mode switching, collaborative testing, and effective and secure deployment on mobile platforms.
PyTorch has 4 key features according to its homepage.
- PyTorch is production-ready: TorchScript smoothly toggles between eager and graph modes. TorchServe speeds up the production process.
- PyTorch supports distributed training: The torch.distributed backend allows for efficient distributed training and performance optimization in research and development.
- PyTorch has a robust ecosystem: It has an expansive ecosystem of tools and libraries to support applications such as computer vision and NLP.
- PyTorch has native cloud support: It is well recognized for its zero-friction development and fast scaling on key cloud providers.
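The eager/graph toggle mentioned in the first point can be seen in a few lines. A minimal sketch, assuming PyTorch is installed; `Scale` is a made-up module used only for illustration.

```python
import torch


class Scale(torch.nn.Module):
    # Toy module for illustration: doubles its input.
    def forward(self, x):
        return x * 2.0


# Compile the eager-mode module into TorchScript (graph mode).
scripted = torch.jit.script(Scale())
print(scripted(torch.tensor([1.0, 2.0])))  # runs the compiled graph
```

The scripted module behaves like the eager one but can be saved with `scripted.save(...)` and loaded outside Python, which is what makes it production-ready.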
What is CUDA?
CUDA is a general parallel computation architecture and programming model developed for NVIDIA graphical processing units (GPUs). Using CUDA, developers can significantly improve the speed of their computer programs by utilizing GPU resources.
In GPU-accelerated code, the sequential part of the task runs on the CPU for optimized single-threaded performance, while the compute-intensive sections, such as PyTorch code, run on thousands of GPU cores in parallel through CUDA. Developers can code in common languages such as C, C++, and Python while using CUDA, and implement parallelism via extensions in the form of a few simple keywords.
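In PyTorch, that CPU/GPU split looks like the following. This is a minimal sketch that falls back to the CPU when no GPU is present.

```python
import torch

# Sequential setup runs on the CPU; the heavy matrix multiply is offloaded
# to the GPU through CUDA when one is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
a = torch.rand(1024, 1024)
b = torch.rand(1024, 1024)
c = a.to(device) @ b.to(device)   # compute-intensive part, parallel on the GPU
print(c.shape, c.device)
```

The same script runs unchanged on both CPU-only and GPU machines, which makes the `device` pattern a common idiom.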
NVIDIA’s CUDA Toolkit includes everything you need to build GPU-accelerated software, including GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime.