
How to Install PyTorch with CUDA 11.0

This tutorial explains how to install PyTorch with CUDA 11.0. Unfortunately, as of August 9, 2020, there is no binary release yet, so we will compile PyTorch from source. Before you start, please check your CUDA version to make sure CUDA 11.0 is installed.
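To confirm the toolkit version, you can query nvcc, the compiler that ships with the CUDA toolkit (the exact banner text may vary slightly between installations):

```shell
# Print the CUDA toolkit's version banner and extract the release number.
# "release 11.0" should appear if CUDA 11.0 is installed.
nvcc --version | grep -o 'release [0-9.]*'
```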

Step 0 — Install conda (Miniconda)

It is easier and more hassle-free to compile PyTorch using conda than with pip or bare-metal system package managers such as apt.

We wrote a guide on how to install Miniconda here. Please proceed there before installing PyTorch with CUDA 11.0.

Compared to Anaconda, Miniconda is lightweight and downloads needed packages on demand.

Step 1 — Install dependencies

First we will install all the dependencies of PyTorch.

This includes numpy, the fundamental scientific and numerical package for Python. It will also install build tools such as setuptools, cmake, and ninja.

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests

Add LAPACK support for CUDA 11.0

Note that this step is only needed on Linux. If you use macOS, please skip this step.

Then we need to install MAGMA built for CUDA 11.0 (hence the package name magma-cuda110).

MAGMA provides implementations for CUDA, HIP, Intel Xeon Phi, and OpenCL. Here we are particularly interested in CUDA.

conda install -c pytorch magma-cuda110

Step 2 — Download PyTorch source for CUDA 11.0

First, run git clone to download the latest PyTorch source from GitHub. The --recursive flag downloads the git submodules as well.

git clone --recursive https://github.com/pytorch/pytorch

Then we cd into the pytorch directory, which now becomes our working directory.

cd pytorch

Note: if you have previously cloned the PyTorch git repo, run the following commands to update it.

git submodule sync
git submodule update --init --recursive

The commands above are also useful for pulling in the latest PyTorch changes if you cloned the source a while ago.

Step 3 — Compile and Install PyTorch for CUDA 11.0

With the PyTorch source downloaded and CUDA 11.0 on your computer, we can now compile and install PyTorch.

For Linux, such as Ubuntu 20.04 or 18.04, run

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install

If you use macOS, run

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
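After the build finishes, you can sanity-check the install from outside the source tree (importing torch from inside the pytorch directory would pick up the un-built local package instead). This assumes the compilation succeeded:

```shell
# Leave the source tree first, then check the installed build:
# prints the PyTorch version, the CUDA version it was compiled with,
# and whether a CUDA-capable GPU is currently usable.
cd ..
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```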

[Optional] Change build options

Because we used cmake to compile PyTorch, you can optionally change the configuration of cmake variables.

The following commands can be used to change the pre-detected directories for CuDNN or BLAS, for example.

For Linux, run

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

If you use macOS, run

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build


To install PyTorch with CUDA 11.0, you will have to compile and install PyTorch from source, as of August 9th, 2020. There are a few steps: install conda, install PyTorch's dependencies and the CUDA 11.0 build of MAGMA, download the PyTorch source from GitHub, and finally compile and install it with cmake.


[Further Reading] What is PyTorch?

PyTorch is an open-source deep learning platform that is flexible for research and scalable, sturdy, and well supported for production deployment. It allows fast, modular experimentation through its autograd system, designed for quick, Python-like execution.

PyTorch has 4 key features according to its homepage.

  1. PyTorch is ready for production: TorchScript switches seamlessly between eager and graph modes, and TorchServe accelerates the path to production.
  2. PyTorch facilitates distributed training: the torch.distributed backend enables scalable distributed training and performance optimization in research and production.
  3. PyTorch has a robust ecosystem: It has an expansive tool and library ecosystem to support applications such as computer vision and NLP.
  4. PyTorch has native cloud support: it is well supported on major cloud platforms, providing frictionless development and easy scaling.

[Further Reading] What is CUDA?

CUDA is a general parallel programming and computation platform built for NVIDIA graphics processing units (GPUs). With CUDA, developers can significantly speed up their programs by harnessing the power of GPUs.

In a GPU-accelerated program, the sequential portion of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion, such as PyTorch code, runs in parallel across thousands of GPU cores via CUDA. Developers can use CUDA from common languages such as C, C++, and Python, expressing parallelism with a few basic keywords and extensions.
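As a minimal sketch of this CPU/GPU split (assuming PyTorch is installed), the CPU handles the sequential setup and the GPU, when available, runs the compute-intensive part:

```shell
python - <<'EOF'
import torch

# Sequential setup runs on the CPU.
x = torch.randn(1024, 1024)

# Move the data to GPU memory if CUDA is available; the matrix
# multiply below then runs in parallel across thousands of GPU cores.
if torch.cuda.is_available():
    x = x.cuda()

y = x @ x
print(y.device)  # "cuda:0" if a GPU was used, otherwise "cpu"
EOF
```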

NVIDIA’s CUDA Toolkit includes everything you need to build GPU-accelerated applications, including GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime.

