
How to Install PyTorch with CUDA 10.0

PyTorch is a popular Deep Learning framework, and by default it installs against the latest CUDA release. If you haven’t upgraded your NVIDIA driver, or you cannot upgrade CUDA because you don’t have root access, you may need to settle for an older version such as CUDA 10.0. By default, however, that means you cannot use the GPU in your PyTorch models. How can you fix it?

Prerequisite

This tutorial assumes you have CUDA 10.0 installed and that you can run Python and a package manager such as pip or conda. Miniconda and Anaconda are both fine. We wrote an article on how to install Miniconda.
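If you want to confirm these prerequisites in one place, the short Python sketch below checks the interpreter version, looks for pip and conda on your PATH, and reads the CUDA version file. The /usr/local/cuda path is an assumption and may differ on your system.

import shutil
import sys

# Confirm the Python interpreter version.
print("Python:", sys.version.split()[0])

# Confirm a package manager is on PATH (pip and/or conda).
print("pip found:", shutil.which("pip") is not None)
print("conda found:", shutil.which("conda") is not None)

# Print the CUDA version file if it exists at the default location
# (assumption: CUDA is installed under /usr/local/cuda).
try:
    with open("/usr/local/cuda/version.txt") as f:
        print(f.read().strip())
except FileNotFoundError:
    print("CUDA version file not found; check your CUDA install path.")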

5 Steps to Install PyTorch With CUDA 10.0

  1. Check if CUDA 10.0 is installed

    cat /usr/local/cuda/version.txt

  2. [For pip] Run pip install with the specified versions and -f

    pip install torch==1.4.0 torchvision==0.5.0 -f https://download.pytorch.org/whl/cu100/torch_stable.html

    Note: PyTorch builds for CUDA 10.0 are only available up to version 1.4.0 (search for torch- in https://download.pytorch.org/whl/cu100/torch_stable.html). You can confirm which build you got with the sketch after this list.

  3. [For conda] Run conda install with cudatoolkit

    conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

  4. Verify PyTorch is installed

    Run Python with

    import torch
    x = torch.rand(5, 3)
    print(x)

  5. Verify PyTorch is using CUDA 10.0

    Run Python with

    import torch
    torch.cuda.is_available()
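Once the install finishes, you can double-check that the build you received actually targets CUDA 10.0. The following is a minimal sketch; the exact version string (for example, 1.4.0+cu100 from pip, or plain 1.4.0 from conda) depends on which command you used.

import torch

# Installed PyTorch version; pip wheels from the cu100 index carry a +cu100 suffix.
print(torch.__version__)

# CUDA version this PyTorch binary was built against; expect '10.0' here.
print(torch.version.cuda)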

Verify PyTorch is installed

To ensure that PyTorch has been set up properly, we will validate the installation by running a sample PyTorch script. Here we are going to create a randomly initialized tensor.

import torch
print(torch.rand(5, 3))

Output similar to the following will be printed; the exact numbers will differ.

tensor([[0.3380, 0.3845, 0.3217],
        [0.8337, 0.9050, 0.2650],
        [0.2979, 0.7141, 0.9069],
        [0.1449, 0.1132, 0.1375],
        [0.4675, 0.3947, 0.1426]])

Verify if CUDA is available to PyTorch

To test whether your GPU driver and CUDA are accessible to PyTorch, run the following Python code to determine whether the CUDA driver is enabled:

import torch
torch.cuda.is_available()
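If everything is set up correctly, the call above returns True. The slightly longer sketch below, which assumes at least one CUDA-capable GPU is visible to the driver, also reports which device PyTorch will use and runs a tiny computation on it.

import torch

if torch.cuda.is_available():
    # Name of the first visible GPU.
    print(torch.cuda.get_device_name(0))

    # Run a small matrix multiplication on the GPU to confirm it works.
    a = torch.rand(3, 3, device="cuda")
    b = torch.rand(3, 3, device="cuda")
    print(a @ b)
else:
    print("CUDA is not available; PyTorch will fall back to the CPU.")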

For those who are interested, the following two sections introduce PyTorch and CUDA.

What is PyTorch?

PyTorch is an open-source Deep Learning platform that is scalable and versatile for experimentation and reliable for deployment. It allows for quick, modular experimentation via an autograd component designed for fast, Python-like execution. With the introduction of PyTorch 1.0, the framework now has graph-based execution, a hybrid front-end that allows for smooth mode switching, collaborative testing, and effective and secure deployment on mobile platforms.
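As a small illustration of the autograd component mentioned above, the sketch below builds a tiny computation and lets PyTorch derive the gradient automatically; it is only meant to show the idea.

import torch

# Create a tensor that records operations for automatic differentiation.
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Build a small computation: y = sum(x**2).
y = (x ** 2).sum()

# Backpropagate; autograd fills x.grad with dy/dx = 2*x.
y.backward()
print(x.grad)  # tensor([4., 6.])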

PyTorch has 4 key features according to its homepage.

  1. PyTorch is production-ready: TorchScript smoothly toggles between eager and graph modes (see the sketch after this list), and TorchServe speeds up the path to production.
  2. PyTorch supports distributed training: the torch.distributed backend allows for efficient distributed training and performance optimization in research and development.
  3. PyTorch has a robust ecosystem: an expansive ecosystem of tools and libraries supports applications such as computer vision and NLP.
  4. PyTorch has native cloud support: it is well recognized for its zero-friction development and fast scaling on the major cloud providers.
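To give a flavor of how TorchScript toggles between eager and graph modes, here is a minimal sketch that scripts an ordinary Python function into a graph representation. It is an illustration of the idea rather than a production recipe.

import torch

# An ordinary (eager-mode) function written in plain Python.
def clipped_sum(x: torch.Tensor) -> torch.Tensor:
    return torch.clamp(x, min=0.0).sum()

# Compile it to TorchScript, producing a graph-mode version
# that can be serialized and run without the Python interpreter.
scripted = torch.jit.script(clipped_sum)

x = torch.randn(4)
print(clipped_sum(x))  # eager execution
print(scripted(x))     # graph execution, same result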

What is CUDA?

CUDA is a general parallel computation architecture and programming model developed for NVIDIA graphical processing units (GPUs). Using CUDA, developers can significantly improve the speed of their computer programs by utilizing GPU resources.

In GPU-accelerated code, the sequential part of the task runs on the CPU for optimized single-threaded performance, while the compute-intensive part, such as PyTorch code, runs in parallel on thousands of GPU cores via CUDA. Developers can code in common languages such as C, C++, and Python while using CUDA, and express parallelism through extensions in the form of a few simple keywords.
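In PyTorch terms, this division of labor usually comes down to where a tensor lives. The sketch below uses the common .to(device) pattern to send the compute-heavy part to the GPU when one is available; it is a generic example, not code from this tutorial's install steps.

import torch

# Sequential setup runs on the CPU; heavy math is sent to the GPU if present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.rand(1024, 1024).to(device)
b = torch.rand(1024, 1024).to(device)

# The matrix multiplication runs on thousands of GPU cores when device is cuda.
c = a @ b
print(c.device)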

NVIDIA’s CUDA Toolkit includes everything you need to build GPU-accelerated software, including GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime.

You can learn more about CUDA in the CUDA Zone and download it here: https://developer.nvidia.com/cuda-downloads.


Reference: https://pytorch.org/get-started/locally/


By VarHowto Editor


3 replies on “How to Install PyTorch with CUDA 10.0”

I ran the above command on Windows but got an error:

ERROR: Could not find a version that satisfies the requirement torch==1.4.0+cu100 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.4.1, 1.0.0, 1.0.1, 1.1.0, 1.2.0, 1.2.0+cpu, 1.2.0+cu92, 1.3.0, 1.3.0+cpu, 1.3.0+cu92, 1.3.1, 1.3.1+cpu, 1.3.1+cu92, 1.4.0, 1.4.0+cpu, 1.4.0+cu92, 1.5.0, 1.5.0+cpu, 1.5.0+cu101, 1.5.0+cu92, 1.5.1, 1.5.1+cpu, 1.5.1+cu101, 1.5.1+cu92, 1.6.0, 1.6.0+cpu, 1.6.0+cu101, 1.7.0, 1.7.0+cpu, 1.7.0+cu101, 1.7.0+cu110, 1.7.1, 1.7.1+cpu, 1.7.1+cu101, 1.7.1+cu110, 1.8.0, 1.8.0+cpu, 1.8.0+cu101, 1.8.0+cu111, 1.8.1, 1.8.1+cpu, 1.8.1+cu101, 1.8.1+cu102, 1.8.1+cu111)
ERROR: No matching distribution found for torch==1.4.0+cu100


The instructions yield the following error when installing torch using pip:

Could not find a version that satisfies the requirement torch==1.5.0+cu100 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 0.3.0.post4, 0.3.1, 0.4.0, 0.4.1, 1.0.0, 1.0.1, 1.0.1.post2, 1.1.0, 1.2.0, 1.2.0+cpu, 1.2.0+cu92, 1.3.0, 1.3.0+cpu, 1.3.0+cu100, 1.3.0+cu92, 1.3.1, 1.3.1+cpu, 1.3.1+cu100, 1.3.1+cu92, 1.4.0, 1.4.0+cpu, 1.4.0+cu100, 1.4.0+cu92, 1.5.0, 1.5.0+cpu, 1.5.0+cu101, 1.5.0+cu92)
No matching distribution found for torch==1.5.0+cu100

