
How to Install PyTorch with CUDA 9.0

PyTorch is a very popular Deep Learning framework that supports the latest CUDA by default, but what if you want to use PyTorch with CUDA 9.0? If you have not updated the NVIDIA driver, or cannot upgrade CUDA because you lack root access, you may have to settle for an older version such as CUDA 9.0. In that case the default PyTorch build cannot use your GPU. So how can you run PyTorch with CUDA 9.0?

Prerequisite

This guide assumes you have CUDA 9.0 available and that you can run Python and a package manager such as pip or conda. Anaconda and Miniconda both work equally well, but Miniconda is more lightweight. We wrote an article about how to install Miniconda.
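
To confirm the prerequisites, you can check that Python and your package manager respond from a terminal. This is just a quick sanity check; the exact version numbers will vary from machine to machine.

python --version
pip --version
# or, if you use conda instead of pip:
conda --version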

In A Nutshell

  1. Check if CUDA 9.0 is installed (an alternative check is sketched just after this list)

    cat /usr/local/cuda/version.txt

  2. [For pip] Run pip install with specified version and -f

    pip install torch==1.1.0 torchvision==0.3.0 -f https://download.pytorch.org/whl/cu90/torch_stable.html

    Note: PyTorch provides CUDA 9.0 binaries only up to version 1.1.0. (Search for torch- in https://download.pytorch.org/whl/cu90/torch_stable.html.)

    You can also install PyTorch 1.0.1, 1.0.0, 0.4.1, 0.4.0, 0.3.1, or 0.3.0, but not 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, or 1.6.0.

  3. [For conda] Run conda install with cudatoolkit (9.0)

    conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

  4. Check whether PyTorch is installed

    Open Python and test the following code

    import torch
    x = torch.rand(5, 3)
    print(x)

  5. Verify if CUDA 9.0 is available in PyTorch

    Run Python with

    import torch
    torch.cuda.is_available()
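
If /usr/local/cuda/version.txt does not exist on your machine, the commands below are a common alternative way to check the CUDA toolkit and driver versions (this assumes nvcc is on your PATH, e.g. in /usr/local/cuda/bin).

nvcc --version     # reports the CUDA compiler/toolkit version
nvidia-smi         # reports the NVIDIA driver version and the visible GPUs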

Verify PyTorch is installed

We can verify that the PyTorch CUDA 9.0 installation is set up properly by running a short sample Python script. Here we create a randomly initialized tensor.

import torch
print(torch.rand(5, 3))

The output is printed below. Your output will look similar, although the exact numbers will differ.

tensor([[0.5055, 0.4535, 0.8250],
        [0.7789, 0.9305, 0.2666],
        [0.3510, 0.6764, 0.6274],
        [0.4363, 0.5887, 0.6882],
        [0.5301, 0.0680, 0.5824]])
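
If you also want to confirm which release was installed, torch exposes its version string. With the pip command above you would expect to see 1.1.0, but the exact value depends on the wheel you chose.

import torch

# Print the installed PyTorch version, e.g. "1.1.0" if you followed the pip step above
print(torch.__version__)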

Verify that PyTorch has CUDA support

To check whether PyTorch can use both the GPU driver and CUDA 9.0, run the Python code below. If CUDA 9.0 is set up correctly, torch.cuda.is_available() returns True.

import torch
torch.cuda.is_available()
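
As an optional follow-up, you can also ask PyTorch which CUDA version it was built against and which GPU it sees. These calls exist in the PyTorch releases listed above; the device name printed will depend on your hardware.

import torch

if torch.cuda.is_available():
    # The CUDA version PyTorch was compiled against (expected to be "9.0" here)
    print(torch.version.cuda)
    # The name of the first visible GPU (depends on your machine)
    print(torch.cuda.get_device_name(0))
else:
    print("CUDA is not available to PyTorch")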

The following two sections briefly introduce PyTorch and CUDA for readers who want more background.

What is PyTorch?

PyTorch is an open-source Deep Learning framework that is robust, friendly, and scalable enough for testing and deployment. It allows for easy, flexible experimentation through an autograd framework designed for simple, Python-like execution (a minimal example is sketched after the feature list below). With the release of PyTorch 1.0, the framework gained graph-based execution and a hybrid front-end that allows seamless switching between eager and graph modes, along with distributed training and efficient, secure deployment on mobile devices.

PyTorch has 4 key features according to its homepage.

  1. PyTorch is production-ready: TorchScript smoothly toggles between eager and graph modes, and TorchServe speeds up the path to production.
  2. PyTorch supports distributed training: The torch.distributed backend allows for efficient distributed training and performance optimization in research and production.
  3. PyTorch has a robust ecosystem: An expansive ecosystem of tools and libraries supports applications such as computer vision and NLP.
  4. PyTorch has native cloud support: It is well supported on major cloud platforms, providing frictionless development and easy scaling.
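
As an illustration of the autograd-based experimentation mentioned above, here is a minimal sketch that computes a gradient; the tensor values are arbitrary.

import torch

# Create a tensor that tracks gradients
x = torch.tensor([2.0, 3.0], requires_grad=True)

# Build a small computation: y = sum(x ** 2)
y = (x ** 2).sum()

# Backpropagate to populate x.grad with dy/dx = 2 * x
y.backward()

print(x.grad)  # tensor([4., 6.])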

What is CUDA?

CUDA is a general-purpose parallel programming and computing platform built for NVIDIA graphics processing units (GPUs). With CUDA, developers can dramatically increase the performance of their programs by tapping into GPU resources.

In a GPU-accelerated program, the sequential portion of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion, such as PyTorch code, runs in parallel across thousands of GPU cores via CUDA. When using CUDA, developers can write code in popular languages such as C, C++, and Python, and express parallelism through a few simple keywords added as language extensions.
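
In PyTorch terms, this division of labor typically looks like the sketch below: the Python control flow stays on the CPU, while a tensor operation is dispatched to the GPU when CUDA is available. The sizes and the operation are arbitrary.

import torch

# Fall back to the CPU when no CUDA device is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The compute-intensive part: a matrix multiplication executed on the selected device
a = torch.rand(1000, 1000, device=device)
b = torch.rand(1000, 1000, device=device)
c = a @ b

print(c.device)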

NVIDIA’s CUDA Toolkit includes everything you need to build GPU-accelerated applications, including GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime.

You can learn more about CUDA in the CUDA Zone and download the toolkit here: https://developer.nvidia.com/cuda-downloads.

Reference: https://pytorch.org/get-started/locally/


By VarHowto Editor
