
How to Install PyTorch with CUDA 9.1

PyTorch is an extremely popular deep learning framework that targets the latest CUDA release by default, but what if you want to use PyTorch with CUDA 9.1? If you have not upgraded your NVIDIA driver, or cannot upgrade CUDA because you lack root access, you may be stuck with an older version such as CUDA 9.1. In that case, PyTorch scripts cannot use the GPU out of the box. This guide shows how to run PyTorch on CUDA 9.1.

Prerequisite

This guide assumes that you have CUDA 9.1 available and that you can run Python and a package manager such as pip or conda. Anaconda and Miniconda both work equally well; Miniconda is simply the lightweight option. We wrote an article about how to install Miniconda.

In A Nutshell

  1. Check if CUDA 9.1 has been installed

    cat /usr/local/cuda/version.txt

    There are also other ways to check CUDA version.
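If you prefer to check from Python, here is a small sketch. Note that /usr/local/cuda is only the conventional install location, so the path may differ on your system:

```python
from pathlib import Path

# Conventional CUDA install location; adjust the path if CUDA lives elsewhere.
version_file = Path("/usr/local/cuda/version.txt")

if version_file.exists():
    # The file contains a line like "CUDA Version 9.1.85"
    print(version_file.read_text().strip())
else:
    print("version.txt not found; try `nvcc --version` instead")
```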

  2. [For pip] Run pip install with specified version and -f

    pip install torch==1.1.0 torchvision==0.3.0 -f https://download.pytorch.org/whl/cu90/torch_stable.html

    Here we install the PyTorch binary built for CUDA 9.0, because PyTorch never officially shipped (i.e., skipped) CUDA 9.1 binaries. The CUDA 9.0 build should nonetheless be compatible with CUDA 9.1.

    Note: PyTorch 1.1.0 is the last release with CUDA 9.0 binaries. (Search for torch- in https://download.pytorch.org/whl/cu90/torch_stable.html.)

    You can also install PyTorch 1.0.1, 1.0.0, 0.4.1, 0.4.0, 0.3.1, or 0.3.0 for CUDA 9.1, but not 1.2.0, 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, or 1.6.0.
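To make the compatibility rule above concrete, the list of CUDA 9.0-compatible releases can be encoded in a small helper. The function name and set are purely illustrative, taken directly from the list above:

```python
# PyTorch releases with prebuilt CUDA 9.0 wheels (per the list above).
CUDA90_WHEELS = {"1.1.0", "1.0.1", "1.0.0", "0.4.1", "0.4.0", "0.3.1", "0.3.0"}

def has_cuda90_wheel(version: str) -> bool:
    """Return True if a prebuilt CUDA 9.0 wheel exists for this PyTorch release."""
    return version in CUDA90_WHEELS

print(has_cuda90_wheel("1.1.0"))  # True
print(has_cuda90_wheel("1.2.0"))  # False
```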

  3. [For conda] Run conda install with cudatoolkit

    conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

    As stated above, PyTorch binary for CUDA 9.0 should be compatible with CUDA 9.1.
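Whichever installer you use, once the install finishes you can confirm which CUDA toolkit the binary was built against. This requires torch to be importable; the exact version strings will depend on your install:

```python
import torch

# The wheel from the cu90 index reports the toolkit it was compiled with.
print(torch.__version__)   # e.g. "1.1.0"
print(torch.version.cuda)  # e.g. "9.0.176"; None for a CPU-only build
```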

  4. Check if PyTorch has been installed

    Open Python and run the following:

    import torch
    x = torch.rand(5, 3)
    print(x)

  5. Verify if CUDA 9.1 is available in PyTorch

    Run Python with

    import torch
    torch.cuda.is_available()

Verify PyTorch is installed

To ensure that PyTorch is set up properly, verify the installation by running a small PyTorch script. Here we create a tensor initialized with random values.

import torch
print(torch.rand(5, 3))

The output is shown below. Yours will look similar, although the exact numbers will differ.

tensor([[0.6066, 0.4646, 0.8260],
        [0.8889, 0.9406, 0.2666],
        [0.4620, 0.6864, 0.6284],
        [0.4464, 0.6888, 0.6882],
        [0.6402, 0.0680, 0.6824]])

Verify that PyTorch has CUDA support

To check whether PyTorch can see both the GPU driver and CUDA, run the Python code below. If everything is set up correctly, it returns True.

import torch
torch.cuda.is_available()
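Beyond the boolean check, a common pattern is to select a device once and create tensors on it, falling back to the CPU when CUDA is unavailable. A minimal sketch:

```python
import torch

# Pick the GPU when CUDA is usable, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors created with `device=` live on that device directly.
x = torch.rand(5, 3, device=device)
print(x.device)
```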

The following two sections introduce PyTorch and CUDA for readers who want more background.

What is PyTorch?

PyTorch is an open-source deep learning platform that is modular, flexible, and robust, and convenient for testing and deployment. It enables fast, scalable experimentation through an autograd system designed for immediate, Python-like execution. With the introduction of PyTorch 1.0, the framework gained graph-based execution and a hybrid front end that lets you switch smoothly between modes, along with efficient and secure deployment on mobile devices.

PyTorch has 4 key features according to its homepage.

  1. PyTorch is production-ready: TorchScript smoothly toggles between eager and graph modes. TorchServe speeds up the production process.
  2. PyTorch supports distributed training: The torch.distributed backend enables efficient distributed training and performance optimization in research and production.
  3. PyTorch has a robust ecosystem: It has an expansive ecosystem of tools and libraries to support applications such as computer vision and NLP.
  4. PyTorch has native cloud support: It is well recognized for its zero-friction development and fast scaling on key cloud providers.
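As a tiny illustration of the eager, Python-like execution mentioned above, PyTorch's autograd computes gradients on the fly. This is a minimal sketch, not tied to any particular feature or release:

```python
import torch

# Eager-mode autograd: compute dy/dx for y = x**2 + 3x at x = 2.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x
y.backward()   # populates x.grad with dy/dx
print(x.grad)  # dy/dx = 2*x + 3 = 7 at x = 2
```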

What is CUDA?

CUDA is a general-purpose parallel computing platform and programming model created for NVIDIA graphics processing units (GPUs). With CUDA, developers can significantly speed up their programs by exploiting GPU resources.

In a GPU-accelerated program, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive part, such as PyTorch code, runs in parallel across thousands of GPU cores via CUDA. Developers can write CUDA code in popular languages such as C, C++, and Python, and express parallelism with a few simple keywords and extensions.

NVIDIA’s CUDA Toolkit provides everything you need to develop GPU-accelerated applications, including GPU-accelerated libraries, a compiler, development tools, and the CUDA runtime.

You can learn more about CUDA in the CUDA Zone and download the toolkit here: https://developer.nvidia.com/cuda-downloads.

Reference: https://pytorch.org/get-started/locally/


By VarHowto Editor
