Here you will learn how to check the CUDA version for TensorFlow. The three methods are the CUDA toolkit's nvcc, the NVIDIA driver's nvidia-smi, and simply checking a file.
Prerequisite
Before we begin, you should have the NVIDIA driver installed on your system, as well as the NVIDIA CUDA toolkit. We also assume you have TensorFlow installed.
To check whether TensorFlow is using the GPU and how many GPUs are available on your system, run
import tensorflow as tf
print("# GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
You should see output similar to this:
# GPUs Available: 1
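If your TensorFlow release is recent enough (roughly 2.3 and later), you can also ask TensorFlow directly which CUDA version it was built against. This is a minimal sketch, assuming tf.sysconfig.get_build_info() exists in your installed version; the dictionary keys may differ slightly between releases.
import tensorflow as tf

# Reports the CUDA/cuDNN versions this TensorFlow build was compiled against
# (not necessarily the versions installed on your system).
build = tf.sysconfig.get_build_info()
print("CUDA version TensorFlow was built with: ", build.get("cuda_version"))
print("cuDNN version TensorFlow was built with:", build.get("cudnn_version"))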
Method 1 — Use nvcc to check CUDA version for TensorFlow
If you have installed the CUDA toolkit package, either from Ubuntu's or NVIDIA's official Ubuntu repository through sudo apt install nvidia-cuda-toolkit, or by downloading it from NVIDIA's website and installing it manually, you will have nvcc in your path ($PATH) and its location will be /usr/bin/nvcc (check by running which nvcc).
To check the CUDA version with nvcc for TensorFlow, execute
nvcc --version
You should see output similar to the following. The last line shows your CUDA version; here it is 10.1. Yours may vary, and may be 10.0 or 10.2 instead.
vh@varhowto-com:~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
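If you need the version number in a script rather than on the screen, a small Python sketch like the one below can parse the nvcc output. This assumes nvcc is on your $PATH; the regular expression simply pulls the "release X.Y" part out of the last line.
import re
import subprocess

# Run `nvcc --version` and extract the "release X.Y" part of the output.
output = subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
match = re.search(r"release (\d+\.\d+)", output)
print("CUDA version (from nvcc):", match.group(1) if match else "not found")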
What is nvcc?
nvcc is the NVIDIA CUDA Compiler, hence the name. It is the main wrapper for the CUDA compiler suite. Beyond reporting its version, you can use nvcc to compile and link both host and GPU code. Check out the manpage of nvcc for more information.
Method 2 — Use nvidia-smi from the NVIDIA Linux driver
The second way to check the CUDA version for TensorFlow is to run nvidia-smi, which comes with your NVIDIA driver installation, specifically the NVIDIA-utils package. You can install the NVIDIA driver either from Ubuntu's official repository or from the NVIDIA website.
$ which nvidia-smi
/usr/bin/nvidia-smi
To use nvidia-smi to check the CUDA version, simply run
nvidia-smi
You will see output similar to the following. The CUDA version is shown at the top right of the output; here it is 10.2. Again, yours might vary if you installed 10.0, 10.1, or even the older 9.0.
In addition to the CUDA version, nvidia-smi shows further details such as the driver version (440.64 in the output below), the GPU name, fan speed, power consumption and capacity, and memory usage. It also lists the processes currently using the GPU, which is helpful if you want to verify that your TensorFlow model or software actually runs on the GPU.
Here is the full text output:
Mon Aug 10 23:22:16 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64 Driver Version: 440.64 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1070 Off | 00000000:01:00.0 On | N/A |
| 33% 47C P0 29W / 151W | 1914MiB / 8116MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2032 G /usr/lib/xorg/Xorg 73MiB |
| 0 2156 G /usr/bin/gnome-shell 179MiB |
| 0 4259 G /usr/lib/xorg/Xorg 951MiB |
| 0 4376 G /usr/bin/gnome-shell 268MiB |
| 0 7919 G …AAAAAAAAAAAACAAAAAAAAAA= --shared-files 146MiB |
| 0 10277 G …AAAAAAAAAAAACAAAAAAAAAA= --shared-files 290MiB |
+-----------------------------------------------------------------------------+
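The same header can be read programmatically. The sketch below is only an illustration, assuming nvidia-smi is on your $PATH; keep in mind that nvidia-smi reports the highest CUDA version the installed driver supports, which can differ from the toolkit version nvcc reports.
import re
import subprocess

# Run `nvidia-smi` and grab the "CUDA Version: X.Y" field from the header.
output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
match = re.search(r"CUDA Version:\s*(\d+\.\d+)", output)
print("CUDA version (from nvidia-smi):", match.group(1) if match else "not found")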
What is nvidia-smi?
nvidia-smi (NVSMI) is the NVIDIA System Management Interface program. It provides monitoring and management capabilities for NVIDIA's Tesla, Quadro, GRID, and GeForce GPUs from the Fermi and later architecture families. GeForce Titan series products are supported for most functions, with very limited information provided for the rest of the GeForce brand.
NVSMI is also a cross-platform program that supports all standard Linux distributions supported by the NVIDIA driver, as well as 64-bit Windows versions starting with Windows Server 2008 R2. Metrics can be consumed directly by users via stdout, or saved in CSV and XML formats for scripting purposes.
For more information, check out nvidia-smi's manpage.
Method 3 — cat /usr/local/cuda/version.txt
cat /usr/local/cuda/version.txt
Note that this method may not work on Ubuntu 18.04 if you installed the NVIDIA driver and CUDA from Ubuntu 18.04's own official repository.
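If you want to script this check, the sketch below reads version.txt and falls back to version.json, which newer CUDA releases (11 and later) ship instead. The JSON key names are an assumption and may vary between releases, so treat this as a hedged example rather than a guaranteed layout.
import json
from pathlib import Path

# Older CUDA toolkits ship /usr/local/cuda/version.txt; CUDA 11+ typically
# ships version.json instead. Adjust the base path if CUDA lives elsewhere.
cuda_dir = Path("/usr/local/cuda")
txt_file = cuda_dir / "version.txt"
json_file = cuda_dir / "version.json"

if txt_file.exists():
    print(txt_file.read_text().strip())
elif json_file.exists():
    info = json.loads(json_file.read_text())
    # Key names assumed from CUDA 11.x layouts; may differ in other releases.
    print("CUDA version:", info.get("cuda", {}).get("version", "unknown"))
else:
    print("No CUDA version file found under", cuda_dir)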
What is TensorFlow?
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
TensorFlow has 3 major features:
- Easy model building. Create and train ML models with eager execution, using intuitive high-level APIs such as Keras. Eager execution allows for immediate model iteration and easy debugging.
- Robust ML production anywhere. Models can be quickly trained and distributed in the cloud, on-prem, on the web, or on-device irrespective of the language you use.
- Powerful experimentation for research. A simple and scalable framework for taking new ideas from concept to code, to state-of-the-art models, and to publication faster.
3 ways to check CUDA version for TensorFlow
Time Needed : 5 minutes
There are three ways to identify the CUDA version; none of them is specific to TensorFlow.
- The best way is possibly to check a file.
Run
cat /usr/local/cuda/version.txt
Note: this may not work on Ubuntu 18.04.
- Another solution is the CUDA toolkit's nvcc command.
nvcc --version
- The third way is the NVIDIA driver's nvidia-smi command, which you may already have installed.
Simply run
nvidia-smi
Tools
- nvcc
- nvidia-smi
Materials
- Ubuntu
- TensorFlow