How to check if the GPU is really used?
Introduction
If your environment is correctly configured and you're using TensorFlow as the backend, you don't need any special configuration in your Python program to use the GPU.
If Keras detects an available GPU, it will use it.
In some cases, however, you may want to verify that the GPU is indeed being used.
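If you just want a quick yes/no answer from Python, you can ask TensorFlow directly. This is a minimal sketch assuming a TensorFlow 1.x installation, which is what the examples below were run on; in recent TensorFlow 2.x releases the equivalent check is tf.config.list_physical_devices('GPU').

import tensorflow as tf

# True if TensorFlow was built with GPU support and at least one GPU is visible
print(tf.test.is_gpu_available())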
With this command, you can check whether the GPUs are enabled and how heavily they are being utilized:
nvidia-smi
and this is an example of the output:
nvidia-smi
Sun Mar 24 10:14:11 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 415.25       Driver Version: 415.25       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-SXM2...  Off  | 00000000:5E:00.0 Off |                    0 |
| N/A   41C    P0    41W / 300W |  15517MiB / 16280MiB |      9%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-SXM2...  Off  | 00000000:86:00.0 Off |                    0 |
| N/A   37C    P0    40W / 300W |    357MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      3258      C   /home/ubuntu/anaconda3/bin/python          15507MiB |
|    1      3258      C   /home/ubuntu/anaconda3/bin/python            347MiB |
+-----------------------------------------------------------------------------+
In the example, you can see that one GPU is being used, with a utilization of 9%. The exact figure depends on your network and on the kind of operations you're running.
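If you want to watch the utilization change while your training job runs, nvidia-smi can also refresh its report at a fixed interval (here, every second):

nvidia-smi -l 1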
GPU usage in a Convolutional Network
A convolutional network is far more complex than the one used in the example above.
In this case, we expect a higher usage of the GPU.
Take, for example, the notebook from Chollet's book: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.1-introduction-to-convnets.ipynb
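The model trained in that notebook is, roughly, the small MNIST convnet sketched below. This is a condensed sketch rather than the notebook's exact code, and it assumes the standalone keras package used in the book:

from keras import layers, models
from keras.datasets import mnist
from keras.utils import to_categorical

# Small convnet: three convolution/pooling stages followed by a dense classifier
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

# MNIST digits, reshaped to (28, 28, 1) and scaled to [0, 1]
(train_images, train_labels), _ = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
train_labels = to_categorical(train_labels)

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, batch_size=64)

While this model trains, nvidia-smi reports something like: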
nvidia-smi
Sun Mar 24 17:25:15 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 415.25       Driver Version: 415.25       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-SXM2...  Off  | 00000000:5E:00.0 Off |                    0 |
| N/A   39C    P0    50W / 300W |  15633MiB / 16280MiB |     22%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P100-SXM2...  Off  | 00000000:86:00.0 Off |                    0 |
| N/A   35C    P0    39W / 300W |    357MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
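This time the utilization of GPU 0 rises to 22%, confirming that the convolutional network keeps the GPU noticeably busier than the previous example.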
A simple program to check if GPUs are used
This simple program can be run even in a Notebook:
import tensorflow as tf
from tensorflow.python.client import device_lib

# List every device TensorFlow can see; GPUs show up as /device:GPU:N or /device:XLA_GPU:N
device_lib.list_local_devices()
And this is an excerpt of the expected output:
incarnation: 12127659845125631120
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 17934050097490532938
physical_device_desc: "device: XLA_GPU device"
, name: "/device:XLA_GPU:1"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
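If you want to go one step further and see which device each individual operation is placed on, TensorFlow can log its placement decisions. The following is a minimal sketch assuming the TensorFlow 1.x session API, consistent with the device list above:

import tensorflow as tf

# Ask TensorFlow to print, for every operation, the device it was placed on
config = tf.ConfigProto(log_device_placement=True)

with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [2.0, 2.0]], name='b')
    c = tf.matmul(a, b, name='matmul')
    # With a GPU available, the placement log shows the MatMul on device:GPU:0
    print(sess.run(c))

If the matmul operation shows up on the CPU instead, the GPU is not actually being used.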