Problem
I’m wondering whether PyTorch is making use of my GPU. It’s possible to see whether the GPU is active during a process with nvidia-smi, but I’d like a way to check this from Python.
Is there a way to accomplish this?
Asked by vinzee
Solution #1
This should work:
import torch
torch.cuda.is_available()
>>> True
torch.cuda.current_device()
>>> 0
torch.cuda.device(0)
>>> <torch.cuda.device at 0x7efce0b03be0>
torch.cuda.device_count()
>>> 1
torch.cuda.get_device_name(0)
>>> 'GeForce GTX 950M'
This tells us that CUDA is available and can be used by one device (GPU), and that Device 0, the GeForce GTX 950M, is the GPU currently being used by PyTorch.
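If you have more than one GPU, here is a minimal sketch that combines the same calls to list every device PyTorch can see:
import torch

if torch.cuda.is_available():
    # enumerate every GPU visible to PyTorch
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))
else:
    print('CUDA is not available')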
Answered by vinzee
Solution #2
I’m adding a method using torch.device because it hasn’t been proposed yet, and it’s pretty useful, especially when initializing tensors on the proper device.
import torch

# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()

# additional info when using cuda
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')
Note: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved in newer PyTorch versions. On older versions, use memory_cached instead.
Output:
Using device: cuda
Tesla K80
Memory Usage:
Allocated: 0.3 GB
Cached: 0.6 GB
As previously stated, the device object can be used to move tensors to the respective device or to create tensors directly on that device, as sketched below. This allows for easy switching between CPU and GPU without having to change the actual code.
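For example (a minimal sketch using the device object defined above):
# move an existing tensor to the chosen device
x = torch.rand(10).to(device)

# or create the tensor directly on the device
y = torch.rand(10, device=device)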
I’m providing some additional information on cached and allocated memory because there have been some questions and confusion about it: torch.cuda.memory_allocated(device=None) returns the current GPU memory occupied by tensors in bytes, and torch.cuda.memory_reserved(device=None) returns the memory managed by the caching allocator. For the device argument, you can either pass a device as described further above in the post, or you can leave it as None and the current device, as returned by torch.cuda.current_device(), will be used.
Additional information: old graphics cards with a CUDA compute capability of 3.0 or lower may be detected, but PyTorch cannot use them! Thanks to hekimgil for pointing this out! – “GPU0 is a GeForce GT 750M with cuda capability 3.0. This GPU is no longer supported by PyTorch since it is too old. We support a minimum cuda capability of 3.5.”
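If you want to check your card’s compute capability from Python, here is a minimal sketch; torch.cuda.get_device_capability returns a (major, minor) tuple:
import torch

if torch.cuda.is_available():
    # e.g. (3, 0) for the GeForce GT 750M mentioned above
    major, minor = torch.cuda.get_device_capability(0)
    print(f'GPU0 has CUDA compute capability {major}.{minor}')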
Answered by MBT
Solution #3
If you want to manually check whether your application is utilizing GPU resources, and to what extent, after you start the training loop, you can use watch in the terminal:
$ watch -n 2 nvidia-smi
The usage statistics will refresh every 2 seconds until you press Ctrl+C.
If you need more control over which GPU stats are shown, you can use a more sophisticated invocation of nvidia-smi with --query-gpu=.... Here’s a simple illustration of what I’m talking about:
$ watch -n 3 nvidia-smi --query-gpu=index,gpu_name,memory.total,memory.used,memory.free,temperature.gpu,pstate,utilization.gpu,utilization.memory --format=csv
This prints the requested statistics as CSV, one row per GPU, refreshed every 3 seconds.
Note: there should be no space between the comma-separated query names in --query-gpu=.... Otherwise those values will be ignored and no statistics will be returned.
You may also verify if PyTorch correctly finds your CUDA installation by running the following command:
In [13]: import torch
In [14]: torch.cuda.is_available()
Out[14]: True
A True status means that PyTorch is configured correctly and can use the GPU, although you still have to move/place the tensors onto it with the relevant statements in your code.
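For example (a minimal sketch; Solution #4 below covers this distinction in more detail):
import torch

x = torch.ones(3)   # lives on the CPU by default
x = x.to('cuda')    # moved to the GPU (assumes is_available() returned True)
print(x.device)     # cuda:0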
If you want to perform this check in Python code, look into this module:
https://github.com/jonsafari/nvidia-ml-py, or download it from PyPI: https://pypi.python.org/pypi/nvidia-ml-py/
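A minimal sketch of what such a check could look like with those bindings, assuming the pynvml module from that package is installed (exact return types, e.g. bytes vs. str for the device name, vary between versions):
import pynvml  # provided by the nvidia-ml-py package

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU
name = pynvml.nvmlDeviceGetName(handle)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .total / .used / .free, in bytes
util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory, in percent
print(name, f'{mem.used / 1024**2:.0f} MiB used, {util.gpu}% utilization')
pynvml.nvmlShutdown()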
Answered by kmario23
Solution #4
Just one little detour from a practical standpoint:
import torch
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
The dev variable now tells us whether to use cuda or the CPU.
When moving to cuda, there is a difference between how models and tensors are handled, which seems strange at first.
import torch
import torch.nn as nn
dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
t1 = torch.randn(1,2)
t2 = torch.randn(1,2).to(dev)
print(t1) # tensor([[-0.2678, 1.9252]])
print(t2) # tensor([[ 0.5117, -3.6247]], device='cuda:0')
t1.to(dev) # returns a new tensor on the GPU; t1 itself is unchanged
print(t1) # tensor([[-0.2678, 1.9252]])
print(t1.is_cuda) # False
t1 = t1.to(dev)
print(t1) # tensor([[-0.2678, 1.9252]], device='cuda:0')
print(t1.is_cuda) # True
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1,2)

    def forward(self, x):
        x = self.l1(x)
        return x

model = M()   # not on cuda
model.to(dev) # is on cuda (all parameters, moved in place)
print(next(model.parameters()).is_cuda) # True
All of this is a bit subtle, but once you grasp it, you’ll be able to deal with it quickly and with less troubleshooting.
Answered by prosti
Solution #5
Check GPU availability for PyTorch as shown on the official site’s Get Started page:
import torch
torch.cuda.is_available()
Reference: PyTorch | Get Started
Answered by TimeSeam
Post is based on https://stackoverflow.com/questions/48152674/how-to-check-if-pytorch-is-using-the-gpu