17. GPUs#

17.1. Computing Devices#

We can query the available GPU devices:

using CUDA, Flux

CUDA.devices()
CUDA.DeviceIterator() for 1 devices:
0. NVIDIA GeForce GTX 970
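Code that assumes a GPU will error on a CPU-only machine. A minimal sketch of a portable fallback, using CUDA.jl's `CUDA.functional()` check together with Flux's `gpu`/`cpu` device functions (on a machine without a working CUDA setup, `gpu` simply leaves data on the CPU):

```julia
using CUDA, Flux

# Pick a device function depending on whether the CUDA stack is usable.
device = CUDA.functional() ? gpu : cpu
x = device(ones(Float32, 3, 2))  # CuArray on a GPU machine, plain Array otherwise
```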

17.2. Vectors and GPUs#

17.2.1. Storage on the GPU#

There are several ways to store an array on the GPU. For example, we can move the data to a GPU when creating the array. Next, we create the array variable X on the first GPU with the `cu` function. An array created on a GPU only consumes the memory of that GPU. We can use the `nvidia-smi` command to view GPU memory usage. In general, we need to make sure that we do not create data that exceeds the GPU memory limit.

X = cu(ones(3,2))
3×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:
 1.0  1.0
 1.0  1.0
 1.0  1.0
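On a machine with several GPUs, we can also choose which device receives the allocation. A hedged sketch using CUDA.jl's device-management API (`CUDA.device!` selects the current device by its zero-based index, matching the listing from `CUDA.devices()` above; `CUDA.device` queries where an array lives):

```julia
using CUDA

CUDA.device!(0)          # make GPU 0 the current device for this task
X = cu(ones(3, 2))       # allocated on the current device

CUDA.device(X)           # returns the device that holds X
CUDA.memory_status()     # prints used/total memory of the current device
```

`CUDA.memory_status()` offers a quick in-session alternative to running `nvidia-smi` in a shell.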

17.2.2. Neural Networks and GPUs#

Similarly, a neural network model can be placed on a device. The following code moves the model parameters to the GPU.

model = Chain(Dense(3=>1))
model = model |> gpu
Chain(
  Dense(3 => 1),                        # 4 parameters
) 

When the input is an array stored on the GPU, the model will calculate the result on the same GPU.

model(X)
1×2 CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}:
 0.0954295  0.0954295
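Conversely, a result computed on the GPU must be copied back to host memory before it can be used by CPU-only code (plotting, saving to disk, and so on). A minimal sketch, assuming the `model` and `X` defined above, using Flux's `cpu` function:

```julia
using CUDA, Flux

y = model(X)     # CuArray: the result lives on the GPU
y_cpu = cpu(y)   # copy back to a plain Array in host memory
# Array(y) performs the same device-to-host copy
```

Keep in mind that each such transfer crosses the PCIe bus, so it is best to move data back only when needed rather than after every operation.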