
Tensors on GPU

We have learned how to represent different forms of data as tensors. Some of the common operations we perform once we have data in the form of tensors are addition, subtraction, multiplication, dot product, and matrix multiplication. All of these operations can be performed on either the CPU or the GPU. PyTorch provides a simple method called cuda() to copy a tensor from the CPU to the GPU. We will take a look at some of these operations and compare the performance of matrix multiplication on the CPU and the GPU.

Tensor addition can be performed using the following code:

import torch

# Various ways you can perform tensor addition
a = torch.rand(2, 2)
b = torch.rand(2, 2)
c = a + b             # operator form
d = torch.add(a, b)   # functional form
# In-place addition: adds 5 to every element of a
a.add_(5)

# Element-wise multiplication of tensors
a * b
a.mul(b)
# In-place multiplication: a is overwritten with a * b
a.mul_(b)
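
The dot product mentioned earlier operates on one-dimensional tensors. A minimal sketch (the tensor names here are purely illustrative):

import torch

# Dot product of two 1-D tensors
x = torch.rand(3)
y = torch.rand(3)
torch.dot(x, y)   # scalar tensor: sum of element-wise products
x @ y             # the @ operator gives the same result for 1-D tensors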

For tensor matrix multiplication, let's compare the performance of the code on the CPU and the GPU. Any tensor can be moved to the GPU by calling the .cuda() method.
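
Note that calling cuda() on a machine without a CUDA-capable GPU raises an error, so in practice it is worth guarding the move. A minimal sketch:

import torch

# Fall back to the CPU when no CUDA device is present
if torch.cuda.is_available():
    a = torch.rand(2, 2).cuda()
else:
    a = torch.rand(2, 2)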

Multiplication on the CPU, followed by the same multiplication on the GPU, runs as follows:

a = torch.rand(10000, 10000)
b = torch.rand(10000, 10000)

# Matrix multiplication on the CPU
a.matmul(b)

Time taken: 3.23 s

# Move the tensors to the GPU
a = a.cuda()
b = b.cuda()

# Matrix multiplication on the GPU
a.matmul(b)

Time taken: 11.2 μs
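
One caveat: CUDA kernels are launched asynchronously, so a naive timing of the GPU call mostly measures launch overhead rather than the multiplication itself. A sketch of how one might time it more accurately (exact numbers will vary with hardware):

import time
import torch

a = torch.rand(10000, 10000).cuda()
b = torch.rand(10000, 10000).cuda()

start = time.time()
a.matmul(b)
torch.cuda.synchronize()   # wait for the GPU to finish before reading the clock
print(time.time() - start)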

These fundamental operations of addition, subtraction, and matrix multiplication can be used to build complex operations, such as a convolutional neural network (CNN) and a recurrent neural network (RNN), which we will learn about in later chapters of the book.
