
Tensors on GPU

We have learned how to represent different forms of data as tensors. Some common operations we perform once we have data in tensor form are addition, subtraction, multiplication, dot product, and matrix multiplication. All of these operations can be performed on either the CPU or the GPU. PyTorch provides a simple method called cuda() that copies a tensor from the CPU to the GPU. We will take a look at some of these operations and compare the performance of matrix multiplication on the CPU and the GPU.
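Before moving tensors to the GPU, it is worth confirming that a CUDA device is actually visible to PyTorch. The following minimal sketch (the variable name x is illustrative) uses torch.cuda.is_available() for that check:

import torch

#Check whether a CUDA-capable GPU is available
if torch.cuda.is_available():
    x = torch.rand(2,2).cuda()   #Copy the tensor to the GPU
    print(x.is_cuda)             #True: the tensor now lives on the GPU
else:
    print("No GPU available; tensors stay on the CPU")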

Tensor addition can be obtained by using the following code:

#Various ways you can perform tensor addition
import torch

a = torch.rand(2,2)
b = torch.rand(2,2)
c = a + b
d = torch.add(a,b)
#For in-place addition; adds 5 to every element of a
a.add_(5)

#Element-wise multiplication of tensors

a * b
a.mul(b)
#For in-place multiplication; the result is stored in a
a.mul_(b)
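Subtraction and the dot product, also mentioned earlier, work through similar functions. A quick sketch (the tensors here are illustrative):

#Subtraction works like addition
e = a - b
f = torch.sub(a,b)

#Dot product of two 1-D tensors
u = torch.rand(3)
v = torch.rand(3)
torch.dot(u,v)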

For tensor matrix multiplication, let's compare the performance of the code on the CPU and the GPU. Any tensor can be moved to the GPU by calling the .cuda() function.

First, multiplication on the CPU runs as follows:

a = torch.rand(10000,10000)
b = torch.rand(10000,10000)

a.matmul(b)

Time taken: 3.23 s

#Move the tensors to GPU
a = a.cuda()
b = b.cuda()

a.matmul(b)

Time taken: 11.2 μs
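One caveat about the GPU figure: CUDA kernels launch asynchronously, so a naive timing mostly measures the kernel launch rather than the full multiplication. A minimal sketch of a fairer measurement, assuming a CUDA device is available, synchronizes before stopping the clock:

import time
import torch

a = torch.rand(10000,10000).cuda()
b = torch.rand(10000,10000).cuda()

torch.cuda.synchronize()   #Make sure the copies to the GPU have finished
start = time.time()
a.matmul(b)
torch.cuda.synchronize()   #Wait for the multiplication kernel to complete
print("GPU matmul took {:.4f} s".format(time.time() - start))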

These fundamental operations of addition, subtraction, and matrix multiplication can be used to build complex operations, such as a convolutional neural network (CNN) and a recurrent neural network (RNN), which we will learn about in later chapters of the book.
