
Tensors on GPU

We have learned how to represent different forms of data as tensors. Some of the common operations we perform once we have data in the form of tensors are addition, subtraction, multiplication, dot product, and matrix multiplication. All of these operations can be performed on either the CPU or the GPU. PyTorch provides a simple function called cuda() to copy a tensor from the CPU to the GPU. We will take a look at some of these operations and compare the performance of matrix multiplication on the CPU and the GPU.

Tensor addition can be obtained by using the following code:

import torch

#Various ways you can perform tensor addition
a = torch.rand(2,2)
b = torch.rand(2,2)
c = a + b
d = torch.add(a,b)
#In-place addition: adds 5 to every element of a
a.add_(5)

#Element-wise multiplication of tensors

a*b
a.mul(b)
#For in-place multiplication
a.mul_(b)
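The dot product mentioned above works on 1-D tensors. As a minimal sketch (the tensor values here are made up for illustration), torch.dot returns the sum of the element-wise products:

```python
import torch

# Two hypothetical 1-D tensors
u = torch.tensor([1.0, 2.0, 3.0])
v = torch.tensor([4.0, 5.0, 6.0])

# dot(u, v) = 1*4 + 2*5 + 3*6 = 32
d = torch.dot(u, v)
print(d.item())
```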

For tensor matrix multiplication, let's compare the code performance on the CPU and the GPU. Any tensor can be moved to the GPU by calling its .cuda() function.

Matrix multiplication on the CPU runs as follows:

a = torch.rand(10000,10000)
b = torch.rand(10000,10000)

a.matmul(b)

Time taken: 3.23 s

#Move the tensors to GPU
a = a.cuda()
b = b.cuda()

a.matmul(b)

Time taken: 11.2 μs

These fundamental operations of addition, subtraction, and matrix multiplication can be used to build complex operations, such as a Convolutional Neural Network (CNN) and a recurrent neural network (RNN), which we will learn about in later chapters of the book.
