- Deep Learning with PyTorch Quick Start Guide
- David Julian
Slicing, indexing, and reshaping
torch.Tensor has most of the attributes and functionality of NumPy arrays. For example, we can slice and index tensors in the same way as NumPy arrays:
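The book's snippet is not reproduced here; the following is a minimal sketch, assuming a 2 x 3 tensor named x with illustrative values:

```python
import torch

# Assumed example tensor; the values in the original snippet may differ
x = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])

print(x[0])        # first element (row) of x -> tensor([1, 2, 3])
print(x[1][0:2])   # slice of the second row -> tensor([4, 5])
```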

Here, we have printed out the first element of x, written as x0, and in the second example, we have printed out a slice of the second element of x; in this case, x10 and x11.
If you have not come across slicing and indexing, you may want to look at this again. Note that indexing begins at 0, not 1, and we have kept our subscript notation consistent with this. Notice also that the slice [1][0:2] is the elements x10 and x11, inclusive. It excludes the ending index, index 2, specified in the slice.
We can create a reshaped view of an existing tensor using the view() function (the underlying data is shared, not copied). The following are three examples:
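The three calls are not reproduced here; a plausible sketch, reusing the assumed 2 x 3 tensor x (six elements), is:

```python
print(x.view(-1, 2))   # -1 asks PyTorch to infer the number of rows -> 3 x 2
print(x.view(3, 2))    # explicit 3 x 2 reshape
print(x.view(6, 1))    # a single column of six elements
```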

It is pretty clear what (3,2) and (6,1) do, but what about the -1 in the first example? This is useful if you know how many columns you require but do not know how many rows they will fit into. Using -1 here tells PyTorch to calculate the number of rows required. Using -1 on its own, with no other dimension, simply flattens the tensor into a single row. You could rewrite example two, mentioned previously, as follows if you did not know the input tensor's shape but knew that it needed three rows:
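Continuing with the same assumed tensor, example two could be rewritten as:

```python
print(x.view(3, -1))   # three rows; PyTorch infers that two columns are needed
```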

An important operation is swapping axes, or transposing. For a two-dimensional tensor, we can use tensor.transpose(), passing it the two axes we want to swap. In this example, the original 2 x 3 tensor becomes a 3 x 2 tensor. The rows simply become the columns:
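A sketch of the transpose, again using the assumed 2 x 3 tensor x:

```python
print(x.transpose(0, 1))   # swap axes 0 and 1: the 2 x 3 tensor becomes 3 x 2
```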

In PyTorch, transpose() can only swap two axes at once. We could use transpose() in multiple steps; however, a more convenient way is to use permute(), passing it the desired ordering of all the axes. The following example should make this clear:
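A minimal sketch of permute(), using an assumed three-dimensional tensor rather than the book's original example:

```python
# Assumed 3-D tensor of shape (1, 2, 3)
y = torch.zeros(1, 2, 3)

# permute() takes the desired ordering of all the axes;
# here axes 0 and 2 trade places, giving shape (3, 2, 1)
print(y.permute(2, 1, 0).shape)   # torch.Size([3, 2, 1])
```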

When we are considering tensors in two dimensions, we can visualize them as flat tables. When we move to higher dimensions, this visual representation becomes impossible. We simply run out of spatial dimensions. Part of the magic of deep learning is that it does not matter much in terms of the mathematics involved. Real-world features are each encoded into a dimension of a data structure. So, we may be dealing with tensors of potentially thousands of dimensions. Although it might be disconcerting, most of the ideas that can be illustrated in two or three dimensions work just as well in higher dimensions.