
Slicing, indexing, and reshaping

torch.Tensor has most of the attributes and functionality of NumPy. For example, we can slice and index tensors in the same way as NumPy arrays:
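As a minimal sketch, assume x is the following 2 x 3 tensor (the values here are chosen purely for illustration):

import torch

# A 2 x 3 tensor; the values are arbitrary
x = torch.tensor([[5.0, 7.0, 12.0],
                  [11.0, 9.0, 15.0]])

print(x[0])       # the first row: tensor([ 5.,  7., 12.])
print(x[1][0:2])  # a slice of the second row: tensor([11.,  9.])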

Here, we have printed out the first element of x, written as x₀, and in the second example, we have printed out a slice of the second element of x; in this case, the elements x₁₀ and x₁₁.

If you have not come across slicing and indexing before, you may want to study this example again. Note that indexing begins at 0, not 1, and we have kept our subscript notation consistent with this. Notice also that the slice [1][0:2] contains the elements x₁₀ and x₁₁, inclusive. It excludes the ending index, index 2, specified in the slice.

We can create a reshaped view of an existing tensor using the view() function. The following are three examples:
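Continuing with the same 2 x 3 tensor x (six elements in total), a sketch of three such calls might look like this:

print(x.view(-1))    # a single row containing all six elements
print(x.view(3, 2))  # reshaped to 3 rows and 2 columns
print(x.view(6, 1))  # reshaped to 6 rows and 1 column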

It is pretty clear what (3,2) and (6,1) do, but what about the -1 in the first example? Indicating -1 for a dimension tells PyTorch to calculate the size of that dimension from the tensor's total number of elements; this is useful if, say, you know how many columns you require but do not know how many rows they will fit into. Using -1 on its own simply creates a one-dimensional tensor containing all of the elements. You could rewrite example two, mentioned previously, as follows if you did not know the input tensor's shape but knew that it needs to have three rows:
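For instance, sticking with the six-element tensor x from before:

print(x.view(3, -1))  # PyTorch infers the second dimension: shape (3, 2)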

An important operation is swapping axes, or transposing. For a two-dimensional tensor, we can use tensor.transpose(), passing it the two axes we want to swap. In this example, the original 2 x 3 tensor becomes a 3 x 2 tensor. The rows simply become the columns:
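A sketch, again using the 2 x 3 tensor x defined earlier:

print(x.shape)         # torch.Size([2, 3])
y = x.transpose(0, 1)  # swap axis 0 and axis 1
print(y.shape)         # torch.Size([3, 2])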

In PyTorch, transpose() can only swap two axes at once. We could apply transpose() in multiple steps; however, a more convenient way is to use permute(), passing it the desired ordering of all the axes. The following example should make this clear:
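A sketch with a three-dimensional tensor (the shape here is an assumption, chosen for illustration):

z = torch.zeros(1, 2, 3)         # a 1 x 2 x 3 tensor of zeros
print(z.permute(2, 1, 0).shape)  # torch.Size([3, 2, 1]): all three axes reordered at once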

When we are considering tensors in two dimensions, we can visualize them as flat tables. When we move to higher dimensions, this visual representation becomes impossible; we simply run out of spatial dimensions. Part of the magic of deep learning is that, in terms of the mathematics involved, this does not matter much. Real-world features are each encoded into a dimension of a data structure, so we may be dealing with tensors of potentially thousands of dimensions. Although this might be disconcerting, most of the ideas that can be illustrated in two or three dimensions work just as well in higher dimensions.
