- Deep Learning with PyTorch Quick Start Guide
- David Julian
Slicing, indexing, and reshaping
torch.Tensor has most of the attributes and functionality of a NumPy array. For example, we can slice and index tensors in the same way as NumPy arrays:

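As a minimal sketch, assume x is a 2 x 3 tensor with placeholder values:

```python
import torch

# Assumed example tensor: 2 rows, 3 columns
x = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

print(x[0])        # the first element (row) of x: tensor([1., 2., 3.])
print(x[1][0:2])   # a slice of the second row, x10 and x11: tensor([4., 5.])
```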
Here, we have printed out the first element of x, written as x0, and in the second example, we have printed out a slice of the second element of x; in this case, x10 and x11.
If you have not come across slicing and indexing before, you may want to look at this again. Note that indexing begins at 0, not 1, and we have kept our subscript notation consistent with this. Notice also that the slice [1][0:2] selects the elements x10 and x11; it excludes the ending index, index 2, specified in the slice.
We can create a reshaped view of an existing tensor using the view() function, which returns a new tensor that shares the same underlying data. The following are three examples:

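As a sketch, reusing the assumed six-element tensor x from above (the exact target shapes are assumptions, chosen to match the discussion that follows):

```python
print(x.view(-1, 2))   # PyTorch works out the number of rows: shape 3 x 2
print(x.view(3, 2))    # explicit 3 x 2
print(x.view(6, 1))    # explicit 6 x 1, a single column
```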
It is pretty clear what (3, 2) and (6, 1) do, but what about the -1 in the first example? This is useful if you know how many columns you require but do not know how many rows they will fit into. Passing -1 here tells PyTorch to calculate the number of rows required. Using it without another dimension simply flattens the tensor into a single row (a one-dimensional tensor). You could rewrite the second example as follows if you did not know the input tensor's shape but knew that it needs to have three rows:

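For example, again assuming the six-element tensor x, the following is equivalent to view(3, 2):

```python
print(x.view(3, -1))   # three rows; PyTorch infers the number of columns
```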
An important operation is swapping axes, or transposing. For a two-dimensional tensor, we can use tensor.transpose(), passing it the two axes we want to swap. In this example, the original 2 x 3 tensor becomes a 3 x 2 tensor; the rows simply become the columns:

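A sketch, again assuming the 2 x 3 tensor x defined above:

```python
print(x.transpose(0, 1))   # swap axes 0 and 1: the 2 x 3 tensor becomes 3 x 2
```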
In PyTorch, transpose() can only swap two axes at once. We could apply transpose() in multiple steps; however, a more convenient way is to use permute(), passing it the order in which we want all the axes. The following example should make this clear:

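The following sketch uses an assumed three-dimensional tensor; the shape and axis order are illustrative choices, not taken from the original listing:

```python
y = torch.arange(6.0).view(1, 2, 3)   # a small tensor of shape (1, 2, 3)
print(y.permute(2, 1, 0).shape)       # reorder all three axes at once: torch.Size([3, 2, 1])
```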
When we are considering tensors in two dimensions, we can visualize them as flat tables. When we move to higher dimensions, this visual representation becomes impossible. We simply run out of spatial dimensions. Part of the magic of deep learning is that it does not matter much in terms of the mathematics involved. Real-world features are each encoded into a dimension of a data structure. So, we may be dealing with tensors of potentially thousands of dimensions. Although it might be disconcerting, most of the ideas that can be illustrated in two or three dimensions work just as well in higher dimensions.