
PyTorch non-linear activations

PyTorch has most of the common non-linear activation functions implemented for us already, and they can be used like any other layer. Let's see a quick example of how to use the ReLU function in PyTorch:

import torch
from torch.autograd import Variable
from torch.nn import ReLU

sample_data = Variable(torch.Tensor([[1, 2, -1, -1]]))
myRelu = ReLU()
myRelu(sample_data)

Output:

Variable containing:
1 2 0 0
[torch.FloatTensor of size 1x4]

In the preceding example, we take a tensor with two positive and two negative values and apply ReLU to it; ReLU thresholds the negative numbers to 0 and keeps the positive numbers as they are.
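
Other common activations, such as Tanh and Sigmoid, are used in exactly the same way. As a quick sketch on the same sample tensor:

import torch
from torch.autograd import Variable
from torch.nn import Tanh, Sigmoid

sample_data = Variable(torch.Tensor([[1, 2, -1, -1]]))
Tanh()(sample_data)      # squashes each value into the range (-1, 1)
Sigmoid()(sample_data)   # squashes each value into the range (0, 1)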

Now we have covered most of the details required for building a network architecture, let's build a deep learning architecture that can be used to solve real-world problems. In the previous chapter, we used a simple approach so that we could focus only on how a deep learning algorithm works. We will not be using that style to build our architecture anymore; rather, we will be building the architecture in the way it is supposed to be built in PyTorch.
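To give a flavour of that style before we dive in, here is a minimal sketch of a network built by subclassing nn.Module; the class name, layer sizes, and choice of a single hidden layer are illustrative, not the exact architecture we will build:

import torch
import torch.nn as nn
from torch.autograd import Variable

class MyFirstNetwork(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MyFirstNetwork, self).__init__()
        # Layers are declared once in __init__ ...
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.layer2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # ... and wired together in forward, which defines
        # how an input flows through the network.
        out = self.relu(self.layer1(x))
        return self.layer2(out)

net = MyFirstNetwork(input_size=4, hidden_size=3, output_size=2)
net(Variable(torch.Tensor([[1, 2, -1, -1]])))

Declaring layers in __init__ and composing them in forward is the pattern we will follow for the rest of the book.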
