PyTorch non-linear activations
PyTorch has most of the common non-linear activation functions already implemented, and they can be used like any other layer. Let's see a quick example of how to use the ReLU function in PyTorch:
import torch
from torch.autograd import Variable
from torch.nn import ReLU

sample_data = Variable(torch.Tensor([[1,2,-1,-1]]))  # two positive, two negative values
myRelu = ReLU()
myRelu(sample_data)
Output:
Variable containing:
1 2 0 0
[torch.FloatTensor of size 1x4]
In the preceding example, we take a tensor with two positive values and two negative values and apply ReLU to it, which clamps the negative numbers to 0 and leaves the positive numbers unchanged.
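For reference, ReLU is simply the element-wise operation max(0, x). A minimal sketch showing two equivalent ways to express it; note it uses the modern tensor API rather than the Variable wrapper shown above, which is an assumption about the PyTorch version available:

import torch
import torch.nn.functional as F

x = torch.tensor([[1., 2., -1., -1.]])

# Both lines compute element-wise max(0, x) and produce the same result
out_functional = F.relu(x)
out_clamp = x.clamp(min=0)

print(out_functional)  # tensor([[1., 2., 0., 0.]])
print(out_clamp)       # tensor([[1., 2., 0., 0.]])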
Now that we have covered most of the details required for building a network architecture, let's build a deep learning architecture that can be used to solve real-world problems. In the previous chapter, we used a simple approach so that we could focus only on how a deep learning algorithm works. We will not be using that style to build our architecture anymore; rather, we will build the architecture the way it is supposed to be built in PyTorch.
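As a preview of that style, here is a minimal, hypothetical sketch of a small network written as an nn.Module subclass, the idiomatic way to define architectures in PyTorch. The class name, layer names, and sizes (MyFirstNetwork, fc1, fc2, the 4-8-2 dimensions) are illustrative assumptions, not the book's actual architecture:

import torch
import torch.nn as nn

class MyFirstNetwork(nn.Module):
    # Hypothetical example: layer names and sizes are illustrative only
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()  # non-linear activation used as a layer
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Apply the first linear layer, then ReLU, then the second linear layer
        x = self.fc1(x)
        x = self.relu(x)
        return self.fc2(x)

net = MyFirstNetwork(input_size=4, hidden_size=8, output_size=2)
print(net(torch.randn(1, 4)))

Defining layers in __init__ and composing them in forward is what lets PyTorch track parameters automatically, which is the key difference from the manual approach used in the previous chapter.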