- Deep Learning with PyTorch
- Vishnu Subramanian
Network implementation
As we now have all the parameters (x, w, b, and y) required to implement the network, we perform a matrix multiplication between x and w and then add b to the result; that gives us the predicted y. The function is implemented as follows:
def simple_network(x):
    # w and b are the weight and bias tensors created earlier in the chapter
    y_pred = torch.matmul(x, w) + b
    return y_pred
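The function relies on x, w, and b already existing. As a minimal, self-contained sketch of how they might be created (the shapes here are assumptions chosen to match the nn.Linear(17, 1) example below, not the book's actual dataset sizes):
import torch

x = torch.randn(20, 17)                     # hypothetical batch of 20 samples with 17 features
w = torch.randn(17, 1, requires_grad=True)  # learnable weights
b = torch.randn(1, requires_grad=True)      # learnable bias
y_pred = simple_network(x)                  # predicted y, shape (20, 1)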
PyTorch also provides a higher-level abstraction in torch.nn, called layers, which takes care of the initialization and the operations that underlie most common neural network techniques. We are using the lower-level operations here to understand what happens inside these functions. In later chapters, namely Chapter 5, Deep Learning for Computer Vision, and Chapter 6, Deep Learning with Sequence Data and Text, we will rely on the PyTorch abstractions to build complex neural networks and functions. The previous model can be represented as a torch.nn layer, as follows:
f = nn.Linear(17,1) # Much simpler.
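To make the layer example runnable against the hypothetical x above (the import and the call are additions for illustration, not the book's code), the layer manages its own weight and bias and applies the same computation when called:
from torch import nn

f = nn.Linear(17, 1)   # creates a (1, 17) weight and a (1,) bias internally
y_pred = f(x)          # equivalent to x @ f.weight.T + f.bias, shape (20, 1)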
Now that we have calculated the y values, we need to know how good our model is, which is the job of the loss function.
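As a preview of what that looks like in code (a sketch only; the loss actually used is defined in the next section), a squared-error loss compares the predictions with hypothetical true targets y:
y = torch.randn(20, 1)             # hypothetical true targets
loss = (y_pred - y).pow(2).sum()   # sum of squared errors
print(loss.item())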