Creating learnable parameters
In our neural network example, we have two learnable parameters, w and b, and two fixed parameters, x and y. We created the variables x and y in our get_data function. Learnable parameters are created with random initialization and have the requires_grad parameter set to True, unlike x and y, where it is set to False. There are different practices for initializing learnable parameters, which we will explore in the coming chapters. Let's take a look at our get_weights function:
import torch
from torch.autograd import Variable

def get_weights():
    # Learnable parameters: randomly initialized and tracked by autograd
    w = Variable(torch.randn(1), requires_grad=True)
    b = Variable(torch.randn(1), requires_grad=True)
    return w, b
Most of the preceding code is self-explanatory; torch.randn creates a tensor of the given shape, filled with random values drawn from a standard normal distribution.
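To see what requires_grad does, here is a minimal, hypothetical sketch. The get_data function below is only an assumption standing in for the book's version (it builds fixed input and target tensors from synthetic data), and the names y_pred and loss are illustrative rather than the author's. After a backward pass, gradients are populated only for w and b; x and y are not tracked because requires_grad defaults to False.

import torch
from torch.autograd import Variable

def get_data():
    # Hypothetical stand-in for the book's get_data: fixed inputs and targets,
    # created without gradient tracking (requires_grad defaults to False)
    x = Variable(torch.randn(17, 1))
    y = Variable(3 * x + 2 + 0.1 * torch.randn(17, 1))
    return x, y

x, y = get_data()
w, b = get_weights()               # get_weights as defined above

y_pred = x * w + b                 # simple linear model
loss = ((y_pred - y) ** 2).mean()  # mean squared error
loss.backward()                    # autograd computes gradients

print(w.grad, b.grad)              # gradients exist for the learnable parameters
print(x.grad, y.grad)              # None: x and y are not tracked by autograd

Note that Variable reflects the older PyTorch API used throughout this book; in recent PyTorch versions, torch.randn(1, requires_grad=True) creates a learnable tensor directly.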