
Our first neural network

We present our first neural network, which learns how to map training examples (input array) to targets (output array). Let's assume that we work for one of the largest online companies, Wondermovies, which serves videos on demand. Our training dataset contains a feature that represents the average hours users spent watching movies on the platform, and we would like to predict how much time each user will spend on the platform in the coming week. It's just an imaginary use case; don't think too much about it. Some of the high-level activities for building such a solution are as follows:

  • Data preparation: The get_data function prepares the tensors (arrays) containing input and output data
  • Creating learnable parameters: The get_weights function provides us with tensors containing random values that we will optimize to solve our problem
  • Network model: The simple_network function produces the output for the input data by applying a linear rule: it multiplies the weights by the input data and adds the bias term (y = wx + b)
  • Loss: The loss_fn function tells us how well the model is performing
  • Optimizer: The optimize function adjusts the randomly initialized weights so that the model predicts the target values more accurately (a minimal sketch of these helper functions follows this list)
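
Before we unpack each of these functions, here is a minimal sketch of what they could look like in PyTorch. The random training data, the tensor shapes, and the learning_rate value below are assumptions made purely for illustration, not the exact implementations we will build. The sketch also assumes everything lives in a single script, so that simple_network, loss_fn, and optimize can refer to w and b as module-level variables, just as the training loop further down does.

import torch

learning_rate = 1e-4  # assumed step size, chosen only for illustration

def get_data():
    # Hypothetical data: one feature (average viewing hours) per user and
    # a noisy linear target standing in for next week's viewing time.
    x = torch.rand(17, 1) * 10
    y = 2.5 * x.squeeze() + 1.0 + torch.randn(17) * 0.5
    return x, y

def get_weights():
    # Randomly initialized learnable parameters; requires_grad=True tells
    # PyTorch to track gradients for them during the backward pass.
    w = torch.randn(1, requires_grad=True)
    b = torch.randn(1, requires_grad=True)
    return w, b

def simple_network(x):
    # Linear rule y = wx + b; w and b are the module-level tensors
    # returned by get_weights().
    return torch.matmul(x, w) + b

def loss_fn(y, y_pred):
    # Sum of squared differences; backward() computes gradients for w and b.
    loss = (y_pred - y).pow(2).sum()
    for param in (w, b):
        if param.grad is not None:
            param.grad.zero_()
    loss.backward()
    return loss.item()

def optimize(learning_rate):
    # One step of gradient descent: nudge w and b against their gradients.
    with torch.no_grad():
        w -= learning_rate * w.grad
        b -= learning_rate * b.grad

With definitions along these lines in place, the training loop shown below runs end to end: loss_fn computes the gradients and optimize moves w and b a small step against them on every iteration.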

If you are new to machine learning, do not worry; we will understand exactly what each function does by the end of the chapter. These functions abstract away PyTorch code to make it easier for us to understand, and we will dive into each of them in detail. The high-level activities listed above are common to most machine learning and deep learning problems. Later chapters in the book discuss techniques that can be used to improve each function to build useful applications.

Let's consider the following linear regression equation for our neural network: y = wx + b, where w and b are the learnable parameters.

Let's write our first neural network in PyTorch:

x, y = get_data()  # x - represents training data, y - represents target variables

w, b = get_weights()  # w, b - learnable parameters

for i in range(500):
    y_pred = simple_network(x)  # function which computes wx + b
    loss = loss_fn(y, y_pred)  # calculates sum of the squared differences of y and y_pred
    if i % 50 == 0:
        print(loss)
    optimize(learning_rate)  # adjust w, b to minimize the loss

By the end of this chapter, you will have an idea of what is happening inside each function.
