Linear regression
We have already introduced linear regression in Chapter 1, Machine Learning – an Introduction. To recap, in vector notation, the output of a linear regression algorithm is a single value, y, equal to the dot product of the input vector x and the weight vector w: $y = \mathbf{x} \cdot \mathbf{w}$. As we now know, linear regression is a special case of a neural network; that is, it's a single neuron with the identity activation function. In this section, we'll learn how to train linear regression with gradient descent and, in the following sections, we'll extend it to training more complex models. You can see how gradient descent works in the following code block:

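A minimal NumPy sketch of the procedure (initialize the weights, then repeatedly compute the loss and adjust the weights) might look as follows; the function name, initialization, learning rate, and epoch count are illustrative assumptions rather than the book's exact listing:

```python
import numpy as np

def train_linear_regression(x, t, learning_rate=0.01, n_epochs=100):
    """Fit weights w so that x.dot(w) approximates t, using batch gradient descent."""
    w = np.random.randn(x.shape[1]) * 0.01   # start from small random weights
    for epoch in range(n_epochs):
        y = x.dot(w)                         # predictions: y = x . w for every sample
        error = y - t                        # per-sample differences y^i - t^i
        loss = np.mean(error ** 2)           # MSE loss J (could be monitored for convergence)
        grad = 2 * x.T.dot(error) / len(t)   # dJ/dw, derived later in this section
        w -= learning_rate * grad            # move against the gradient
    return w
```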
At first, this might look scary, but fear not! Behind the scenes, it's very simple and straightforward mathematics (I know that sounds even scarier!). But let's not lose sight of our goal, which is to adjust the weights, w, in a way that helps the algorithm predict the target values. To do this, we first need to know how the output $y^i$ differs from the target value $t^i$ for each sample of the training dataset (we use superscript notation to mark the i-th sample). We'll use the mean-squared error (MSE) loss function, which is equal to the mean value of the squared differences $y^i - t^i$ over all samples (the total number of samples in the training set is n): $J(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} \left(y^i - t^i\right)^2$. We'll denote the MSE with J for brevity, which also underscores that other loss functions could be used in its place. Each $y^i$ is a function of w, and therefore, J is also a function of w. As we mentioned previously, the loss function J represents a hypersurface with a dimension equal to the dimension of w (we are implicitly also considering the bias). To illustrate this, imagine that we have only one input value, x, and a single weight, w. We can see how the MSE changes with respect to w in the following diagram:

Our goal is to minimize J, which means finding the w for which J reaches its global minimum. To do this, we need to know whether J increases or decreases when we modify w or, in other words, the first derivative (or gradient) of J with respect to w:
- In the general case, where we have multiple inputs and weights, we can calculate the partial derivative with respect to each weight $w_j$ using the following formula:

$$\frac{\partial J}{\partial w_j} = \frac{\partial}{\partial w_j} \frac{1}{n} \sum_{i=1}^{n} \left(y^i - t^i\right)^2$$
- And to move toward the minimum, we need to move in the direction opposite to $\frac{\partial J}{\partial w_j}$ for each $w_j$.
- Let's calculate the derivative:

$$\frac{\partial J}{\partial w_j} = \frac{\partial}{\partial w_j} \frac{1}{n} \sum_{i=1}^{n} \left(y^i - t^i\right)^2 = \frac{2}{n} \sum_{i=1}^{n} \left(y^i - t^i\right) \frac{\partial y^i}{\partial w_j}$$

If $y^i = \sum_{j} x_j^i w_j$, then $\frac{\partial y^i}{\partial w_j} = x_j^i$ and, therefore,

$$\frac{\partial J}{\partial w_j} = \frac{2}{n} \sum_{i=1}^{n} \left(y^i - t^i\right) x_j^i$$
- Now that we have calculated the partial derivatives, we'll update the weights with the following update rule:

$$w_j \rightarrow w_j - \eta \frac{\partial J}{\partial w_j}$$
Here, η is the learning rate: it determines the size of the weight adjustment made at each update step as new data arrives.
- We can write the update rule in matrix (vector) form as follows:

$$\mathbf{w} \rightarrow \mathbf{w} - \eta \nabla J(\mathbf{w})$$
Here, $\nabla$, also called nabla, represents the vector of partial derivatives.

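As a quick sanity check of the vector-form gradient we just derived, the analytic formula can be compared against a finite-difference approximation of the loss. This is an illustrative sketch, not part of the book's code; the toy data, epsilon, and tolerance are assumptions:

```python
import numpy as np

def mse_loss(w, x, t):
    # J(w): mean squared difference between predictions x.w and targets t
    return np.mean((x.dot(w) - t) ** 2)

def analytic_grad(w, x, t):
    # Vector form of dJ/dw_j = (2/n) * sum_i (y^i - t^i) * x_j^i
    return 2 * x.T.dot(x.dot(w) - t) / len(t)

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))      # 50 samples, 3 input features (toy data)
t = rng.normal(size=50)
w = rng.normal(size=3)

eps = 1e-6
unit = np.eye(3)                  # unit vectors for perturbing one weight at a time
numeric = np.array([
    (mse_loss(w + eps * unit[j], x, t) - mse_loss(w - eps * unit[j], x, t)) / (2 * eps)
    for j in range(3)
])
print(np.allclose(numeric, analytic_grad(w, x, t), atol=1e-6))  # expect True
```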
You may have noticed that in order to update the weights, we accumulate the error across all training samples. In reality, datasets can be very large, and iterating over all of them for just one update would make training impractically slow. One solution to this problem is the stochastic (or online) gradient descent (SGD) algorithm, which works in the same way as regular gradient descent, but updates the weights after every training sample. However, SGD is prone to noise in the data: if a sample is an outlier, we risk increasing the error instead of decreasing it. A good compromise between the two is mini-batch gradient descent, which accumulates the error over a small batch of samples (a mini-batch) and performs one weight update per batch, as sketched below. In practice, you'll almost always use mini-batch gradient descent.
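A hedged sketch of the mini-batch variant might look like this; the batch size, shuffling scheme, and names are assumptions rather than the book's own listing:

```python
import numpy as np

def train_minibatch(x, t, learning_rate=0.01, batch_size=32, n_epochs=100):
    """Mini-batch gradient descent: one weight update per small batch of samples."""
    w = np.zeros(x.shape[1])
    n = len(t)
    for epoch in range(n_epochs):
        order = np.random.permutation(n)      # visit the samples in a new random order each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            xb, tb = x[idx], t[idx]
            grad = 2 * xb.T.dot(xb.dot(w) - tb) / len(tb)  # gradient computed on this mini-batch only
            w -= learning_rate * grad                      # update after every mini-batch
    return w
```

Setting batch_size to 1 recovers SGD, while setting it to the full dataset size recovers ordinary (batch) gradient descent.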
Before we move on to the next section, we should mention that, besides the global minimum, the loss function might have multiple local minima, and minimizing its value is not as trivial as in this example.