Getting ready
In the previous chapter on building feedforward neural networks, we learned that the learning rate is used in updating the weights, and that the change in a weight is proportional to the amount of loss reduction.
More precisely, the change in a weight's value is the estimated decrease in loss per unit change in that weight, multiplied by the learning rate. Hence, the lower the learning rate, the smaller the change in the weight value, and vice versa.
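As a quick sketch of this update rule (the numbers here are illustrative assumptions, not values from the recipe):
wt = 1.477   # current weight value
lr = 0.01    # learning rate
# Assumed estimate of how much the loss decreases per unit increase in wt
loss_decrease_per_unit_wt = 0.8
wt = wt + lr * loss_decrease_per_unit_wt   # a small learning rate gives a small step
print(wt)    # 1.485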
You can essentially think of the weight values as lying on a continuous spectrum, with the weights initialized randomly somewhere on it. When the change in the weight values is large, the updates jump across that spectrum, and there is a good chance that many candidate weight values are never considered. When the change in the weight value is small, however, more candidate weight values are explored, and the weights are more likely to reach the global minimum.
To understand this further, let's consider the toy example of fitting the line y = 2x, where the initial weight value is 1.477 and the initial bias value is zero. The feedforward and backpropagation functions remain the same as in the previous chapter:
import numpy as np
from copy import deepcopy

def feed_forward(inputs, outputs, weights):
    # weights[0] is the weight and weights[1] is the bias of the fitted line
    hidden = np.dot(inputs, weights[0])
    out = hidden + weights[1]
    squared_error = np.square(out - outputs)
    return squared_error
def update_weights(inputs, outputs, weights, epochs, lr):
    for epoch in range(epochs):
        org_loss = feed_forward(inputs, outputs, weights)
        wts_tmp = deepcopy(weights)
        wts_tmp2 = deepcopy(weights)
        for ix, wt in enumerate(weights):
            # Perturb one parameter slightly to estimate its gradient numerically
            wts_tmp[-(ix+1)] += 0.0001
            loss = feed_forward(inputs, outputs, wts_tmp)
            # Average decrease in loss per unit increase in this parameter
            del_loss = np.sum(org_loss - loss)/(0.0001*len(inputs))
            # Move the parameter in the direction that reduces the loss,
            # scaled by the learning rate
            wts_tmp2[-(ix+1)] += del_loss*lr
            wts_tmp = deepcopy(weights)
        weights = deepcopy(wts_tmp2)
    return wts_tmp2
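The input arrays x and y, and the initial parameter list w, come from earlier in the chapter. If you are running this section on its own, a minimal setup consistent with the y = 2x example described above might look like the following (the exact arrays are an assumption, not the book's data):
# Hypothetical toy dataset for y = 2x
x = np.array([[1], [2], [3], [4]])
y = np.array([[2], [4], [6], [8]])
# Initial weight of 1.477 and bias of zero, as stated in the text
w = [1.477, 0.0]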
Note that the only change from the backpropagation function we saw in the previous chapter is that the learning rate is now passed in as a parameter. The value of the weight over a different number of epochs, when the learning rate is 0.01, is obtained as follows:
w_val = []
b_val = []
for k in range(1000):
    # Each call restarts from the same initial weights and trains for (k+1)
    # epochs, recording the weight and bias reached after that many epochs
    w_new, b_new = update_weights(x, y, w, (k+1), 0.01)
    w_val.append(w_new)
    b_val.append(b_new)
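Because each call to update_weights above restarts training from the initial weights, the loop repeats a lot of work. An equivalent and much cheaper variant (a sketch, reusing the functions and arrays defined above) trains once and records the parameters after every epoch:
w_val = []
b_val = []
weights = deepcopy(w)
for epoch in range(1000):
    # Train for one more epoch, continuing from where the last one left off
    weights = update_weights(x, y, weights, 1, 0.01)
    w_val.append(weights[0])
    b_val.append(weights[1])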
The plot of the change in weight over different epochs can be obtained using the following code:
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(w_val)
plt.title('Weight value over different epochs when learning rate is 0.01')
plt.xlabel('epochs')
plt.ylabel('weight value')
plt.grid(False)   # note: plt.grid('off') would turn the grid on, as a non-empty string is truthy
The output of the preceding code is as follows:
[Figure: weight value over different epochs when the learning rate is 0.01]
In a similar manner, the value of the weight over a different number of epochs when the learning rate is 0.1 is as follows:
[Figure: weight value over different epochs when the learning rate is 0.1]
The following figure shows the value of the weight over a different number of epochs when the learning rate is 0.5:
[Figure: weight value over different epochs when the learning rate is 0.5]
Note that, in the preceding scenario, there was a drastic change in the weight values initially. The 0.1 learning rate converged, while the 0.5 learning rate did not converge to an optimal solution: its updates were so large that the weight kept overshooting the optimum, so it never reached the optimal value of two. (The loss surface of this linear fit is convex, so the failure here is overshooting, not a local minimum.)
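To reproduce this comparison yourself, you can sweep the three learning rates with the functions defined earlier and compare the parameter values each rate reaches (a sketch; with the assumed toy arrays above, the 0.5 run blows up by many orders of magnitude within 100 epochs, while the smaller rates move toward a weight of two, the 0.01 run more slowly):
for lr in (0.01, 0.1, 0.5):
    # Train from the same initial parameters for each learning rate
    final_w, final_b = update_weights(x, y, deepcopy(w), 100, lr)
    print('lr =', lr, '-> weight =', final_w, ', bias =', final_b)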