- Neural Network Programming with TensorFlow
- Manpreet Singh Ghotra, Rajdeep Dua
Optimizers
We will study AdamOptimizer here; TensorFlow's AdamOptimizer applies Kingma and Ba's Adam algorithm to adapt the learning rate. Adam has several advantages over the simple GradientDescentOptimizer. The first is that it keeps exponential moving averages of the gradient and of its square, which lets Adam use a larger effective step size and converge to it without much fine-tuning.
The disadvantage of Adam is that it requires extra computation and state for every parameter at each training step. GradientDescentOptimizer can be used as well, but it would typically need more hyperparameter tuning before it converges as quickly.
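To make the moving averages concrete, here is a minimal NumPy sketch of the update rule from Kingma and Ba's paper; the function name is ours and the default hyperparameters simply mirror the paper's recommendations:
import numpy as np

def adamStep(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # first moment: exponential moving average of the gradient
    m = beta1 * m + (1 - beta1) * grad
    # second moment: exponential moving average of the squared gradient
    v = beta2 * v + (1 - beta2) * grad ** 2
    # bias-corrected estimates (t is the 1-based step count)
    mHat = m / (1 - beta1 ** t)
    vHat = v / (1 - beta2 ** t)
    # per-coordinate update scaled by the second moment
    theta = theta - lr * mHat / (np.sqrt(vHat) + eps)
    return theta, m, v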
The following example shows how to use AdamOptimizer:
- tf.train.Optimizer is the base class; instantiating a subclass such as tf.train.AdamOptimizer creates an optimizer
- tf.train.Optimizer.minimize(loss, var_list) adds the optimization operation to the computation graph (see the sketch after this list)
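Under the hood, minimize() combines two calls that can also be made separately, which is useful when gradients need to be inspected or clipped. A minimal sketch, assuming a loss tensor is already defined in the graph:
optimizer = tf.train.AdamOptimizer()
# list of (gradient, variable) pairs, computed by automatic differentiation
gradsAndVars = optimizer.compute_gradients(loss)
# adds the update operation to the graph; equivalent to optimizer.minimize(loss)
trainOp = optimizer.apply_gradients(gradsAndVars)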
In the following example, automatic differentiation computes the gradients; there is no need to derive them by hand:
import numpy as np
import seaborn
import matplotlib.pyplot as plt
import tensorflow as tf

# input dataset
xData = np.arange(100, step=.1)
yData = xData + 20 * np.sin(xData/10)

# scatter plot for input data
plt.scatter(xData, yData)
plt.show()

# defining data size and batch size
nSamples = 1000
batchSize = 100

# resize
xData = np.reshape(xData, (nSamples, 1))
yData = np.reshape(yData, (nSamples, 1))

# input placeholders
x = tf.placeholder(tf.float32, shape=(batchSize, 1))
y = tf.placeholder(tf.float32, shape=(batchSize, 1))

# init weight and bias
with tf.variable_scope("linearRegression"):
    W = tf.get_variable("weights", (1, 1), initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,), initializer=tf.constant_initializer(0.0))
    y_pred = tf.matmul(x, W) + b
    loss = tf.reduce_sum((y - y_pred)**2/nSamples)

# optimizer
opt = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # gradient descent loop for 500 steps
    for _ in range(500):
        # random minibatch
        indices = np.random.choice(nSamples, batchSize)
        X_batch, y_batch = xData[indices], yData[indices]
        # gradient descent step
        _, loss_val = sess.run([opt, loss], feed_dict={x: X_batch, y: y_batch})
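As noted earlier, plain gradient descent can be substituted by changing only the optimizer line, but it usually needs an explicitly tuned learning rate. A sketch, where the 0.01 value is only illustrative:
# plain gradient descent; the learning rate typically has to be tuned by hand
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)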
Here is the scatter plot for the dataset:
(figure: scatter plot of xData versus yData)
This is the plot of the learned model on the data:
(figure: the learned regression line overlaid on the scatter plot)
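The listing above does not include the code that draws this second plot. A minimal sketch, assuming it is placed inside the Session block after the training loop so that W and b can still be evaluated:
    # evaluate the trained parameters and overlay the fitted line on the data
    learnedW, learnedB = sess.run([W, b])
    plt.scatter(xData, yData)
    plt.plot(xData, xData * learnedW[0, 0] + learnedB[0], color="red")
    plt.show()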