
Automatic differentiation

TensorFlow provides a very convenient API that derives the gradients for us and updates the network parameters directly:

# Define the cost as the square of the errors
cost = tf.square(error)

# The Gradient Descent Optimizer will do the heavy lifting
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Define the function we want to approximate
def linear_fun(x):
    y = x[:, 0] * 2 + x[:, 1] * 4 + 1
    return y.reshape(y.shape[0], 1)

# Other variables during learning
train_batch_size = 100
test_batch_size = 50

# Normal TensorFlow - initialize values, create a session and run the model
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(1000):
    # Draw a fresh random training batch and take one gradient step
    x_value = np.random.rand(train_batch_size, 2)
    y_value = linear_fun(x_value)
    sess.run(optimizer, feed_dict={a_0: x_value, y: y_value})
    # Periodically evaluate on a held-out random batch
    if i % 100 == 0:
        test_x = np.random.rand(test_batch_size, 2)
        res_val = sess.run(res, feed_dict={a_0: test_x, y: linear_fun(test_x)})
        print(res_val)
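To see what minimize() is doing for us, it can help to write out the same loop by hand. The following is a minimal NumPy sketch (not TensorFlow code; all names here are illustrative) of the forward pass, the backward pass that automatic differentiation derives for a squared-error cost on a single linear layer, and the gradient-descent update:

```python
import numpy as np

# Toy data for the same target function: y = 2*x0 + 4*x1 + 1
rng = np.random.RandomState(0)
x = rng.rand(100, 2)
y = (x[:, 0] * 2 + x[:, 1] * 4 + 1).reshape(-1, 1)

# Parameters of a single linear layer
w = np.zeros((2, 1))
b = np.zeros((1, 1))
learning_rate = 0.1

for step in range(5000):
    # Forward pass
    pred = x.dot(w) + b          # shape (100, 1)
    error = pred - y
    cost = np.mean(error ** 2)

    # Backward pass: gradients of the mean squared cost,
    # which an autodiff engine would derive automatically
    grad_pred = 2 * error / len(x)               # d cost / d pred
    grad_w = x.T.dot(grad_pred)                  # d cost / d w
    grad_b = grad_pred.sum(axis=0, keepdims=True)  # d cost / d b

    # Gradient-descent update, i.e. what minimize() performs per step
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b
```

Because the target is exactly linear, the parameters converge to the true values: w approaches [2, 4] and b approaches 1, matching the coefficients in linear_fun.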

Beyond this basic setup, let's now discuss a few important concepts you are likely to encounter in practice.
