- TensorFlow Machine Learning Cookbook
- Nick McClure
Using Placeholders and Variables
Placeholders and variables are key tools for using computational graphs in TensorFlow. We must understand the difference between them and when it is best to use each to our advantage.
Getting ready
One of the most important distinctions to make with the data is whether it is a placeholder or a variable. Variables are the parameters of the algorithm, and TensorFlow keeps track of how to change these to optimize the algorithm. Placeholders are objects that let you feed in data of a specific type and shape; they hold the inputs that the results of the computational graph depend on, such as the expected outcome of a computation.
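To make the distinction concrete, here is a minimal sketch using the TensorFlow 1.x API (the names weights and inputs are only illustrative): a variable holds parameters that TensorFlow can update during optimization, while a placeholder reserves a slot of fixed type and shape for data supplied at run time:
import tensorflow as tf
# A variable: an algorithm parameter that TensorFlow tracks and can optimize.
weights = tf.Variable(tf.random_normal([2, 2]))
# A placeholder: a slot of a given type and shape for data fed in at run time.
inputs = tf.placeholder(tf.float32, shape=[2, 2])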
How to do it…
The main way to create a variable is with the Variable() function, which takes a tensor as an input and outputs a variable. This is only the declaration; we still need to initialize the variable. Initializing is what places the variable, with its corresponding methods, on the computational graph. Here is an example of creating and initializing a variable:
my_var = tf.Variable(tf.zeros([2, 3]))
sess = tf.Session()
initialize_op = tf.global_variables_initializer()
sess.run(initialize_op)
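As a quick check (continuing the session above), evaluating the variable after running the initializer shows its starting value:
print(sess.run(my_var))  # a 2x3 array of zeros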
To see what the computational graph looks like after creating and initializing a variable, see the next part in this recipe.
Placeholders are just holding the position for data to be fed into the graph. Placeholders get data from a feed_dict argument in the session. To put a placeholder in the graph, we must perform at least one operation on it. Here, we initialize the graph, declare x to be a placeholder, and define y as the identity operation on x, which just returns x. We then create data to feed into the x placeholder and run the identity operation. It is worth noting that TensorFlow will not return a self-referenced placeholder in the feed dictionary. The code is shown here and the resulting graph is shown in the next section:
import numpy as np
sess = tf.Session()
x = tf.placeholder(tf.float32, shape=[2, 2])
y = tf.identity(x)
x_vals = np.random.rand(2, 2)
sess.run(y, feed_dict={x: x_vals})
# Note that sess.run(x, feed_dict={x: x_vals}) will result in a self-referencing error.
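A placeholder's shape can also leave a dimension as None, which lets the same graph accept batches of any size along that axis. Here is a minimal sketch, reusing the session above (the names x_batch and y_batch are only illustrative):
x_batch = tf.placeholder(tf.float32, shape=[None, 2])  # any number of rows, 2 columns
y_batch = tf.identity(x_batch)
print(sess.run(y_batch, feed_dict={x_batch: np.random.rand(5, 2)}))  # feed a 5x2 batch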
How it works…
The computational graph of initializing a variable as a tensor of zeros is shown in the following figure:

Figure 1: Variable
In Figure 1, we can see what the computational graph looks like in detail with just one variable, initialized to all zeros. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph with less detail is the smaller graph outside of the grey region in the upper right corner. For more details on creating and visualizing graphs, see Chapter 10, Taking TensorFlow to Production, section 1.
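To generate a graph visualization like this yourself, one approach (a sketch; the log directory /tmp/variable_logs is an arbitrary choice) is to write the graph definition to disk with tf.summary.FileWriter and open it in TensorBoard:
sess = tf.Session()
my_var = tf.Variable(tf.zeros([2, 3]))
# Write the graph definition to disk so TensorBoard can render it.
writer = tf.summary.FileWriter('/tmp/variable_logs', sess.graph)
writer.close()
# Then launch: tensorboard --logdir=/tmp/variable_logs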
Similarly, the computational graph of feeding a numpy array into a placeholder can be seen in the following figure:

Figure 2: Here is the computational graph of an initialized placeholder. The grey shaded region is a very detailed view of the operations and constants involved. The main computational graph with less detail is the smaller graph outside of the grey region in the upper right.
There's more…
During the run of the computational graph, we have to tell TensorFlow when to initialize the variables we have created. While each variable has its own initializer method, the most common way to do this is with the helper function global_variables_initializer(). This function creates an operation in the graph that initializes all the variables we have created, as follows:
initializer_op = tf.global_variables_initializer()
But if we want to initialize a variable based on the results of initializing another variable, we have to initialize variables in the order we want, as follows:
sess = tf.Session()
first_var = tf.Variable(tf.zeros([2, 3]))
sess.run(first_var.initializer)
second_var = tf.Variable(tf.zeros_like(first_var))  # Depends on first_var
sess.run(second_var.initializer)