- Mastering TensorFlow 1.x
- Armando Fandango
Placing graph nodes on specific compute devices
Let us enable the logging of variable placement by defining a config object, setting its log_device_placement property to True, and then passing this config object to the session as follows:
import tensorflow as tf

tf.reset_default_graph()

# Define model parameters
w = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Define model input and output
x = tf.placeholder(tf.float32)
y = w * x + b

config = tf.ConfigProto()
config.log_device_placement = True

with tf.Session(config=config) as tfs:
    # initialize the variables and print the value of y
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))
We get the following output in the Jupyter Notebook console:
b: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
b/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
b/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
w: (VariableV2): /job:localhost/replica:0/task:0/device:GPU:0
w/read: (Identity): /job:localhost/replica:0/task:0/device:GPU:0
mul: (Mul): /job:localhost/replica:0/task:0/device:GPU:0
add: (Add): /job:localhost/replica:0/task:0/device:GPU:0
w/Assign: (Assign): /job:localhost/replica:0/task:0/device:GPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:GPU:0
x: (Placeholder): /job:localhost/replica:0/task:0/device:GPU:0
b/initial_value: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Const_1: (Const): /job:localhost/replica:0/task:0/device:GPU:0
w/initial_value: (Const): /job:localhost/replica:0/task:0/device:GPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:GPU:0
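The log shows that, with a GPU available, TensorFlow placed every node on GPU:0. Before pinning nodes elsewhere, it can help to check which devices the local runtime can actually see. The following is a minimal sketch, assuming TensorFlow 1.x; note that device_lib lives in an internal module, so its location is not part of the public API:

from tensorflow.python.client import device_lib

# Enumerate the devices visible to the local TensorFlow runtime,
# e.g. /device:CPU:0 and /device:GPU:0
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)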
Thus, by default, TensorFlow creates the variable and operation nodes on the device where it expects to get the highest performance. The variables and operations can be placed on specific devices by using the tf.device() function. Let us place the graph on the CPU:
tf.reset_default_graph()

with tf.device('/device:CPU:0'):
    # Define model parameters
    w = tf.get_variable(name='w', initializer=[.3], dtype=tf.float32)
    b = tf.get_variable(name='b', initializer=[-.3], dtype=tf.float32)
    # Define model input and output
    x = tf.placeholder(name='x', dtype=tf.float32)
    y = w * x + b

config = tf.ConfigProto()
config.log_device_placement = True

with tf.Session(config=config) as tfs:
    # initialize the variables and print the value of y
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))
In the Jupyter console, we see that the variables have now been placed on the CPU and that execution also takes place on the CPU:
b: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
b/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
b/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
w: (VariableV2): /job:localhost/replica:0/task:0/device:CPU:0
w/read: (Identity): /job:localhost/replica:0/task:0/device:CPU:0
mul: (Mul): /job:localhost/replica:0/task:0/device:CPU:0
add: (Add): /job:localhost/replica:0/task:0/device:CPU:0
w/Assign: (Assign): /job:localhost/replica:0/task:0/device:CPU:0
init: (NoOp): /job:localhost/replica:0/task:0/device:CPU:0
x: (Placeholder): /job:localhost/replica:0/task:0/device:CPU:0
b/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0
Const_1: (Const): /job:localhost/replica:0/task:0/device:CPU:0
w/initial_value: (Const): /job:localhost/replica:0/task:0/device:CPU:0
Const: (Const): /job:localhost/replica:0/task:0/device:CPU:0
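One caveat: if the device named in tf.device() does not exist on the machine, the session fails with an error when it tries to place the nodes. TensorFlow 1.x provides a soft-placement option that falls back to an available device instead. The following is a minimal sketch; the choice of /device:GPU:1 as the missing device is only for illustration:

tf.reset_default_graph()

# Pin the graph to a device that may not exist on this machine
with tf.device('/device:GPU:1'):
    w = tf.get_variable(name='w', initializer=[.3], dtype=tf.float32)
    b = tf.get_variable(name='b', initializer=[-.3], dtype=tf.float32)
    x = tf.placeholder(name='x', dtype=tf.float32)
    y = w * x + b

config = tf.ConfigProto()
config.log_device_placement = True
# Fall back to an available device instead of raising an error
config.allow_soft_placement = True

with tf.Session(config=config) as tfs:
    tfs.run(tf.global_variables_initializer())
    print('output', tfs.run(y, {x: [1, 2, 3, 4]}))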