If you define a variable with a name that has already been defined, TensorFlow throws an exception. Hence, it is convenient to use the tf.get_variable() function instead of tf.Variable(). The function tf.get_variable() returns an existing variable with the same name if it exists, and creates a new variable with the specified shape and initializer if it does not. For example:
w = tf.get_variable(name='w', shape=[1], dtype=tf.float32, initializer=[.3])
b = tf.get_variable(name='b', shape=[1], dtype=tf.float32, initializer=[-.3])
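As a quick sanity check, these variables can be initialized and inspected inside a session. This is a minimal sketch using the standard TensorFlow 1.x session workflow:

with tf.Session() as sess:
    # Run the initializers of all global variables, including w and b
    sess.run(tf.global_variables_initializer())
    # Prints the initial values of w and b: [0.3] and [-0.3]
    print(sess.run([w, b]))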
The initializer can be a tensor or a list of values, as shown in the above example, or one of the built-in initializers (a usage sketch follows this list):
tf.constant_initializer
tf.random_normal_initializer
tf.truncated_normal_initializer
tf.random_uniform_initializer
tf.uniform_unit_scaling_initializer
tf.zeros_initializer
tf.ones_initializer
tf.orthogonal_initializer
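As a hedged sketch, a couple of these built-in initializers can be passed to tf.get_variable() as follows; the variable names w2 and b2 are chosen purely for illustration:

# Weight matrix drawn from a truncated normal distribution
w2 = tf.get_variable(name='w2', shape=[3, 4], dtype=tf.float32,
                     initializer=tf.truncated_normal_initializer(stddev=0.1))
# Bias vector initialized to all zeros
b2 = tf.get_variable(name='b2', shape=[4], dtype=tf.float32,
                     initializer=tf.zeros_initializer())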
In distributed TensorFlow, where we can run the code across machines, tf.get_variable() gives us global variables. To get local variables, TensorFlow has a function with a similar signature: tf.get_local_variable().
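For example, a local variable can be created with essentially the same call; this is a minimal sketch, with the name 'counter' chosen for illustration:

# Added to GraphKeys.LOCAL_VARIABLES instead of GraphKeys.GLOBAL_VARIABLES
counter = tf.get_local_variable(name='counter', shape=[1], dtype=tf.float32,
                                initializer=tf.zeros_initializer())
# Note that local variables are covered by tf.local_variables_initializer(),
# not by tf.global_variables_initializer()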
Sharing or Reusing Variables: Getting already-defined variables promotes reuse. However, an exception will be thrown if the reuse flag is not set, either by calling tf.get_variable_scope().reuse_variables() or by opening the scope with tf.variable_scope(reuse=True).
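To make this concrete, here is a minimal sketch of reusing a variable across two scope blocks; the scope name 'model' is chosen for illustration:

with tf.variable_scope('model'):
    v1 = tf.get_variable(name='v', shape=[1], initializer=tf.zeros_initializer())
with tf.variable_scope('model', reuse=True):
    v2 = tf.get_variable(name='v')  # returns the existing variable 'model/v'
assert v1 is v2  # both calls refer to the same variable object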
Now that you have learned how to define tensors, constants, operations, placeholders, and variables, let's learn about the next level of abstraction in TensorFlow, which combines these basic elements to form a basic unit of computation: the data flow graph, or computational graph.