Gets an existing local variable or creates a new one.
Compat alias for migration: `tf.compat.v1.get_local_variable`. See the Migration guide for more details.
```python
tf.get_local_variable(
    name, shape=None, dtype=None, initializer=None, regularizer=None,
    trainable=False, collections=None, caching_device=None,
    partitioner=None, validate_shape=True, use_resource=None,
    custom_getter=None, constraint=None,
    synchronization=tf.VariableSynchronization.AUTO,
    aggregation=tf.VariableAggregation.NONE
)
```
Behavior is the same as in `get_variable`, except that variables are added to the `LOCAL_VARIABLES` collection and `trainable` is set to `False`. This function prefixes the name with the current variable scope and performs reuse checks. See the Variable Scope How To for an extensive description of how reusing works. Here is a basic example:
```python
def foo():
    with tf.variable_scope("foo", reuse=tf.AUTO_REUSE):
        v = tf.get_variable("v", [1])
    return v

v1 = foo()  # Creates v.
v2 = foo()  # Gets the same, existing v.
assert v1 == v2
```
If `initializer` is `None` (the default), the default initializer passed in the variable scope will be used. If that one is `None` too, a `glorot_uniform_initializer` will be used. The initializer can also be a Tensor, in which case the variable is initialized to this value and shape.
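As a sketch of the Tensor-initializer case (the scope name `init_demo` is invented for this example), a local variable can take both its value and its shape from a constant Tensor:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Initialize from a Tensor: the variable's shape is inferred from it,
# so no explicit `shape` argument is needed.
init_t = tf.constant([[1.0, 2.0], [3.0, 4.0]])
with tf.compat.v1.variable_scope("init_demo"):
    v = tf.compat.v1.get_local_variable("v", initializer=init_t)

with tf.compat.v1.Session() as sess:
    # Local variables are initialized separately from global ones.
    sess.run(tf.compat.v1.local_variables_initializer())
    value = sess.run(v)

print(value.shape)  # (2, 2)
```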
Similarly, if the regularizer is `None` (the default), the default regularizer passed in the variable scope will be used (if that is `None` too, then by default no regularization is performed).
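A small sketch of the regularizer hook (the scope name `reg_demo` is made up): the regularizer is applied to the newly created variable and its result is collected under `tf.GraphKeys.REGULARIZATION_LOSSES`:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope("reg_demo"):
    v = tf.compat.v1.get_local_variable(
        "v",
        shape=[2],
        initializer=tf.compat.v1.ones_initializer(),
        regularizer=tf.nn.l2_loss,  # sum(x**2) / 2
    )

# The applied regularizer's result is added to this collection.
losses = tf.compat.v1.get_collection(
    tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES)
print(len(losses))  # 1

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    reg_val = sess.run(losses[0])

print(reg_val)  # l2_loss of ones([2]) = 1.0
```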
If a partitioner is provided, a `PartitionedVariable` is returned. Accessing this object as a `Tensor` returns the shards concatenated along the partition axis.

Some useful partitioners are available. See, e.g., `variable_axis_size_partitioner` and `min_max_variable_partitioner`.
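A minimal sketch of the partitioned case, using `fixed_size_partitioner` (another of the bundled partitioners) and an invented scope name:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope("part_demo"):
    v = tf.compat.v1.get_local_variable(
        "v",
        shape=[4, 3],
        initializer=tf.compat.v1.zeros_initializer(),
        # Split into 2 shards along axis 0.
        partitioner=tf.compat.v1.fixed_size_partitioner(2, axis=0),
    )

print(type(v).__name__)  # PartitionedVariable
print(len(list(v)))      # 2 shards
# Reading it as a Tensor concatenates the shards along the partition axis.
concat = tf.convert_to_tensor(v)
print(concat.shape.as_list())  # [4, 3]
```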
| Arg | Description |
|---|---|
| `name` | The name of the new or existing variable. |
| `shape` | Shape of the new or existing variable. |
| `dtype` | Type of the new or existing variable (defaults to `DT_FLOAT`). |
| `initializer` | Initializer for the variable if one is created. Can either be an initializer object or a Tensor. If it's a Tensor, its shape must be known unless `validate_shape` is `False`. |
| `regularizer` | A (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection `tf.GraphKeys.REGULARIZATION_LOSSES` and can be used for regularization. |
| `collections` | List of graph collections keys to add the Variable to. Defaults to `[GraphKeys.LOCAL_VARIABLES]` (see `tf.Variable`). |
| `caching_device` | Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device. If not `None`, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through `Switch` and other conditional statements. |
| `partitioner` | Optional callable that accepts a fully defined `TensorShape` and dtype of the Variable to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned). |
| `validate_shape` | If `False`, allows the variable to be initialized with a value of unknown shape. If `True`, the default, the shape of `initial_value` must be known. For this to be used the initializer must be a Tensor and not an initializer object. |
| `use_resource` | If `False`, creates a regular Variable. If `True`, creates an experimental ResourceVariable instead with well-defined semantics. Defaults to `False` (will later change to `True`). When eager execution is enabled this argument is always forced to be `True`. |
| `custom_getter` | Callable that takes as a first argument the true getter, and allows overwriting the internal `get_variable` method. The signature of `custom_getter` should match that of this method. A simple identity custom getter that simply creates variables with modified names is: |
```python
def custom_getter(getter, name, *args, **kwargs):
    return getter(name + '_suffix', *args, **kwargs)
```
| `constraint` | An optional projection function to be applied to the variable after being updated by an `Optimizer` (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). |
| `synchronization` | Indicates when a distributed variable will be synchronized. Accepted values are constants defined in the class `tf.VariableSynchronization`. |
| `aggregation` | Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class `tf.VariableAggregation`. |
| Returns |
|---|
| The created or existing `Variable` (or `PartitionedVariable`, if a partitioner was used). |
| Raises | |
|---|---|
| `ValueError` | when creating a new variable and shape is not declared, when violating reuse during variable creation, or when `initializer` dtype and `dtype` don't match. Reuse is set inside `variable_scope`. |
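The identity custom getter shown above can be wired in through `variable_scope` (the scope name `cg_demo` is invented for this sketch):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def custom_getter(getter, name, *args, **kwargs):
    # Delegate to the true getter with a modified variable name.
    return getter(name + "_suffix", *args, **kwargs)

with tf.compat.v1.variable_scope("cg_demo", custom_getter=custom_getter):
    v = tf.compat.v1.get_local_variable("v", shape=[1])

print(v.name)  # cg_demo/v_suffix:0
```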
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.