A replacement for tf.Variable which follows initial value placement.
Inherits From: Variable
tf.experimental.dtensor.DVariable(
initial_value, *args, dtype=None, **kwargs
)
The class also handles restore/save operations in DTensor. Note that, at the moment, DVariable may fall back to a normal tf.Variable if initial_value is not a DTensor.
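For example, a DVariable is typically created from a DTensor initial value so that it picks up the mesh and layout of that value. A minimal sketch, assuming a two-device mesh can be created (device configuration, e.g. virtual CPU devices or dtensor.initialize_accelerator_system(), is omitted):
import tensorflow as tf
from tensorflow.experimental import dtensor

# Assumes two devices are available to form the mesh; setup omitted here.
mesh = dtensor.create_mesh([("batch", 2)])
layout = dtensor.Layout.replicated(mesh, rank=1)

# initial_value is a DTensor, so the DVariable follows its placement
# instead of falling back to a normal tf.Variable.
initial = dtensor.copy_to_mesh(tf.constant([1.0, 2.0, 3.0]), layout)
v = dtensor.DVariable(initial)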
| Attributes | |
|---|---|
aggregation | |
constraint | Returns the constraint function associated with this variable. |
create | The op responsible for initializing this variable. |
device | The device this variable is on. |
dtype | The dtype of this variable. |
graph | The Graph of this variable. |
handle | The handle by which this variable can be accessed. |
initial_value | Returns the Tensor used as the initial value for the variable. |
initializer | The op responsible for initializing this variable. |
name | The name of the handle for this variable. |
op | The op for this variable. |
save_as_bf16 | |
shape | The shape of this variable. |
synchronization | |
trainable | |
assign(
value, use_locking=None, name=None, read_value=True
)
Assigns a new value to this variable.
| Args | |
|---|---|
value | A Tensor. The new value for this variable. |
use_locking | If True, use locking during the assignment. |
name | The name to use for the assignment. |
read_value | A bool. Whether to read and return the new value of the variable or not. |
| Returns | |
|---|---|
If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None. |
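For example, a minimal eager-mode sketch (DVariable inherits this behavior from tf.Variable):
v = tf.Variable(1.0)
v.assign(2.0)      # read_value=True, so the updated value is returned
print(v.numpy())   # 2.0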
assign_add(
delta, use_locking=None, name=None, read_value=True
)
Adds a value to this variable.
| Args | |
|---|---|
delta | A Tensor. The value to add to this variable. |
use_locking | If True, use locking during the operation. |
name | The name to use for the operation. |
read_value | A bool. Whether to read and return the new value of the variable or not. |
| Returns | |
|---|---|
If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None. |
assign_sub(
delta, use_locking=None, name=None, read_value=True
)
Subtracts a value from this variable.
| Args | |
|---|---|
delta | A Tensor. The value to subtract from this variable. |
use_locking | If True, use locking during the operation. |
name | The name to use for the operation. |
read_value | A bool. Whether to read and return the new value of the variable or not. |
| Returns | |
|---|---|
If read_value is True, this method will return the new value of the variable after the assignment has completed. Otherwise, when in graph mode it will return the Operation that does the assignment, and when in eager mode it will return None. |
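For example, combining assign_add and assign_sub in eager mode:
v = tf.Variable(10.0)
v.assign_add(5.0)  # v is now 15.0
v.assign_sub(3.0)  # v is now 12.0
print(v.numpy())   # 12.0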
batch_scatter_update(
sparse_delta, use_locking=False, name=None
)
Assigns tf.IndexedSlices to this variable batch-wise.
Analogous to batch_gather. This assumes that this variable and the sparse_delta IndexedSlices have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:
num_prefix_dims = sparse_delta.indices.ndims - 1
batch_dim = num_prefix_dims + 1
sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[batch_dim:]
where
sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims]
And the operation performed can be expressed as:
var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[i_1, ..., i_n, j]
When sparse_delta.indices is a 1D tensor, this operation is equivalent to scatter_update.
An alternative would be to loop over the first ndims dimensions of the variable and use scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to be assigned to this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
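A small sketch of the batch-wise semantics above, using a plain tf.Variable with a leading batch dimension of 2; each row of indices selects the column to overwrite in the matching row of the variable:
var = tf.Variable([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])
delta = tf.IndexedSlices(
    values=tf.constant([[10.0], [20.0]]),  # one update per batch row
    indices=tf.constant([[0], [2]]))       # column to write in each row
var.batch_scatter_update(delta)
print(var.numpy())  # [[10. 2. 3.], [4. 5. 20.]]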
count_up_to(
limit
)
Increments this variable until it reaches limit. (deprecated)
When that Op is run it tries to increment the variable by 1. If incrementing the variable would bring it above limit then the Op raises the exception OutOfRangeError.
If no error is raised, the Op outputs the value of the variable before the increment.
This is essentially a shortcut for count_up_to(self, limit).
| Args | |
|---|---|
limit | value at which incrementing the variable raises an error. |
| Returns | |
|---|---|
A Tensor that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct. |
eval(
session=None
)
Evaluates and returns the value of this variable.
experimental_ref()
DEPRECATED FUNCTION: use ref() instead.
@staticmethod
from_proto(
variable_def, import_scope=None
)
Returns a Variable object created from variable_def.
gather_nd(
indices, name=None
)
Reads the value of this variable sparsely, using gather_nd.
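For example, each inner list below is a full (row, column) coordinate into the variable:
v = tf.Variable([[1, 2], [3, 4], [5, 6]])
print(v.gather_nd([[0, 1], [2, 0]]).numpy())  # [2 5]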
get_shape() -> tf.TensorShape
Alias of Variable.shape.
initialized_value()
Returns the value of the initialized variable. (deprecated)
You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable.
# Initialize 'v' with a random tensor.
v = tf.Variable(tf.random.truncated_normal([10, 40]))
# Use `initialized_value` to guarantee that `v` has been
# initialized before its value is used to initialize `w`.
# The random values are picked only once.
w = tf.Variable(v.initialized_value() * 2.0)
| Returns | |
|---|---|
A Tensor holding the value of this variable after its initializer has run. |
is_initialized(
name=None
)
Checks whether a resource variable has been initialized.
Outputs boolean scalar indicating whether the tensor has been initialized.
| Args | |
|---|---|
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor of type bool. |
load(
value, session=None
)
Load new value into this variable. (deprecated)
Writes a new value to the variable's memory. Doesn't add ops to the graph.
This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions.
v = tf.Variable([1, 2])
init = tf.compat.v1.global_variables_initializer()
with tf.compat.v1.Session() as sess:
sess.run(init)
# Usage passing the session explicitly.
v.load([2, 3], sess)
print(v.eval(sess)) # prints [2 3]
# Usage with the default session. The 'with' block
# above makes 'sess' the default session.
v.load([3, 4], sess)
print(v.eval()) # prints [3 4]
| Args | |
|---|---|
value | New variable value |
session | The session to use to evaluate this variable. If none, the default session is used. |
| Raises | |
|---|---|
ValueError | Session is not passed and no default session is available. |
numpy()
read_value()
Constructs an op which reads the value of this variable.
Should be used when there are multiple reads, or when it is desirable to read the value only after some condition is true.
| Returns | |
|---|---|
| The value of the variable. |
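For example, in eager mode the returned tensor is a snapshot that later assignments do not change:
v = tf.Variable([1.0, 2.0])
t = v.read_value()
v.assign([3.0, 4.0])
print(t.numpy())  # [1. 2.] -- the earlier snapshot is unaffected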
read_value_no_copy()
Constructs an op which reads the value of this variable without copy.
The variable is read without making a copy even when it has been sparsely accessed. Variables in copy-on-read mode will be converted to copy-on-write mode.
| Returns | |
|---|---|
| The value of the variable. |
ref()
Returns a hashable reference object to this Variable.
The primary use case for this API is to put variables in a set/dictionary. We can't put variables in a set/dictionary directly, as variable.__hash__() is no longer available starting with TensorFlow 2.0.
The following will raise an exception starting with TensorFlow 2.0:
x = tf.Variable(5)
y = tf.Variable(10)
z = tf.Variable(10)
variable_set = {x, y, z}
Traceback (most recent call last):
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
variable_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
Instead, we can use variable.ref().
variable_set = {x.ref(), y.ref(), z.ref()}
x.ref() in variable_set
True
variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
variable_dict[y.ref()]
'ten'
Also, the reference object provides a .deref() function that returns the original Variable.
x = tf.Variable(5)
x.ref().deref()
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=5>
scatter_add(
sparse_delta, use_locking=False, name=None
)
Adds tf.IndexedSlices to this variable.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to be added to this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
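For example:
v = tf.Variable([1.0, 2.0, 3.0, 4.0])
delta = tf.IndexedSlices(values=tf.constant([10.0, 20.0]),
                         indices=tf.constant([0, 2]))
v.scatter_add(delta)
print(v.numpy())  # [11.  2. 23.  4.]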
scatter_div(
sparse_delta, use_locking=False, name=None
)
Divide this variable by tf.IndexedSlices.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to divide this variable by. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
scatter_max(
sparse_delta, use_locking=False, name=None
)
Updates this variable with the max of tf.IndexedSlices and itself.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to use as an argument of max with this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
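For example, each updated entry keeps the larger of its current value and the scattered value:
v = tf.Variable([1.0, 5.0, 3.0])
delta = tf.IndexedSlices(values=tf.constant([4.0, 2.0]),
                         indices=tf.constant([0, 1]))
v.scatter_max(delta)
print(v.numpy())  # [4. 5. 3.]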
scatter_min(
sparse_delta, use_locking=False, name=None
)
Updates this variable with the min of tf.IndexedSlices and itself.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to use as an argument of min with this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
scatter_mul(
sparse_delta, use_locking=False, name=None
)
Multiply this variable by tf.IndexedSlices.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to multiply this variable by. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
scatter_nd_add(
indices, updates, name=None
)
Applies sparse addition to individual values or slices in a Variable.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = ref.scatter_nd_add(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(add))
The resulting update to ref would look like this:
[1, 13, 3, 14, 14, 6, 7, 20]
See tf.scatter_nd for more details about how to make updates to slices.
| Args | |
|---|---|
indices | The indices to be used in the operation. |
updates | The values to be used in the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
scatter_nd_max(
indices, updates, name=None
)
Updates entries of this variable with the element-wise max of themselves and the scattered updates.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
See tf.scatter_nd for more details about how to make updates to slices.
| Args | |
|---|---|
indices | The indices to be used in the operation. |
updates | The values to be used in the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
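For example, each indexed entry keeps the larger of its current value and the corresponding update:
v = tf.Variable([1.0, 8.0, 3.0, 4.0])
indices = tf.constant([[1], [3]])
updates = tf.constant([5.0, 9.0])
v.scatter_nd_max(indices, updates)
print(v.numpy())  # [1. 8. 3. 9.]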
scatter_nd_min(
indices, updates, name=None
)
Updates entries of this variable with the element-wise min of themselves and the scattered updates.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
See tf.scatter_nd for more details about how to make updates to slices.
| Args | |
|---|---|
indices | The indices to be used in the operation. |
updates | The values to be used in the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
scatter_nd_sub(
indices, updates, name=None
)
Applies sparse subtraction to individual values or slices in a Variable.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_sub(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))
The resulting update to ref would look like this:
[1, -9, 3, -6, -4, 6, 7, -4]
See tf.scatter_nd for more details about how to make updates to slices.
| Args | |
|---|---|
indices | The indices to be used in the operation. |
updates | The values to be used in the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
scatter_nd_update(
indices, updates, name=None
)
Applies sparse assignment to individual values or slices in a Variable.
ref is a Tensor with rank P and indices is a Tensor of rank Q.
indices must be an integer tensor, containing indices into ref. It must be of shape [d_0, ..., d_{Q-2}, K] where 0 < K <= P.
The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the Kth dimension of ref.
updates is a Tensor of rank Q-1+P-K with shape:
[d_0, ..., d_{Q-2}, ref.shape[K], ..., ref.shape[P-1]].
For example, say we want to assign 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:
ref = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = ref.scatter_nd_update(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))
The resulting update to ref would look like this:
[1, 11, 3, 10, 9, 6, 7, 12]
See tf.scatter_nd for more details about how to make updates to slices.
| Args | |
|---|---|
indices | The indices to be used in the operation. |
updates | The values to be used in the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
scatter_sub(
sparse_delta, use_locking=False, name=None
)
Subtracts tf.IndexedSlices from this variable.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to be subtracted from this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
scatter_update(
sparse_delta, use_locking=False, name=None
)
Assigns tf.IndexedSlices to this variable.
| Args | |
|---|---|
sparse_delta | tf.IndexedSlices to be assigned to this variable. |
use_locking | If True, use locking during the operation. |
name | the name of the operation. |
| Returns | |
|---|---|
| The updated variable. |
| Raises | |
|---|---|
TypeError | if sparse_delta is not an IndexedSlices. |
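For example:
v = tf.Variable([1.0, 2.0, 3.0, 4.0])
delta = tf.IndexedSlices(values=tf.constant([9.0, 8.0]),
                         indices=tf.constant([3, 1]))
v.scatter_update(delta)
print(v.numpy())  # [1. 8. 3. 9.]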
set_shape(
shape
)
Overrides the shape for this variable.
| Args | |
|---|---|
shape | the TensorShape representing the overridden shape. |
sparse_read(
indices, name=None
)
Reads the value of this variable sparsely, using gather.
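For example, sparse_read gathers whole rows along the first dimension:
v = tf.Variable([[1, 2], [3, 4], [5, 6]])
print(v.sparse_read([2, 0]).numpy())  # [[5 6] [1 2]]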
to_proto(
export_scope=None
)
Converts a ResourceVariable to a VariableDef protocol buffer.
| Args | |
|---|---|
export_scope | Optional string. Name scope to remove. |
| Raises | |
|---|---|
RuntimeError | If run in EAGER mode. |
| Returns | |
|---|---|
A VariableDef protocol buffer, or None if the Variable is not in the specified name scope. |
value()
A cached operation which reads the value of this variable.
__abs__(
name=None
)
__add__(
y
)
__and__(
y
)
__array__(
dtype=None
)
Allows direct conversion to a numpy array.
np.array(tf.Variable([1.0]))
array([1.], dtype=float32)
| Returns | |
|---|---|
| The variable value as a numpy array. |
__bool__()
__div__(
y
)
__eq__(
other
)
Compares two variables element-wise for equality.
__floordiv__(
y
)
__ge__(
y: Annotated[Any, tf.raw_ops.Any],
name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x >= y) element-wise.
Note: math.greater_equal supports broadcasting.
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
| Args | |
|---|---|
x | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. |
y | A Tensor. Must have the same type as x. |
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor of type bool. |
__getitem__(
slice_spec
)
Creates a slice helper object given a variable.
This allows creating a sub-tensor from part of the current contents of a variable. See tf.Tensor.__getitem__ for detailed examples of slicing.
This function in addition also allows assignment to a sliced range. This is similar to __setitem__ functionality in Python. However, the syntax is different so that the user can capture the assignment operation for grouping or passing to sess.run() in TF1. For example,
import tensorflow as tf
A = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.float32)
print(A[:2, :2])  # => [[1, 2], [4, 5]]
A[:2, :2].assign(22. * tf.ones((2, 2)))
print(A)  # => [[22, 22, 3], [22, 22, 6], [7, 8, 9]]
Note that assignments currently do not support NumPy broadcasting semantics.
| Args | |
|---|---|
var | An ops.Variable object. |
slice_spec | The arguments to Tensor.getitem. |
| Returns | |
|---|---|
The appropriate slice of "tensor", based on "slice_spec", as an operator. The operator also has an assign() method that can be used to generate an assignment operator. |
| Raises | |
|---|---|
ValueError | If a slice range has negative size. |
TypeError | If the slice indices aren't int, slice, ellipsis, tf.newaxis or int32/int64 tensors. |
__gt__(
y: Annotated[Any, tf.raw_ops.Any],
name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x > y) element-wise.
Note: math.greater supports broadcasting.
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
| Args | |
|---|---|
x | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. |
y | A Tensor. Must have the same type as x. |
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor of type bool. |
__invert__(
name=None
)
__iter__()
When executing eagerly, iterates over the value of the variable.
__le__(
y: Annotated[Any, tf.raw_ops.Any],
name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x <= y) element-wise.
Note: math.less_equal supports broadcasting.
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
| Args | |
|---|---|
x | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. |
y | A Tensor. Must have the same type as x. |
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor of type bool. |
__lt__(
y: Annotated[Any, tf.raw_ops.Any],
name=None
) -> Annotated[Any, tf.raw_ops.Any]
Returns the truth value of (x < y) element-wise.
Note: math.less supports broadcasting.
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
| Args | |
|---|---|
x | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. |
y | A Tensor. Must have the same type as x. |
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor of type bool. |
__matmul__(
y
)
__mod__(
y
)
__mul__(
y
)
__ne__(
other
)
Compares two variables element-wise for inequality.
__neg__(
name=None
) -> Annotated[Any, tf.raw_ops.Any]
Computes numerical negative value element-wise.
I.e., \(y = -x\).
| Args | |
|---|---|
x | A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128. |
name | A name for the operation (optional). |
| Returns | |
|---|---|
A Tensor. Has the same type as x. |
__nonzero__()
__or__(
y
)
__pow__(
y
)
__radd__(
x
)
__rand__(
x
)
__rdiv__(
x
)
__rfloordiv__(
x
)
__rmatmul__(
x
)
__rmod__(
x
)
__rmul__(
x
)
__ror__(
x
)
__rpow__(
x
)
__rsub__(
x
)
__rtruediv__(
x
)
__rxor__(
x
)
__sub__(
y
)
__truediv__(
y
)
__xor__(
y
)
© 2022 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/experimental/dtensor/DVariable