
tf.Tensor

View source on GitHub

A tensor is a multidimensional array of elements represented by a tf.Tensor object. All elements are of a single known data type.

When writing a TensorFlow program, the main object that is manipulated and passed around is the tf.Tensor.

A tf.Tensor has the following properties:

  • a single data type (float32, int32, or string, for example)
  • a shape

TensorFlow supports eager execution and graph execution. In eager execution, operations are evaluated immediately. In graph execution, a computational graph is constructed for later evaluation.

TensorFlow defaults to eager execution. In the example below, the matrix multiplication results are calculated immediately.

# Compute some values using a Tensor
c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d)
print(e)
tf.Tensor(
[[1. 3.]
 [3. 7.]], shape=(2, 2), dtype=float32)

Note that during eager execution, you may discover your Tensors are actually of type EagerTensor. This is an internal detail, but it does give you access to a useful method, numpy():

type(e)
<class '...ops.EagerTensor'>
print(e.numpy())
  [[1. 3.]
   [3. 7.]]

In TensorFlow, tf.functions are a common way to define graph execution.

A Tensor's shape (that is, the rank of the Tensor and the size of each dimension) may not always be fully known. In tf.function definitions, the shape may only be partially known.

Most operations produce tensors of fully-known shapes if the shapes of their inputs are also fully known, but in some cases it's only possible to find the shape of a tensor at execution time.
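For instance, an operation like tf.boolean_mask, whose output length depends on the data, has only a partially known shape at trace time. A minimal sketch (the function name keep_positive is illustrative):

@tf.function
def keep_positive(x):
  # The number of surviving elements depends on the data, so the static
  # shape is only partially known while tracing.
  y = tf.boolean_mask(x, x > 0)
  print("Static shape:", y.shape)  # prints during tracing
  return y
cf = keep_positive.get_concrete_function(tf.TensorSpec([4]))
Static shape: (None,)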

A number of specialized tensors are available: see tf.Variable, tf.constant, tf.placeholder, tf.sparse.SparseTensor, and tf.RaggedTensor.

For more on Tensors, see the guide.

Args
op An Operation. Operation that computes this tensor.
value_index An int. Index of the operation's endpoint that produces this tensor.
dtype A DType. Type of elements stored in this tensor.
Raises
TypeError If the op is not an Operation.
Attributes
device The name of the device on which this tensor will be produced, or None.
dtype The DType of elements in this tensor.
graph The Graph that contains this tensor.
name The string name of this tensor.
op The Operation that produces this tensor as an output.
shape Returns a tf.TensorShape that represents the shape of this tensor.
t = tf.constant([1,2,3,4,5])
t.shape
TensorShape([5])

tf.Tensor.shape is equivalent to tf.Tensor.get_shape().

In a tf.function or when building a model using tf.keras.Input, they return the build-time shape of the tensor, which may be partially unknown.

A tf.TensorShape is not a tensor. Use tf.shape(t) to get a tensor containing the shape, calculated at runtime.
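For example, a short sketch of the difference:

t = tf.constant([[1, 2, 3], [4, 5, 6]])
t.shape          # static, build-time shape: a tf.TensorShape
TensorShape([2, 3])
tf.shape(t)      # dynamic shape: a tf.Tensor computed at runtime
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 3], dtype=int32)>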

See tf.Tensor.get_shape(), and tf.TensorShape for details and examples.

value_index The index of this tensor in the outputs of its Operation.

Methods

consumers

View source

Returns a list of Operations that consume this tensor.

Returns
A list of Operations.
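Consumers are only tracked for graph tensors. A minimal sketch using an explicitly built graph (variable names illustrative, printed output approximate):

g = tf.Graph()
with g.as_default():
  a = tf.constant(1.0)
  b = a + 2.0            # creates an op that consumes `a`
  print(a.consumers())   # e.g. [<tf.Operation 'add' type=AddV2>]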

eval

View source

Evaluates this tensor in a Session.

Note: If you are not using compat.v1 libraries, you should not need this (or feed_dict or Session). In eager execution (or within tf.function) you do not need to call eval.

Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.

Note: Before invoking Tensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.
Args
feed_dict A dictionary that maps Tensor objects to feed values. See tf.Session.run for a description of the valid feed values.
session (Optional.) The Session to be used to evaluate this tensor. If none, the default session will be used.
Returns
A numpy array corresponding to the value of this tensor.
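A minimal compat.v1-style sketch (variable names illustrative): the tensor is built in a graph and then evaluated in a Session.

g = tf.Graph()
with g.as_default():
  c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
  e = tf.matmul(c, c)
  with tf.compat.v1.Session() as sess:  # becomes the default session
    print(e.eval())                     # equivalent to sess.run(e)
[[ 7. 10.]
 [15. 22.]]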

experimental_ref

View source

DEPRECATED FUNCTION

get_shape

View source

Returns a tf.TensorShape that represents the shape of this tensor.

In eager execution the shape is always fully-known.

a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(a.shape)
(2, 3)

tf.Tensor.get_shape() is equivalent to tf.Tensor.shape.

When executing in a tf.function or building a model using tf.keras.Input, Tensor.shape may return a partial shape (including None for unknown dimensions). See tf.TensorShape for more details.

inputs = tf.keras.Input(shape = [10])
# Unknown batch size
print(inputs.shape)
(None, 10)

The shape is computed using shape inference functions that are registered for each tf.Operation.

The returned tf.TensorShape is determined at build time, without executing the underlying kernel. It is not a tf.Tensor. If you need a shape tensor, either convert the tf.TensorShape to a tf.constant, or use the tf.shape(tensor) function, which returns the tensor's shape at execution time.

This is useful for debugging and providing early errors. For example, when tracing a tf.function, no ops are being executed, so shapes may be unknown (see the Concrete Functions Guide for details).

@tf.function
def my_matmul(a, b):
  result = a@b
  # the `print` executes during tracing.
  print("Result shape: ", result.shape)
  return result

The shape inference functions propagate shapes to the extent possible:

f = my_matmul.get_concrete_function(
  tf.TensorSpec([None,3]),
  tf.TensorSpec([3,5]))
Result shape: (None, 5)

Tracing may fail if a shape mismatch can be detected:

cf = my_matmul.get_concrete_function(
  tf.TensorSpec([None,3]),
  tf.TensorSpec([4,5]))
Traceback (most recent call last):

ValueError: Dimensions must be equal, but are 3 and 4 for 'matmul' (op:
'MatMul') with input shapes: [?,3], [4,5].

In some cases, the inferred shape may have unknown dimensions. If the caller has additional information about the values of these dimensions, Tensor.set_shape() can be used to augment the inferred shape.

@tf.function
def my_fun(a):
  a.set_shape([5, 5])
  # the `print` executes during tracing.
  print("Result shape: ", a.shape)
  return a
cf = my_fun.get_concrete_function(
  tf.TensorSpec([None, None]))
Result shape: (5, 5)
Returns
A tf.TensorShape representing the shape of this tensor.

ref

View source

Returns a hashable reference object to this Tensor.

The primary use case for this API is to put tensors in a set/dictionary. We can't put tensors in a set/dictionary directly because tensor.__hash__() is no longer available as of TensorFlow 2.0.

The following will raise an exception in TensorFlow 2.0 and later:

x = tf.constant(5)
y = tf.constant(10)
z = tf.constant(10)
tensor_set = {x, y, z}
Traceback (most recent call last):

TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
tensor_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):

TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.

Instead, we can use tensor.ref().

tensor_set = {x.ref(), y.ref(), z.ref()}
x.ref() in tensor_set
True
tensor_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
tensor_dict[y.ref()]
'ten'

Also, the reference object provides .deref() function that returns the original Tensor.

x = tf.constant(5)
x.ref().deref()
<tf.Tensor: shape=(), dtype=int32, numpy=5>

set_shape

View source

Updates the shape of this tensor.

With eager execution this operates as a shape assertion. Here the shapes match:

t = tf.constant([[1,2,3]])
t.set_shape([1, 3])

Passing a None in the new shape allows any value for that axis:

t.set_shape([1,None])

An error is raised if an incompatible shape is passed.

t.set_shape([1,5])
Traceback (most recent call last):

ValueError: Tensor's shape (1, 3) is not compatible with supplied
shape [1, 5]

When executing in a tf.function, or building a model using tf.keras.Input, Tensor.set_shape will merge the given shape with the current shape of this tensor, and set the tensor's shape to the merged value (see tf.TensorShape.merge_with for details):

t = tf.keras.Input(shape=[None, None, 3])
print(t.shape)
(None, None, None, 3)

Dimensions set to None are not updated:

t.set_shape([None, 224, 224, None])
print(t.shape)
(None, 224, 224, 3)

The main use case for this is to provide additional shape information that cannot be inferred from the graph alone.

For example, if you know all the images in a dataset have shape [28, 28, 3] you can set it with Tensor.set_shape:

@tf.function
def load_image(filename):
  raw = tf.io.read_file(filename)
  image = tf.image.decode_png(raw, channels=3)
  # the `print` executes during tracing.
  print("Initial shape: ", image.shape)
  image.set_shape([28, 28, 3])
  print("Final shape: ", image.shape)
  return image

Trace the function; see the Concrete Functions Guide for details.

cf = load_image.get_concrete_function(
    tf.TensorSpec([], dtype=tf.string))
Initial shape:  (None, None, 3)
Final shape: (28, 28, 3)

Similarly, the tf.io.parse_tensor function could return a tensor with any shape, even one whose tf.rank is unknown. If you know that all your serialized tensors will be 2-D, set it with set_shape:

@tf.function
def my_parse(string_tensor):
  result = tf.io.parse_tensor(string_tensor, out_type=tf.float32)
  # the `print` executes during tracing.
  print("Initial shape: ", result.shape)
  result.set_shape([None, None])
  print("Final shape: ", result.shape)
  return result

Trace the function

concrete_parse = my_parse.get_concrete_function(
    tf.TensorSpec([], dtype=tf.string))
Initial shape:  <unknown>
Final shape:  (None, None)

Make sure it works:

t = tf.ones([5,3], dtype=tf.float32)
serialized = tf.io.serialize_tensor(t)
print(serialized.dtype)
<dtype: 'string'>
print(serialized.shape)
()
t2 = concrete_parse(serialized)
print(t2.shape)
(5, 3)
# Serialize a rank-3 tensor
t = tf.ones([5,5,5], dtype=tf.float32)
serialized = tf.io.serialize_tensor(t)
# The function still runs, even though it `set_shape([None,None])`
t2 = concrete_parse(serialized)
print(t2.shape)
(5, 5, 5)
Args
shape A TensorShape representing the shape of this tensor, a TensorShapeProto, a list, a tuple, or None.
Raises
ValueError If shape is not compatible with the current shape of this tensor.

__abs__

View source

Computes the absolute value of a tensor.

Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\). For example:
x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
tf.abs(x)
<tf.Tensor: shape=(2, 1), dtype=float64, numpy=
array([[5.25594901],
       [6.60492241]])>
Args
x A Tensor or SparseTensor of type float16, float32, float64, int32, int64, complex64 or complex128.
name A name for the operation (optional).
Returns
A Tensor or SparseTensor of the same size, type and sparsity as x, with absolute values. Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.abs(x.values, ...), x.dense_shape)

__add__

View source

The operation invoked by the Tensor.__add__ operator.

Purpose in the API:

This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__add__ to allow it to handle custom composite tensors & other custom objects.

The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation.
Args
x The left-hand side of the + operator.
y The right-hand side of the + operator.
name an optional name for the operation.
Returns
The result of the elementwise + operation.
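In practice this is what the + operator on tensors resolves to, e.g. (a small sketch):

x = tf.constant([1, 2, 3])
y = tf.constant([10, 20, 30])
x + y
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([11, 22, 33], dtype=int32)>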

__and__

View source

Logical AND function.

The operation works for the following input types:

  • Two single elements of type bool
  • One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical AND with the single element to each element in the larger Tensor.
  • Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical AND of the two input tensors.

Usage:

a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True, False])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False,  True])>
Args
x A tf.Tensor of type bool.
y A tf.Tensor of type bool.
name A name for the operation (optional).
Returns
A tf.Tensor of type bool with the same size as that of x or y.

__bool__

View source

Dummy method to prevent a tensor from being used as a Python bool.

This overload raises a TypeError when the user inadvertently treats a Tensor as a boolean (most commonly in an if or while statement), in code that was not converted by AutoGraph. For example:

if tf.constant(True):  # Will raise.
  # ...

if tf.constant(5) < tf.constant(7):  # Will raise.
  # ...
Raises
TypeError.

__div__

View source

Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)

Note: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics.

This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer.

Args
x Tensor numerator of real numeric type.
y Tensor denominator of real numeric type.
name A name for the operation (optional).
Returns
x / y returns the quotient of x and y.

__eq__

View source

The operation invoked by the Tensor.__eq__ operator.

Compares two tensors element-wise for equality if they are broadcast-compatible; or returns False if they are not broadcast-compatible. (Note that this behavior differs from tf.math.equal, which raises an exception if the two tensors are not broadcast-compatible.)

Purpose in the API:

This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__eq__ to allow it to handle custom composite tensors & other custom objects.

The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation.

Args
self The left-hand side of the == operator.
other The right-hand side of the == operator.
Returns
The result of the elementwise == operation, or False if the arguments are not broadcast-compatible.
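For example, a short sketch of both behaviors:

x = tf.constant([1, 2, 3])
x == tf.constant([1, 0, 3])          # broadcast-compatible: element-wise
<tf.Tensor: shape=(3,), dtype=bool, numpy=array([ True, False,  True])>
x == tf.constant([1, 2])             # not broadcast-compatible
False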

__floordiv__

View source

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.

Args
x Tensor numerator of real numeric type.
y Tensor denominator of real numeric type.
name A name for the operation (optional).
Returns
x / y rounded down.
Raises
TypeError If the inputs are complex.
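For example, note the rounding toward the most negative integer for negative operands (a small sketch):

x = tf.constant([7, -7])
y = tf.constant([2, 2])
x // y
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([ 3, -4], dtype=int32)>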

__ge__

Returns the truth value of (x >= y) element-wise.

Note: math.greater_equal supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__getitem__

View source

Overload for Tensor.__getitem__.

This operation extracts the specified region from the tensor. The notation is similar to NumPy, with the restriction that it currently supports only basic indexing. That means that using a non-scalar tensor as input is not currently allowed.

Some useful examples:

# Strip leading and trailing 2 elements
foo = tf.constant([1,2,3,4,5,6])
print(foo[2:-2].numpy())  # => [3,4]

# Skip every other row and reverse the order of the columns
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[::2,::-1].numpy())  # => [[3,2,1], [9,8,7]]

# Use scalar tensors as indices on both dimensions
print(foo[tf.constant(0), tf.constant(2)].numpy())  # => 3

# Insert another dimension
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].numpy()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[:, tf.newaxis, :].numpy()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
print(foo[:, :, tf.newaxis].numpy()) # => [[[1],[2],[3]], [[4],[5],[6]],
[[7],[8],[9]]]

# Ellipses (3 equivalent operations)
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].numpy())  # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis, ...].numpy())  # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis].numpy())  # => [[[1,2,3], [4,5,6], [7,8,9]]]

# Masks
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[foo > 2].numpy())  # => [3, 4, 5, 6, 7, 8, 9]

Notes:

  • tf.newaxis is None as in NumPy.
  • An implicit ellipsis is placed at the end of the slice_spec
  • NumPy advanced indexing is currently not supported.

Purpose in the API:

This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__getitem__ to allow it to handle custom composite tensors & other custom objects.

The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation.

Args
tensor An ops.Tensor object.
slice_spec The arguments to Tensor.__getitem__.
var In the case of variable slice assignment, the Variable object to slice (i.e. tensor is the read-only view of this variable).
Returns
The appropriate slice of "tensor", based on "slice_spec".
Raises
ValueError If a slice range is negative size.
TypeError If the slice indices aren't int, slice, ellipsis, tf.newaxis or scalar int32/int64 tensors.

__gt__

Returns the truth value of (x > y) element-wise.

Note: math.greater supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__invert__

Returns the truth value of NOT x element-wise.

Example:

tf.math.logical_not(tf.constant([True, False]))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False,  True])>
Args
x A Tensor of type bool.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__iter__

View source
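No docstring is provided upstream. As a sketch of the assumed behavior: iterating a tensor yields its slices along the first axis (the first dimension must be known).

t = tf.constant([[1, 2], [3, 4]])
for row in t:
  print(row.numpy())
[1 2]
[3 4]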

__le__

Returns the truth value of (x <= y) element-wise.

Note: math.less_equal supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__len__

View source

__lt__

Returns the truth value of (x < y) element-wise.

Note: math.less supports broadcasting. More about broadcasting here

Example:

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
Args
x A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__matmul__

View source

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flag to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

A simple 2-D tensor matrix multiplication:

a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a  # 2-D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b  # 2-D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7,  8],
       [ 9, 10],
       [11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c  # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58,  64],
       [139, 154]], dtype=int32)>

A batch matrix multiplication with batch shape [2]:

a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a  # 3-D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  2,  3],
        [ 4,  5,  6]],
       [[ 7,  8,  9],
        [10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b  # 3-D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
        [15, 16],
        [17, 18]],
       [[19, 20],
        [21, 22],
        [23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c  # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
        [229, 244]],
       [[508, 532],
        [697, 730]]], dtype=int32)>

Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args
a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
b tf.Tensor with same type and rank as a.
transpose_a If True, a is transposed before multiplication.
transpose_b If True, b is transposed before multiplication.
adjoint_a If True, a is conjugated and transposed before multiplication.
adjoint_b If True, b is conjugated and transposed before multiplication.
a_is_sparse If True, a is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication.
b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in b are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication.
name Name for the operation (optional).
Returns
A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

Note This is matrix product, not element-wise product.
Raises
ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

__mod__

View source

Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.

Note: math.floormod supports broadcasting. More about broadcasting here
Args
x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor. Has the same type as x.
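For example, with flooring semantics the result takes the sign of the divisor (a small sketch):

x = tf.constant([7, -7])
y = tf.constant([3, 3])
x % y
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>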

__mul__

View source

Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".
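In other words, the * operator performs element-wise multiplication, e.g. (a small sketch):

x = tf.constant([1, 2, 3])
x * x
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([1, 4, 9], dtype=int32)>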

__ne__

View source

The operation invoked by the Tensor.__ne__ operator.

Compares two tensors element-wise for inequality if they are broadcast-compatible; or returns True if they are not broadcast-compatible. (Note that this behavior differs from tf.math.not_equal, which raises an exception if the two tensors are not broadcast-compatible.)

Purpose in the API:

This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__ne__ to allow it to handle custom composite tensors & other custom objects.

The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation.

Args
self The left-hand side of the != operator.
other The right-hand side of the != operator.
Returns
The result of the elementwise != operation, or True if the arguments are not broadcast-compatible.

__neg__

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int8, int16, int32, int64, complex64, complex128.
name A name for the operation (optional).
Returns
A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.negative(x.values, ...), x.dense_shape)
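For example (a small sketch):

-tf.constant([1, -2, 3])
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([-1,  2, -3], dtype=int32)>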

__nonzero__

View source

Dummy method to prevent a tensor from being used as a Python bool.

This is the Python 2.x counterpart to __bool__() above.

Raises
TypeError.

__or__

View source

Returns the truth value of x OR y element-wise.

Note: math.logical_or supports broadcasting. More about broadcasting here
Args
x A Tensor of type bool.
y A Tensor of type bool.
name A name for the operation (optional).
Returns
A Tensor of type bool.
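For example (a small sketch):

a = tf.constant([True, False, False])
b = tf.constant([False, False, True])
a | b
<tf.Tensor: shape=(3,), dtype=bool, numpy=array([ True, False,  True])>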

__pow__

View source

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]
Args
x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
name A name for the operation (optional).
Returns
A Tensor.

__radd__

View source

The operation invoked by the Tensor.__add__ operator.

Purpose in the API:

This method is exposed in TensorFlow's API so that library developers can register dispatching for Tensor.__add__ to allow it to handle custom composite tensors & other custom objects.

The API symbol is not intended to be called by users directly and does appear in TensorFlow's generated documentation.
Args
x The left-hand side of the + operator.
y The right-hand side of the + operator.
name an optional name for the operation.
Returns
The result of the elementwise + operation.

__rand__

View source

Logical AND function.

The operation works for the following input types:

  • Two single elements of type bool
  • One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical AND with the single element to each element in the larger Tensor.
  • Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical AND of the two input tensors.

Usage:

a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True, False])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False,  True])>
Args
x A tf.Tensor of type bool.
y A tf.Tensor of type bool.
name A name for the operation (optional).
Returns
A tf.Tensor of type bool with the same size as that of x or y.

__rdiv__

View source

Divides x / y elementwise (using Python 2 division operator semantics). (deprecated)

Note: Prefer using the Tensor division operator or tf.divide which obey Python 3 division operator semantics.

This function divides x and y, forcing Python 2 semantics. That is, if x and y are both integers then the result will be an integer. This is in contrast to Python 3, where division with / is always a float while division with // is always an integer.

Args
x Tensor numerator of real numeric type.
y Tensor denominator of real numeric type.
name A name for the operation (optional).
Returns
x / y returns the quotient of x and y.

__rfloordiv__

View source

Divides x / y elementwise, rounding toward the most negative integer.

The same as tf.compat.v1.div(x,y) for integers, but uses tf.floor(tf.compat.v1.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.

x and y must have the same type, and the result will have the same type as well.

Args
x Tensor numerator of real numeric type.
y Tensor denominator of real numeric type.
name A name for the operation (optional).
Returns
x / y rounded down.
Raises
TypeError If the inputs are complex.

__rmatmul__

View source

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flag to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

A simple 2-D tensor matrix multiplication:

a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
a  # 2-D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
b  # 2-D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7,  8],
       [ 9, 10],
       [11, 12]], dtype=int32)>
c = tf.matmul(a, b)
c  # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58,  64],
       [139, 154]], dtype=int32)>

A batch matrix multiplication with batch shape [2]:

a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
a  # 3-D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  2,  3],
        [ 4,  5,  6]],
       [[ 7,  8,  9],
        [10, 11, 12]]], dtype=int32)>
b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
b  # 3-D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
        [15, 16],
        [17, 18]],
       [[19, 20],
        [21, 22],
        [23, 24]]], dtype=int32)>
c = tf.matmul(a, b)
c  # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
        [229, 244]],
       [[508, 532],
        [697, 730]]], dtype=int32)>

Since python >= 3.5 the @ operator is supported (see PEP 465). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

d = a @ b @ [[10], [11]]
d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Args
a tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
b tf.Tensor with same type and rank as a.
transpose_a If True, a is transposed before multiplication.
transpose_b If True, b is transposed before multiplication.
adjoint_a If True, a is conjugated and transposed before multiplication.
adjoint_b If True, b is conjugated and transposed before multiplication.
a_is_sparse If True, a is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication.
b_is_sparse If True, b is treated as a sparse matrix. Notice, this does not support tf.sparse.SparseTensor, it just makes optimizations that assume most values in b are zero. See tf.sparse.sparse_dense_matmul for some support for tf.sparse.SparseTensor multiplication.
name Name for the operation (optional).
Returns
A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j]), for all indices i, j.

Note This is matrix product, not element-wise product.
Raises
ValueError If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

__rmod__

View source

Returns element-wise remainder of division. When x < 0 xor y < 0 is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. floor(x / y) * y + mod(x, y) = x.

Note: math.floormod supports broadcasting. More about broadcasting here
Args
x A Tensor. Must be one of the following types: int32, int64, uint64, bfloat16, half, float32, float64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor. Has the same type as x.

__rmul__

View source

Dispatches cwise mul for "Dense*Dense" and "Dense*Sparse".

__ror__

View source

Returns the truth value of x OR y element-wise.

Note: math.logical_or supports broadcasting. More about broadcasting here
Args
x A Tensor of type bool.
y A Tensor of type bool.
name A name for the operation (optional).
Returns
A Tensor of type bool.

__rpow__

View source

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]
Args
x A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
y A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
name A name for the operation (optional).
Returns
A Tensor.

__rsub__

View source

Returns x - y element-wise.

Note: Subtract supports broadcasting. More about broadcasting here
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor. Has the same type as x.

__rtruediv__

View source

Divides x / y elementwise (using Python 3 division operator semantics).

Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics.

This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.

x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).

Args
x Tensor numerator of numeric type.
y Tensor denominator of numeric type.
name A name for the operation (optional).
Returns
x / y evaluated in floating point.
Raises
TypeError If x and y have different dtypes.

__rxor__

View source

Logical XOR function.

x ^ y = (x | y) & ~(x & y)

The operation works for the following input types:

  • Two single elements of type bool
  • One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical XOR with the single element to each element in the larger Tensor.
  • Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical XOR of the two input tensors.

Usage:

a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False,  True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True, False])>
Args
x A tf.Tensor of type bool.
y A tf.Tensor of type bool.
name A name for the operation (optional).
Returns
A tf.Tensor of type bool with the same size as that of x or y.

__sub__

View source

Returns x - y element-wise.

Note: Subtract supports broadcasting. More about broadcasting here
Args
x A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128, uint32.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns
A Tensor. Has the same type as x.
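For example (a small sketch):

x = tf.constant([5, 5, 5])
y = tf.constant([1, 2, 3])
x - y
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([4, 3, 2], dtype=int32)>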

__truediv__

View source

Divides x / y elementwise (using Python 3 division operator semantics).

Note: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics.

This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.

x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).

Args
x Tensor numerator of numeric type.
y Tensor denominator of numeric type.
name A name for the operation (optional).
Returns
x / y evaluated in floating point.
Raises
TypeError If x and y have different dtypes.
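For example, integer inputs are cast to a floating type before dividing (a small sketch):

x = tf.constant([1, 2, 3])       # int32
x / 2
<tf.Tensor: shape=(3,), dtype=float64, numpy=array([0.5, 1. , 1.5])>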

__xor__

View source

Logical XOR function.

x ^ y = (x | y) & ~(x & y)

The operation works for the following input types:

  • Two single elements of type bool
  • One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical XOR with the single element to each element in the larger Tensor.
  • Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical XOR of the two input tensors.

Usage:

a = tf.constant([True])
b = tf.constant([False])
tf.math.logical_xor(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([ True])>
c = tf.constant([True])
x = tf.constant([False, True, True, False])
tf.math.logical_xor(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([ True, False, False,  True])>
y = tf.constant([False, False, True, True])
z = tf.constant([False, True, False, True])
tf.math.logical_xor(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True, False])>
Args
x A tf.Tensor of type bool.
y A tf.Tensor of type bool.
name A name for the operation (optional).
Returns
A tf.Tensor of type bool with the same size as that of x or y.

Class Variables

  • OVERLOADABLE_OPERATORS

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/Tensor