
tf.distribute.ReductionToOneDevice

Always performs the reduction on one device first, then broadcasts the result.

Inherits From: CrossDeviceOps

Batch reduction is performed by reducing each element one by one.

Args
reduce_to_device the intermediate device to reduce to. If None, reduce to the first device in destinations of the reduce() method.
accumulation_fn a function that does accumulation. If None, then tf.math.add_n is used.
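The reduce-then-broadcast strategy can be sketched in plain Python. This is a hypothetical model, not the TensorFlow implementation: device placement is represented by plain lists, and `accumulation_fn` defaults to summation, the analogue of `tf.math.add_n`.

```python
def reduce_to_one_then_broadcast(per_replica_values, destinations,
                                 accumulation_fn=sum):
    """Model of ReductionToOneDevice: accumulate on one device, then copy out.

    per_replica_values: one value per replica (stand-in for a PerReplica).
    destinations: the devices that should receive the reduced value.
    accumulation_fn: how values are combined; defaults to summation.
    """
    # Step 1: gather all per-replica values onto one "device" and accumulate.
    reduced = accumulation_fn(per_replica_values)
    # Step 2: broadcast the reduced value to every destination, yielding one
    # copy per destination (a stand-in for a Mirrored object).
    return [reduced for _ in destinations]
```

For example, reducing the per-replica values `[1.0, 2.0, 3.0]` to two destination devices yields `[6.0, 6.0]` under this model.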

Methods

batch_reduce


Reduce PerReplica objects in a batch.

Reduces the first element of each pair in value_destination_pairs to the corresponding second element, which indicates the destinations.

Args
reduce_op Indicates how per_replica_value will be reduced. Accepted values are tf.distribute.ReduceOp.SUM and tf.distribute.ReduceOp.MEAN.
value_destination_pairs a list or a tuple of tuples of PerReplica objects (or tensors with device set if there is one device) and destinations.
Returns
a list of Mirrored objects.
Raises
ValueError if value_destination_pairs is not a list or a tuple of tuples of PerReplica objects and destinations.
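Since batch reduction here is just element-by-element reduction, batch_reduce can be sketched as a loop over the pairs. This is an illustrative pure-Python model, not the real implementation; `accumulation_fn=sum` stands in for `tf.math.add_n`.

```python
def batch_reduce_sketch(value_destination_pairs, accumulation_fn=sum):
    """Model of batch_reduce: reduce each (values, destinations) pair in turn."""
    if not isinstance(value_destination_pairs, (list, tuple)):
        raise ValueError(
            "value_destination_pairs must be a list or tuple of pairs")
    results = []
    for per_replica_values, destinations in value_destination_pairs:
        # Reduce this pair's values on one "device" ...
        reduced = accumulation_fn(per_replica_values)
        # ... then broadcast the result to this pair's destinations.
        results.append([reduced for _ in destinations])
    return results
```

Under this model, `batch_reduce_sketch([([1, 2], ["/gpu:0"]), ([3, 4], ["/gpu:0", "/gpu:1"])])` produces one reduced-and-mirrored result per input pair.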

broadcast


Broadcast the tensor to destinations.

Args
tensor the tensor to broadcast.
destinations the broadcast destinations.
Returns
a Mirrored object.

reduce


Reduce per_replica_value to destinations.

It runs the reduction operation defined by reduce_op and puts the result on destinations.

Args
reduce_op Indicates how per_replica_value will be reduced. Accepted values are tf.distribute.ReduceOp.SUM and tf.distribute.ReduceOp.MEAN.
per_replica_value a PerReplica object or a tensor with device set.
destinations the reduction destinations.
Returns
a Mirrored object.
Raises
ValueError if per_replica_value can't be converted to a PerReplica object.
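The SUM/MEAN distinction in reduce can also be sketched in plain Python. This is a hypothetical model of the semantics, not the TensorFlow code: reduce_op is modeled as a string, whereas the real API takes tf.distribute.ReduceOp values.

```python
def reduce_sketch(reduce_op, per_replica_value, destinations):
    """Model of reduce(): combine per-replica values, then mirror the result.

    reduce_op: "SUM" or "MEAN" (stand-ins for tf.distribute.ReduceOp values).
    """
    total = sum(per_replica_value)
    if reduce_op == "MEAN":
        # MEAN divides the sum by the number of replicas.
        total /= len(per_replica_value)
    elif reduce_op != "SUM":
        raise ValueError("reduce_op must be SUM or MEAN")
    # Place one copy of the result on each destination device.
    return [total for _ in destinations]
```

For example, a MEAN reduction of per-replica values `[2.0, 4.0]` places `3.0` on each destination, while a SUM reduction places `6.0`.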

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/distribute/ReductionToOneDevice