tf.compat.v1.SparseConditionalAccumulator

A conditional accumulator for aggregating sparse gradients.

Inherits From: ConditionalAccumulatorBase

Sparse gradients are represented by IndexedSlices.

Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator.

Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
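
A minimal construction sketch, assuming TF1-style graph mode; the dtype, shape, and shared_name values here are illustrative, not prescribed by the API:

  import tensorflow.compat.v1 as tf
  tf.disable_eager_execution()

  # Accumulate float32 sparse gradients for a [3, 2] tensor. The default
  # reduction_type "MEAN" divides the aggregate by the number of
  # gradients when the average is taken.
  acc = tf.SparseConditionalAccumulator(
      dtype=tf.float32,
      shape=tf.TensorShape([3, 2]),
      shared_name="sparse_acc")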

Args
dtype Datatype of the accumulated gradients.
shape Shape of the accumulated gradients.
shared_name Optional. If non-empty, this accumulator will be shared under the given name across multiple sessions.
name Optional name for the accumulator.
reduction_type Reduction type to use when taking the gradient.
Attributes
accumulator_ref A handle to the underlying conditional accumulator, created by subclasses.
dtype The datatype of the gradients accumulated by this accumulator.
name The name of the underlying accumulator.

Methods

apply_grad

Attempts to apply a sparse gradient to the accumulator.

The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.

A sparse gradient is represented by its indices, values, and (possibly empty or None) shape. The indices must be a vector of the locations of non-zero entries in the tensor. The values are the non-zero slices of the gradient and must have the same first dimension as the indices, i.e., the number of non-zero entries (nnz) represented by indices and values must be consistent. The shape, if not empty or None, must be consistent with the accumulator's shape (if also provided).

Example:

A tensor [[0, 0], [0, 1], [2, 3]] can be represented as:

  indices: [1, 2]
  values: [[0, 1], [2, 3]]
  shape: [3, 2]

Args
grad_indices Indices of the sparse gradient to be applied.
grad_values Values of the sparse gradient to be applied.
grad_shape Shape of the sparse gradient to be applied.
local_step Time step at which the gradient was computed.
name Optional name for the operation.
Returns
The operation that (conditionally) applies a gradient to the accumulator.
Raises
InvalidArgumentError If grad is of the wrong shape.
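
A short sketch of applying the example gradient above; it assumes the acc accumulator from the construction sketch earlier:

  # Rows 1 and 2 of the [3, 2] tensor are non-zero.
  apply_op = acc.apply_grad(
      grad_indices=[1, 2],
      grad_values=tf.constant([[0., 1.], [2., 3.]]),
      grad_shape=[3, 2],
      local_step=0)  # accepted only if >= the accumulator's global step

  with tf.Session() as sess:
      sess.run(apply_op)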

apply_indexed_slices_grad

Attempts to apply a gradient to the accumulator.

The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.

Args
grad The gradient IndexedSlices to be applied.
local_step Time step at which the gradient was computed.
name Optional name for the operation.
Returns
The operation that (conditionally) applies a gradient to the accumulator.
Raises
InvalidArgumentError If grad is of the wrong shape.
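
Equivalently, the same gradient can be packaged as a tf.IndexedSlices and applied in one call, again assuming acc from the earlier sketch:

  grad = tf.IndexedSlices(
      values=tf.constant([[0., 1.], [2., 3.]]),
      indices=tf.constant([1, 2], dtype=tf.int64),
      dense_shape=tf.constant([3, 2], dtype=tf.int64))
  apply_op = acc.apply_indexed_slices_grad(grad, local_step=0)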

num_accumulated

Number of gradients that have currently been aggregated in the accumulator.

Args
name Optional name for the operation.
Returns
Number of accumulated gradients currently in the accumulator.
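
For example, continuing the sketch above:

  count = acc.num_accumulated()
  with tf.Session() as sess:
      print(sess.run(count))  # 0 before any gradient has been applied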

set_global_step

Sets the global time step of the accumulator.

The operation logs a warning if an attempt is made to set the global step to a time step lower than the accumulator's own.

Args
new_global_step Value of the new time step. Can be a variable or a constant.
name Optional name for the operation.
Returns
Operation that sets the accumulator's time step.
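
A sketch of advancing the time step so that gradients computed at earlier steps become stale; it assumes acc from the construction sketch:

  # After this runs, apply_grad calls with local_step < 5 are dropped.
  advance_op = acc.set_global_step(5)
  with tf.Session() as sess:
      sess.run(advance_op)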

take_grad

Attempts to extract the average gradient from the accumulator.

The operation blocks until a sufficient number of gradients have been successfully applied to the accumulator.

Once successful, the following actions are also triggered:

  • Counter of accumulated gradients is reset to 0.
  • Aggregated gradient is reset to the zero tensor.
  • Accumulator's internal time step is incremented by 1.
Args
num_required Number of gradients that need to have been aggregated.
name Optional name for the operation.
Returns
A tuple of indices, values, and shape representing the average gradient.
Raises
InvalidArgumentError If num_required < 1.
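
A sketch that applies two gradients and then takes their average; apply_op is either apply op from the earlier sketches, and the unpacking below matches the (indices, values, shape) return described above:

  indices, values, shape = acc.take_grad(num_required=2)
  with tf.Session() as sess:
      sess.run(apply_op)  # two applications satisfy num_required=2
      sess.run(apply_op)
      idx, val, shp = sess.run([indices, values, shape])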

take_indexed_slices_grad

Attempts to extract the average gradient from the accumulator.

The operation blocks until a sufficient number of gradients have been successfully applied to the accumulator.

Once successful, the following actions are also triggered:

  • Counter of accumulated gradients is reset to 0.
  • Aggregated gradient is reset to the zero tensor.
  • Accumulator's internal time step is incremented by 1.
Args
num_required Number of gradients that need to have been aggregated.
name Optional name for the operation.
Returns
An IndexedSlices holding the value of the average gradient.
Raises
InvalidArgumentError If num_required < 1.
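
Or, to get the average back as a single IndexedSlices, under the same assumptions as the previous sketch:

  avg = acc.take_indexed_slices_grad(num_required=2)
  # avg.indices, avg.values, and avg.dense_shape hold the averaged gradient.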

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/compat/v1/SparseConditionalAccumulator