tf.mixed_precision.experimental.DynamicLossScale

Loss scale that dynamically adjusts itself.

Inherits From: LossScale

Dynamic loss scaling works by adjusting the loss scale as training progresses. The goal is to keep the loss scale as high as possible without overflowing the gradients. As long as the gradients do not overflow, raising the loss scale never hurts.

The algorithm starts by setting the loss scale to an initial value. Every N steps that the gradients are finite, the loss scale is increased by some factor. However, if a NaN or Inf gradient is found, the gradients for that step are not applied, and the loss scale is decreased by the factor. This process tends to keep the loss scale as high as possible without gradients overflowing.
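
To make the update rule concrete, here is a minimal plain-Python sketch of the algorithm described above. It is illustrative only, not the library's implementation; the default values shown match this class's constructor defaults.

# Sketch of the dynamic loss scaling rule (illustration, not TensorFlow code).
def update_loss_scale(loss_scale, num_good_steps, grads_finite,
                      increment_period=2000, multiplier=2.0):
    if grads_finite:
        num_good_steps += 1
        if num_good_steps >= increment_period:
            # N consecutive finite steps: raise the scale, reset the counter.
            loss_scale *= multiplier
            num_good_steps = 0
    else:
        # NaN/Inf gradient: lower the scale and reset the counter.
        loss_scale /= multiplier
        num_good_steps = 0
    return loss_scale, num_good_steps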

Args
initial_loss_scale A Python float. The loss scale to use at the beginning. It's better to start this at a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised. The default is 2 ** 15, which is approximately half the maximum float16 value.
increment_period A Python integer. The loss scale is increased every increment_period consecutive steps in which finite gradients are encountered. If a nonfinite gradient is encountered, the count is reset back to zero. Defaults to 2000.
multiplier A Python float. The multiplier to use when increasing or decreasing the loss scale. Defaults to 2 (see the construction example below).
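
As an illustration, a hedged construction sketch, assuming TensorFlow 2.4 (this experimental API path may not exist in other versions):

import tensorflow as tf

# Assumes TF 2.4, where tf.mixed_precision.experimental.DynamicLossScale
# is available.
loss_scale = tf.mixed_precision.experimental.DynamicLossScale(
    initial_loss_scale=2 ** 15,  # start high; too-high scales correct quickly
    increment_period=2000,       # raise the scale after 2000 finite steps
    multiplier=2.0)              # factor for raising or lowering the scale
print(float(loss_scale()))      # 32768.0
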
Attributes
increment_period
initial_loss_scale
multiplier

Methods

from_config

Creates the LossScale from its config.

get_config

Returns the config of this loss scale.
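
A brief round-trip sketch, assuming TF 2.4 and the loss_scale object constructed above:

# Serialize the configuration, then rebuild an equivalent loss scale.
config = loss_scale.get_config()
restored = tf.mixed_precision.experimental.DynamicLossScale.from_config(config)
assert restored.initial_loss_scale == loss_scale.initial_loss_scale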

update

Updates the loss scale based on whether the gradients are finite in the current step.
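
A hedged training-step sketch using update(). Here model, optimizer, and compute_loss are hypothetical stand-ins; the call assumes TF 2.4's update(grads) signature, which returns an update op and a boolean indicating whether the gradients were finite:

# One training step with manual loss scaling (eager execution assumed).
with tf.GradientTape() as tape:
    loss = compute_loss()                 # hypothetical loss computation
    scaled_loss = loss * loss_scale()     # scale up before differentiating
scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
grads = [g / loss_scale() for g in scaled_grads]  # undo the scaling
_, should_apply = loss_scale.update(grads)  # adjust scale; check finiteness
if should_apply:  # skip applying gradients on a NaN/Inf step
    optimizer.apply_gradients(zip(grads, model.trainable_variables))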

__call__

Returns the current loss scale as a scalar float32 tensor.
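
For example (reusing loss_scale from above; __call__ takes no arguments):

current = loss_scale()   # scalar float32 tensor
print(current.dtype)     # float32
print(float(current))    # e.g. 32768.0 right after construction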

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/mixed_precision/experimental/DynamicLossScale