An optimizer that applies loss scaling to prevent numeric underflow.
Inherits From: Optimizer
```
tf.keras.mixed_precision.LossScaleOptimizer(
    inner_optimizer, dynamic=True, initial_scale=None, dynamic_growth_steps=None
)
```
Loss scaling is a technique to prevent numeric underflow in intermediate gradients when float16 is used. To prevent underflow, the loss is multiplied (or "scaled") by a certain factor called the "loss scale", which causes intermediate gradients to be scaled by the loss scale as well. The final gradients are divided (or "unscaled") by the loss scale to bring them back to their original value.
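As a rough illustration of why scaling helps (this sketch is not part of the API), a gradient below float16's smallest representable positive value rounds to zero, while the same value multiplied by a typical loss scale stays representable:

```
import tensorflow as tf

# A gradient this small underflows to 0 in float16 (smallest subnormal ~6e-8).
tiny_grad = tf.constant(1e-8, dtype=tf.float16)
print(tiny_grad.numpy())  # 0.0 -- the gradient information is lost

# Scaling by 2 ** 15 (the default initial loss scale) keeps it representable.
scaled_grad = tf.constant(1e-8 * 2.0 ** 15, dtype=tf.float16)
print(scaled_grad.numpy())  # ~0.000328

# Unscaling in float32 recovers (approximately) the original value.
print((tf.cast(scaled_grad, tf.float32) / 2.0 ** 15).numpy())  # ~1e-8
```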
`LossScaleOptimizer` wraps another optimizer and applies loss scaling to it. By default, the loss scale is dynamically updated over time, so you do not have to choose the loss scale. The `minimize` method automatically scales the loss, unscales the gradients, and updates the loss scale, so all you have to do is wrap your optimizer with a `LossScaleOptimizer` if you use `minimize`. For example:
```
opt = tf.keras.optimizers.SGD(0.25)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
var = tf.Variable(1.)
loss_fn = lambda: var ** 2
# 'minimize' applies loss scaling and updates the loss scale.
opt.minimize(loss_fn, var_list=var)
var.numpy()  # 0.5
```
If a `tf.GradientTape` is used to compute gradients instead of `minimize`, you must scale the loss and gradients manually. This can be done with the `LossScaleOptimizer.get_scaled_loss` and `LossScaleOptimizer.get_unscaled_gradients` methods. For example:
```
with tf.GradientTape() as tape:
    loss = loss_fn()
    scaled_loss = opt.get_scaled_loss(loss)
scaled_grad = tape.gradient(scaled_loss, var)
(grad,) = opt.get_unscaled_gradients([scaled_grad])
opt.apply_gradients([(grad, var)])  # Loss scale is updated here
var.numpy()  # 0.25
```
When mixed precision with float16 is used, there is typically no risk of underflow affecting model quality if loss scaling is properly used. See the mixed precision guide for more information on how to use mixed precision.
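Putting the pieces together, the pattern from the mixed precision guide can be sketched as a self-contained custom training step. The tiny model, loss, data, and learning rate below are placeholder assumptions, not part of this API's documentation:

```
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')

# Keep the final layer in float32 so the loss is computed in float32.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, dtype='float32'),
])
loss_obj = tf.keras.losses.MeanSquaredError()
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD(0.1))

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_obj(y, pred)
        scaled_loss = opt.get_scaled_loss(loss)
    scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
    grads = opt.get_unscaled_gradients(scaled_grads)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal([8, 4])
y = tf.random.normal([8, 1])
train_step(x, y)
```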
Args | |
---|---|
inner_optimizer | The tf.keras.optimizers.Optimizer instance to wrap. |
dynamic | Bool indicating whether dynamic loss scaling is used. Defaults to True. If True, the loss scale will be dynamically updated over time using an algorithm that keeps the loss scale at approximately its optimal value. If False, a single fixed loss scale is used and initial_scale must be specified, which is used as the loss scale. It is recommended to keep this as True, as choosing a fixed loss scale can be tricky. Currently, there is a small performance overhead to dynamic loss scaling compared to fixed loss scaling. |
initial_scale | The initial loss scale. If dynamic is True, this defaults to 2 ** 15. If dynamic is False, this must be specified and acts as the sole loss scale, as the loss scale does not change over time. When dynamic loss scaling is used, it is better for this to be a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised. |
dynamic_growth_steps | With dynamic loss scaling, every dynamic_growth_steps steps with finite gradients, the loss scale is doubled. Defaults to 2000. If a nonfinite gradient is encountered, the count is reset back to zero, gradients are skipped that step, and the loss scale is halved. The count can be queried with LossScaleOptimizer.dynamic_counter. This argument can only be specified if dynamic is True. |
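If fixed loss scaling is preferred, a minimal sketch of passing `dynamic=False` with an assumed scale of 1024 looks like this:

```
import tensorflow as tf

opt = tf.keras.optimizers.SGD(0.25)
opt = tf.keras.mixed_precision.LossScaleOptimizer(
    opt, dynamic=False, initial_scale=1024)
print(opt.dynamic)             # False
print(opt.loss_scale.numpy())  # 1024.0 -- stays constant over time
print(opt.dynamic_counter)     # None, since dynamic loss scaling is off
```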
`LossScaleOptimizer` will occasionally skip applying gradients to the variables, in which case the trainable variables will not change that step. This is done because the dynamic loss scale will sometimes be raised too high, causing overflow in the gradients. Typically, the first 2 to 15 steps of the model are skipped as the initial loss scale is very high, but afterwards steps will only be skipped on average 0.05% of the time (the fraction of steps skipped is 1 / `dynamic_growth_steps`).
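The skipping behavior can be observed directly. The following sketch (with an assumed initial scale of 16 to keep the numbers small) feeds a nonfinite gradient to `apply_gradients`; per the behavior described above, the variable is left unchanged and the loss scale is halved:

```
import tensorflow as tf

opt = tf.keras.optimizers.SGD(1.0)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt, initial_scale=16)
var = tf.Variable(1.)
# A nonfinite gradient causes this step to be skipped.
opt.apply_gradients([(tf.constant(float('inf')), var)])
print(var.numpy())             # 1.0 -- the variable was not updated
print(opt.loss_scale.numpy())  # 8.0 -- the loss scale was halved
```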
`LossScaleOptimizer` delegates all public `Optimizer` methods to the inner optimizer. Additionally, in the methods `minimize` and `get_gradients`, it scales the loss and unscales the gradients. In the methods `minimize` and `apply_gradients`, it additionally updates the loss scale and skips applying gradients if any gradient has a nonfinite value.
Hyperparameters can be accessed and set on the LossScaleOptimizer; such accesses are delegated to the wrapped optimizer.
```
opt = tf.keras.optimizers.Adam(beta_1=0.8, epsilon=1e-5)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
opt.beta_1  # Equivalent to `opt.inner_optimizer.beta_1`; returns 0.8
opt.beta_1 = 0.7  # Equivalent to `opt.inner_optimizer.beta_1 = 0.7`
opt.beta_1  # 0.7
opt.inner_optimizer.beta_1  # 0.7
```
However, accessing or setting non-hyperparameters is not delegated to the inner optimizer. In an Adam optimizer, `beta_1` is a hyperparameter but `epsilon` is not, as the Adam optimizer only calls `Optimizer._set_hyper` on `beta_1`.
```
opt.inner_optimizer.epsilon  # 1e-5
opt.epsilon
# Traceback (most recent call last):
# AttributeError: 'LossScaleOptimizer' object has no attribute 'epsilon'
opt.epsilon = 1e-4  # This does NOT set epsilon on `opt.inner_optimizer`
opt.inner_optimizer.epsilon  # 1e-5
```
In the above example, despite `epsilon` being set on the LossScaleOptimizer, the old epsilon value will still be used during training, because `epsilon` was never set on the inner optimizer.
Raises | |
---|---|
ValueError | in case of any invalid argument. |
Attributes | |
---|---|
dynamic | Bool indicating whether dynamic loss scaling is used. |
dynamic_counter | The number of steps since the loss scale was last increased or decreased. This is None if LossScaleOptimizer.dynamic is False. The counter is incremented every step. Once it reaches LossScaleOptimizer.dynamic_growth_steps, the loss scale will be doubled and the counter will be reset back to zero. If nonfinite gradients are encountered, the loss scale will be halved and the counter will be reset back to zero. |
dynamic_growth_steps | The number of steps it takes to increase the loss scale. This is None if LossScaleOptimizer.dynamic is False. Every dynamic_growth_steps consecutive steps with finite gradients, the loss scale is increased. |
initial_scale | The initial loss scale. If LossScaleOptimizer.dynamic is False, this is the same number as LossScaleOptimizer.loss_scale, as the loss scale never changes. |
inner_optimizer | The optimizer that this LossScaleOptimizer is wrapping. |
loss_scale | The current loss scale as a float32 scalar tensor. |
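A small sketch of reading these attributes after a single step with finite gradients (the printed values assume the defaults described above):

```
import tensorflow as tf

opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD(1.0))
var = tf.Variable(1.)
opt.minimize(lambda: var ** 2, var_list=[var])
print(opt.dynamic)                  # True
print(opt.initial_scale)            # 32768.0 (2 ** 15)
print(opt.dynamic_growth_steps)     # 2000
print(opt.dynamic_counter.numpy())  # 1 -- one finite-gradient step so far
print(opt.loss_scale.numpy())       # 32768.0 -- doubles after 2000 such steps
```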
get_scaled_loss
```
get_scaled_loss(
    loss
)
```
Scales the loss by the loss scale.
This method is only needed if you compute gradients manually, e.g. with `tf.GradientTape`. In that case, call this method to scale the loss before passing the loss to `tf.GradientTape`. If you use `LossScaleOptimizer.minimize` or `LossScaleOptimizer.get_gradients`, loss scaling is automatically applied and this method is unneeded.
If this method is called, `get_unscaled_gradients` should also be called. See the `tf.keras.mixed_precision.LossScaleOptimizer` doc for an example.
Args | |
---|---|
loss | The loss, which will be multiplied by the loss scale. Can either be a tensor or a callable returning a tensor. |
Returns | |
---|---|
The loss multiplied by LossScaleOptimizer.loss_scale. |
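A minimal usage sketch, assuming the default initial scale of 2 ** 15 and that no steps have yet changed the scale:

```
import tensorflow as tf

opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())
scaled = opt.get_scaled_loss(tf.constant(1.0))
print(scaled.numpy())  # 32768.0, i.e. 1.0 * 2 ** 15
```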
get_unscaled_gradients
```
get_unscaled_gradients(
    grads
)
```
Unscales the gradients by the loss scale.
This method is only needed if you compute gradients manually, e.g. with `tf.GradientTape`. In that case, call this method to unscale the gradients after computing them with `tf.GradientTape`. If you use `LossScaleOptimizer.minimize` or `LossScaleOptimizer.get_gradients`, loss scaling is automatically applied and this method is unneeded.
If this method is called, `get_scaled_loss` should also be called. See the `tf.keras.mixed_precision.LossScaleOptimizer` doc for an example.
Args | |
---|---|
grads | A list of tensors, each which will be divided by the loss scale. Can have None values, which are ignored. |
Returns | |
---|---|
A new list the same size as grads, where every non-None value in grads is divided by LossScaleOptimizer.loss_scale. |
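A minimal usage sketch, again assuming the default scale of 2 ** 15; None entries pass through untouched:

```
import tensorflow as tf

opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())
grads = [tf.constant(2.0 ** 15), None]
unscaled = opt.get_unscaled_gradients(grads)
print(unscaled[0].numpy())  # 1.0 -- divided by the loss scale
print(unscaled[1])          # None -- ignored
```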
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/keras/mixed_precision/LossScaleOptimizer