Scales the sum of the given regularization losses by the number of replicas.
```python
tf.nn.scale_regularization_loss(
    regularization_loss
)
```
Usage with a distribution strategy and a custom training loop:
```python
with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    loss = tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

    # Add scaled regularization losses (`weights` here stands for the
    # model weight tensor(s) being regularized).
    loss += tf.nn.scale_regularization_loss(tf.nn.l2_loss(weights))
    return loss
```
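For context, the sketch below shows one way such a loss function might be wired into a distributed custom training loop with `Strategy.run`. The model, optimizer, `GLOBAL_BATCH_SIZE`, and the choice of L2-regularizing all trainable variables are illustrative assumptions, not part of this API's documentation.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64  # assumed global batch size across all replicas

with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
  optimizer = tf.keras.optimizers.SGD()

@tf.function
def train_step(features, labels):
  # In practice `features`/`labels` would come from a dataset distributed
  # with `strategy.experimental_distribute_dataset`.
  def step_fn(features, labels):
    with tf.GradientTape() as tape:
      predictions = model(features, training=True)
      per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
          labels, predictions, from_logits=True)
      # Average over the global batch, then add regularization scaled by
      # the number of replicas so the summed total matches a
      # single-replica run.
      loss = tf.nn.compute_average_loss(
          per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
      loss += tf.nn.scale_regularization_loss(
          tf.add_n([tf.nn.l2_loss(v) for v in model.trainable_variables]))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

  per_replica_losses = strategy.run(step_fn, args=(features, labels))
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
```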
| Args | |
|---|---|
| `regularization_loss` | Regularization loss. |
| Returns |
|---|
| Scalar loss value. |
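As a minimal illustration of the scaling, the snippet below (an assumed usage, run outside any explicit strategy scope, where the replica count is 1) shows that the result is simply the summed regularization loss divided by the number of replicas in sync.

```python
import tensorflow as tf

# Hypothetical weight tensor used purely for illustration.
weights = tf.constant([[1.0, 2.0], [3.0, 4.0]])

reg = tf.nn.l2_loss(weights)                   # 0.5 * sum(weights**2) = 15.0
scaled = tf.nn.scale_regularization_loss(reg)  # 15.0 / num_replicas (1 here)
print(float(scaled))                           # 15.0
```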