Scales per-example losses with sample_weights and computes their average.
```python
tf.nn.compute_average_loss(
    per_example_loss, sample_weight=None, global_batch_size=None
)
```
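As a quick illustration of what the function computes, here is a minimal sketch outside of any distribution strategy; the input values and the explicit `global_batch_size` of 3 are assumptions chosen only for demonstration:

```python
import tensorflow as tf

# Hypothetical per-example losses and weights.
per_example_loss = tf.constant([2.0, 4.0, 6.0])
sample_weight = tf.constant([1.0, 0.5, 1.0])

# Each loss is scaled by its weight and the sum is divided by the global
# batch size: (2*1.0 + 4*0.5 + 6*1.0) / 3 = 10 / 3 ≈ 3.33.
loss = tf.nn.compute_average_loss(
    per_example_loss,
    sample_weight=sample_weight,
    global_batch_size=3)
print(loss.numpy())
```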
Usage with distribution strategy and custom training loop:
```python
with strategy.scope():
  def compute_loss(labels, predictions, sample_weight=None):
    # If you are using a `Loss` class instead, set reduction to `NONE` so that
    # we can do the reduction afterwards and divide by global batch size.
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)

    # Compute loss that is scaled by sample_weight and by global batch size.
    return tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)
```
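The snippet above only defines the loss function. The sketch below shows one plausible way to call it from a step function executed with `Strategy.run`; the model, optimizer, and `GLOBAL_BATCH_SIZE` value are assumptions added for illustration, not part of the original example:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64  # assumed value for illustration

with strategy.scope():
  # A toy model that outputs probabilities, matching the use of
  # sparse_categorical_crossentropy without from_logits above.
  model = tf.keras.Sequential(
      [tf.keras.layers.Dense(10, activation='softmax')])
  optimizer = tf.keras.optimizers.SGD()

  def compute_loss(labels, predictions, sample_weight=None):
    per_example_loss = tf.keras.losses.sparse_categorical_crossentropy(
        labels, predictions)
    return tf.nn.compute_average_loss(
        per_example_loss,
        sample_weight=sample_weight,
        global_batch_size=GLOBAL_BATCH_SIZE)

@tf.function
def train_step(features, labels):
  def step_fn(features, labels):
    with tf.GradientTape() as tape:
      predictions = model(features, training=True)
      loss = compute_loss(labels, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

  # Each replica computes its share of the loss over the global batch;
  # summing the per-replica values recovers the global average.
  per_replica_losses = strategy.run(step_fn, args=(features, labels))
  return strategy.reduce(
      tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)
```

In a real loop, `features` and `labels` would typically come from a dataset batched with `GLOBAL_BATCH_SIZE` and distributed with `strategy.experimental_distribute_dataset`, which splits each global batch across replicas.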
Args | Description
---|---
`per_example_loss` | Per-example loss.
`sample_weight` | Optional weighting for each example.
`global_batch_size` | Optional global batch size value. Defaults to (size of the first dimension of `per_example_loss`) * (number of replicas); see the example below.
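To make the default concrete: outside of a distribution strategy there is a single replica, so the default global batch size is simply the length of `per_example_loss`. The values below are hypothetical:

```python
import tensorflow as tf

# Hypothetical losses; with one replica the default global batch size is 4.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])

default_avg = tf.nn.compute_average_loss(per_example_loss)
explicit_avg = tf.nn.compute_average_loss(per_example_loss, global_batch_size=4)
# Both give (1 + 2 + 3 + 4) / 4 = 2.5 in this single-replica setting.
```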
Returns | Description
---|---
Scalar loss value.
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/nn/compute_average_loss