
tf.keras.losses.Reduction

Types of loss reduction.

Contains the following values:

  • AUTO: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, the reduction value is expected to be SUM or NONE; using AUTO in that case will raise an error.
  • NONE: Weighted losses with one dimension reduced (axis=-1, or the axis specified by the loss function). When this reduction type is used with built-in Keras training loops like fit/evaluate, the unreduced vector loss is passed to the optimizer, but the reported loss will be a scalar value.
  • SUM: Scalar sum of weighted losses.
  • SUM_OVER_BATCH_SIZE: Scalar SUM divided by the number of elements in the losses. This reduction type is not supported when used with tf.distribute.Strategy outside of built-in training loops like tf.keras compile/fit.

    You can implement 'SUM_OVER_BATCH_SIZE' using the global batch size like:

    with strategy.scope():
      loss_obj = tf.keras.losses.CategoricalCrossentropy(
          reduction=tf.keras.losses.Reduction.NONE)
      ...
      loss = tf.reduce_sum(loss_obj(labels, predictions)) * (
          1. / global_batch_size)

Please see the custom training guide for more details on this.
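
To make the difference between the three concrete reduction types tangible, here is a minimal sketch; the MeanSquaredError loss and the sample values are illustrative assumptions, not part of this page:

    import tensorflow as tf

    y_true = tf.constant([[0., 1.], [1., 1.]])
    y_pred = tf.constant([[1., 1.], [0., 0.]])

    # NONE keeps one weighted loss value per example (only axis=-1 is reduced).
    mse_none = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.NONE)
    print(mse_none(y_true, y_pred))   # shape (2,): [0.5, 1.0]

    # SUM returns the scalar sum of those per-example losses: 1.5
    mse_sum = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM)
    print(mse_sum(y_true, y_pred))

    # SUM_OVER_BATCH_SIZE divides that sum by the number of loss elements: 0.75
    mse_sobs = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
    print(mse_sobs(y_true, y_pred))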

Methods

all

validate
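
Both all and validate are classmethods. The sketch below shows how they can be used; the printed tuple and the exact exception type are stated as assumptions based on the TF 2.4 source, not guarantees from this page:

    import tensorflow as tf

    # all() returns every valid reduction key as a lowercase string.
    print(tf.keras.losses.Reduction.all())
    # Expected: ('auto', 'none', 'sum', 'sum_over_batch_size')

    # validate(key) silently accepts a valid key and raises ValueError otherwise.
    tf.keras.losses.Reduction.validate('sum')
    try:
        tf.keras.losses.Reduction.validate('mean')
    except ValueError as err:
        print(err)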

Class Variables

  • AUTO: 'auto'
  • NONE: 'none'
  • SUM: 'sum'
  • SUM_OVER_BATCH_SIZE: 'sum_over_batch_size'
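
Because these class variables are plain lowercase strings, either the constant or the raw string can be passed as the reduction argument of a built-in loss. A small sketch of that equivalence (the MeanSquaredError loss is just an example choice):

    import tensorflow as tf

    # The constants are ordinary strings.
    assert tf.keras.losses.Reduction.SUM == 'sum'

    # These two loss objects therefore behave identically.
    loss_a = tf.keras.losses.MeanSquaredError(
        reduction=tf.keras.losses.Reduction.SUM)
    loss_b = tf.keras.losses.MeanSquaredError(reduction='sum')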

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/keras/losses/Reduction