Adds a Log Loss term to the training procedure.
```python
tf.compat.v1.losses.log_loss(
    labels,
    predictions,
    weights=1.0,
    epsilon=1e-07,
    scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```
`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of shape `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
| Args | |
|---|---|
| `labels` | The ground truth output tensor, same dimensions as `predictions`. |
| `predictions` | The predicted outputs. |
| `weights` | Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels`. |
| `epsilon` | A small increment to add to avoid taking a log of zero. |
| `scope` | The scope for the operations performed in computing the loss. |
| `loss_collection` | Collection to which the loss will be added. |
| `reduction` | Type of reduction to apply to loss. |

| Returns | |
|---|---|
| | Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar. |

| Raises | |
|---|---|
| `ValueError` | If the shape of `predictions` doesn't match the shape of `labels`, or if the shape of `weights` is invalid. |
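To illustrate how the epsilon term, the weighting, and the default `SUM_BY_NONZERO_WEIGHTS` reduction combine, here is a minimal NumPy sketch of the computation. This is an assumption based on the standard log-loss formula, not the library's actual implementation; `np_log_loss` is a hypothetical helper, not part of the TensorFlow API.

```python
import numpy as np

def np_log_loss(labels, predictions, weights=1.0, epsilon=1e-07):
    """NumPy sketch (hypothetical helper) of log loss with
    SUM_BY_NONZERO_WEIGHTS-style reduction."""
    labels = np.asarray(labels, dtype=np.float64)
    predictions = np.asarray(predictions, dtype=np.float64)
    # Per-element log loss; epsilon guards against log(0).
    losses = (-labels * np.log(predictions + epsilon)
              - (1.0 - labels) * np.log(1.0 - predictions + epsilon))
    # Scalar, [batch_size], or predictions-shaped weights all broadcast
    # against the element-wise losses.
    weighted = losses * weights
    # Divide the summed weighted loss by the count of elements whose
    # weight is non-zero (guarding against an all-zero weight tensor).
    num_nonzero = np.count_nonzero(np.broadcast_to(weights, losses.shape))
    return weighted.sum() / max(num_nonzero, 1)

# Scalar weight of 1.0 behaves like a mean over all elements.
print(round(np_log_loss([1.0, 0.0], [0.9, 0.1]), 5))  # → 0.10536

# A per-sample weight vector rescales each sample's loss, and
# zero-weight samples are excluded from the divisor.
print(round(np_log_loss([1.0, 0.0], [0.9, 0.1], weights=[2.0, 0.0]), 5))  # → 0.21072
```

Note that with the vector weights, the second sample contributes neither to the numerator nor to the non-zero-weight count, which is why the result is exactly double the unweighted case.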
Eager Compatibility: the `loss_collection` argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a `tf.keras.Model`.
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.