tf.contrib.gan.losses.wargs.mutual_information_penalty

tf.contrib.gan.losses.wargs.mutual_information_penalty(
    structured_generator_inputs,
    predicted_distributions,
    weights=1.0,
    scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=losses.Reduction.SUM_BY_NONZERO_WEIGHTS,
    add_summaries=False
)

Defined in tensorflow/contrib/gan/python/losses/python/losses_impl.py.

Returns a penalty on the mutual information in an InfoGAN model.

This loss comes from the InfoGAN paper: https://arxiv.org/abs/1606.03657.
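
Conceptually, the penalty is the negative log-likelihood of the structured noise under the recognizer's predicted distributions: the better the recognizer can recover the latent code from the generator output, the lower the penalty. The following is a minimal sketch of that quantity, not the library implementation; the helper name, reduction, and equal weighting of terms are assumptions made for illustration.

import tensorflow as tf

def mutual_information_penalty_sketch(structured_generator_inputs,
                                      predicted_distributions):
  # One log-likelihood term per structured latent variable.
  log_probs = [tf.reduce_mean(dist.log_prob(noise))
               for noise, dist in zip(structured_generator_inputs,
                                      predicted_distributions)]
  # The penalty is low when the recognizer assigns high likelihood to the
  # structured noise, i.e. when mutual information with the output is high.
  return -tf.add_n(log_probs) / len(log_probs)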

Args:

  • structured_generator_inputs: A list of Tensors representing the random noise that must have high mutual information with the generator output. List length should match predicted_distributions.
  • predicted_distributions: A list of tf.Distributions. Predicted by the recognizer, and used to evaluate the likelihood of the structured noise. List length should match structured_generator_inputs.
  • weights: Optional Tensor whose rank is either 0, or whose dimensions match those of structured_generator_inputs (i.e., one weight per structured variable).
  • scope: The scope for the operations performed in computing the loss.
  • loss_collection: Collection to which this loss will be added.
  • reduction: A tf.losses.Reduction to apply to the loss.
  • add_summaries: Whether or not to add summaries for the loss.

Returns:

A scalar Tensor representing the mutual information loss.
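
For example, an illustrative call is sketched below. The batch size, code shapes, and fixed distribution parameters are assumptions made for the sketch; in a real InfoGAN the predicted distributions come from the recognizer (Q) network applied to the generated images, and this term is added to the generator loss.

import tensorflow as tf

tfgan = tf.contrib.gan
ds = tf.distributions

batch_size = 16

# Illustrative structured latent codes: a 10-way categorical variable and a
# 2-dimensional continuous variable (random stand-ins for real generator inputs).
cat_code = tf.random_uniform([batch_size], maxval=10, dtype=tf.int32)
cont_code = tf.random_uniform([batch_size, 2], minval=-1.0, maxval=1.0)

# In a real InfoGAN these distributions are predicted by the recognizer from
# the generated images; fixed parameters are used here for brevity.
cat_dist = ds.Categorical(logits=tf.zeros([batch_size, 10]))
cont_dist = ds.Normal(loc=tf.zeros([batch_size, 2]), scale=tf.ones([batch_size, 2]))

info_loss = tfgan.losses.wargs.mutual_information_penalty(
    structured_generator_inputs=[cat_code, cont_code],
    predicted_distributions=[cat_dist, cont_dist],
    weights=1.0)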

© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/gan/losses/wargs/mutual_information_penalty