
tf.contrib.losses.metric_learning.contrastive_loss

tf.contrib.losses.metric_learning.contrastive_loss(
    labels,
    embeddings_anchor,
    embeddings_positive,
    margin=1.0
)

Defined in tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py.

Computes the contrastive loss.

This loss encourages the embeddings of samples with the same label to be close to each other, and the embeddings of samples with different labels to be at least the margin apart. See: http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf

Args:

  • labels: 1-D tf.int32 Tensor with shape [batch_size] of binary labels indicating whether each pair is positive (same label) or negative (different labels).
  • embeddings_anchor: 2-D float Tensor of embedding vectors for the anchor images. Embeddings should be l2 normalized.
  • embeddings_positive: 2-D float Tensor of embedding vectors for the positive images. Embeddings should be l2 normalized.
  • margin: the margin term in the loss definition; negative pairs closer than this distance incur a penalty.

Returns:

  • contrastive_loss: tf.float32 scalar.
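The computation can be sketched in NumPy. This follows the formulation in the Hadsell, Chopra & LeCun paper linked above (squared distance for positive pairs, squared hinge on the margin for negative pairs, averaged over the batch); it is a sketch of the published formula, not the exact graph ops used by the TensorFlow implementation.

```python
import numpy as np

def contrastive_loss(labels, embeddings_anchor, embeddings_positive, margin=1.0):
    """Sketch of contrastive loss (Hadsell, Chopra & LeCun, 2006).

    labels: (batch_size,) array of 0/1; 1 marks a positive (same-label) pair.
    embeddings_anchor, embeddings_positive: (batch_size, embed_dim) arrays.
    """
    labels = np.asarray(labels, dtype=np.float64)
    anchor = np.asarray(embeddings_anchor, dtype=np.float64)
    positive = np.asarray(embeddings_positive, dtype=np.float64)

    # Euclidean distance between each anchor/positive pair.
    distances = np.sqrt(np.sum((anchor - positive) ** 2, axis=1))

    # Positive pairs are pulled together (squared distance);
    # negative pairs are pushed apart until they reach the margin.
    per_pair = (labels * distances ** 2
                + (1.0 - labels) * np.maximum(margin - distances, 0.0) ** 2)
    return per_pair.mean()

# One positive pair at distance sqrt(2), one negative pair at distance 0:
# mean of (1 * 2) and (1 - 0)**2 gives 1.5.
loss = contrastive_loss(
    labels=[1, 0],
    embeddings_anchor=[[1.0, 0.0], [1.0, 0.0]],
    embeddings_positive=[[0.0, 1.0], [1.0, 0.0]],
    margin=1.0,
)
```

Note that a negative pair already farther apart than the margin contributes zero loss, which is why the margin term only constrains pairs that are too close.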

© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/losses/metric_learning/contrastive_loss