class torch.nn.TripletMarginWithDistanceLoss(*, distance_function: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None, margin: float = 1.0, swap: bool = False, reduction: str = 'mean')
[source]
Creates a criterion that measures the triplet loss given input tensors $a$, $p$, and $n$ (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function ("distance function") used to compute the relationship between the anchor and positive example ("positive distance") and the anchor and negative example ("negative distance").
The unreduced loss (i.e., with reduction set to 'none') can be described as:

$$\ell(a, p, n) = L = \{l_1, \dots, l_N\}^\top, \qquad l_i = \max\{d(a_i, p_i) - d(a_i, n_i) + \mathrm{margin}, 0\}$$

where $N$ is the batch size; $d$ is a nonnegative, real-valued function quantifying the closeness of two tensors, referred to as the distance_function; and $\mathrm{margin}$ is a nonnegative margin representing the minimum difference between the positive and negative distances that is required for the loss to be 0. The input tensors have $N$ elements each and can be of any shape that the distance function can handle.
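To make the elementwise definition concrete, the formula can be checked directly against the module's output. The following is a minimal sketch (not part of the original docs); the tensor names and shapes are illustrative, and nn.PairwiseDistance plays the role of $d$:

>>> import torch
>>> import torch.nn as nn
>>> anchor, positive, negative = torch.randn(4, 8), torch.randn(4, 8), torch.randn(4, 8)
>>> d = nn.PairwiseDistance()
>>> margin = 1.0
>>> # l_i = max{d(a_i, p_i) - d(a_i, n_i) + margin, 0}, computed by hand
>>> manual = torch.clamp(d(anchor, positive) - d(anchor, negative) + margin, min=0.0)
>>> loss_fn = nn.TripletMarginWithDistanceLoss(distance_function=d, margin=margin,
...                                            reduction='none')
>>> torch.allclose(loss_fn(anchor, positive, negative), manual)
True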
If reduction is not 'none' (default 'mean'), then:

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}$$
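Continuing the sketch above, the 'mean' and 'sum' reductions are simply the mean and sum of the unreduced loss:

>>> loss_none = nn.TripletMarginWithDistanceLoss(reduction='none')(anchor, positive, negative)
>>> loss_mean = nn.TripletMarginWithDistanceLoss(reduction='mean')(anchor, positive, negative)
>>> loss_sum = nn.TripletMarginWithDistanceLoss(reduction='sum')(anchor, positive, negative)
>>> torch.allclose(loss_mean, loss_none.mean()) and torch.allclose(loss_sum, loss_none.sum())
True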
See also TripletMarginLoss, which computes the triplet loss for input tensors using the $l_p$ distance as the distance function.
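As a rough check of that relationship (a sketch, not from the original docs): with the default Euclidean nn.PairwiseDistance, the two criteria should agree up to floating-point tolerance.

>>> a, p, n = torch.randn(3, 16), torch.randn(3, 16), torch.randn(3, 16)
>>> lp = nn.TripletMarginLoss(margin=1.0, p=2)(a, p, n)
>>> dist = nn.TripletMarginWithDistanceLoss(
...     distance_function=nn.PairwiseDistance(p=2), margin=1.0)(a, p, n)
>>> torch.allclose(lp, dist)
True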
Parameters

    distance_function (callable, optional) – A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, nn.PairwiseDistance will be used. Default: None
    margin (float, optional) – A nonnegative margin representing the minimum difference between the positive and negative distances required for the loss to be 0. Larger margins penalize cases where the negative examples are not distant enough from the anchors, relative to the positives. Default: 1.
    swap (bool, optional) – Whether to use the distance swap described in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. If True, and if the positive example is closer to the negative example than the anchor is, swaps the positive example and the anchor in the loss computation. Default: False.
    reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Default: 'mean'

Shape

    Input: (N, *) where * represents any number of additional dimensions as supported by the distance function.
    Output: A Tensor of shape (N) if reduction is 'none', or a scalar otherwise.

Examples:
>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>>
>>> # Initialize embeddings
>>> embedding = nn.Embedding(1000, 128)
>>> anchor_ids = torch.randint(0, 1000, (1,))
>>> positive_ids = torch.randint(0, 1000, (1,))
>>> negative_ids = torch.randint(0, 1000, (1,))
>>> anchor = embedding(anchor_ids)
>>> positive = embedding(positive_ids)
>>> negative = embedding(negative_ids)
>>>
>>> # Built-in Distance Function
>>> triplet_loss = nn.TripletMarginWithDistanceLoss(
...     distance_function=nn.PairwiseDistance())
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function
>>> def l_infinity(x1, x2):
...     return torch.max(torch.abs(x1 - x2), dim=1).values
...
>>> # Re-run the embedding lookups so each backward() has a fresh graph
>>> anchor, positive, negative = embedding(anchor_ids), embedding(positive_ids), embedding(negative_ids)
>>> triplet_loss = nn.TripletMarginWithDistanceLoss(
...     distance_function=l_infinity, margin=1.5)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function (Lambda)
>>> anchor, positive, negative = embedding(anchor_ids), embedding(positive_ids), embedding(negative_ids)
>>> triplet_loss = nn.TripletMarginWithDistanceLoss(
...     distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
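The distance swap described under the swap parameter composes with any of the distance functions above. The following additional sketch (not part of the original example) uses plain leaf tensors as embeddings for brevity:

>>> # Distance swap: if the positive is closer to the negative than the anchor is,
>>> # the positive-negative distance is used as the negative distance instead
>>> anchor = torch.randn(16, 128, requires_grad=True)
>>> positive = torch.randn(16, 128, requires_grad=True)
>>> negative = torch.randn(16, 128, requires_grad=True)
>>> triplet_loss = nn.TripletMarginWithDistanceLoss(
...     distance_function=nn.PairwiseDistance(), margin=1.0, swap=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()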
V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: http://www.bmva.org/bmvc/2016/papers/paper119/index.html