class torch.nn.MarginRankingLoss(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean') [source]
Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).
If y = 1 then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y = -1.
The loss function for each pair of samples in the mini-batch is:

loss(x1, x2, y) = max(0, -y * (x1 - x2) + margin)
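As a hedged illustration (not part of the original page), the per-pair formula can be computed by hand and compared against the module with reduction='none'; the tensor values below are made-up examples:

>>> import torch
>>> x1 = torch.tensor([0.5, -1.2, 2.0])
>>> x2 = torch.tensor([0.1, 0.3, 2.5])
>>> y = torch.tensor([1., -1., 1.])
>>> margin = 0.2
>>> # elementwise max(0, -y * (x1 - x2) + margin)
>>> manual = torch.clamp(-y * (x1 - x2) + margin, min=0)
>>> module = torch.nn.MarginRankingLoss(margin=margin, reduction='none')
>>> torch.allclose(manual, module(x1, x2, y))
True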
Parameters
margin (float, optional) – Has a default value of 0.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
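A brief sketch of the three reduction modes (the input values are illustrative, not taken from the original docs; margin is left at its default of 0):

>>> x1 = torch.tensor([1.0, 2.0, 0.0])
>>> x2 = torch.tensor([0.0, 3.0, 0.5])
>>> y = torch.tensor([1., 1., -1.])
>>> torch.nn.MarginRankingLoss(reduction='none')(x1, x2, y)  # one loss per pair
tensor([0., 1., 0.])
>>> torch.nn.MarginRankingLoss(reduction='mean')(x1, x2, y)  # sum divided by N
tensor(0.3333)
>>> torch.nn.MarginRankingLoss(reduction='sum')(x1, x2, y)
tensor(1.)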
Shape:
Input1: (N) where N is the batch size.
Input2: (N), same shape as the Input1.
Target: (N), same shape as the inputs.
Output: scalar. If reduction is 'none', then (N).
Examples:
>>> loss = nn.MarginRankingLoss()
>>> input1 = torch.randn(3, requires_grad=True)
>>> input2 = torch.randn(3, requires_grad=True)
>>> target = torch.randn(3).sign()
>>> output = loss(input1, input2, target)
>>> output.backward()
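Beyond the minimal example above, here is a hedged sketch of a typical ranking setup; the scorer module and batch shapes are hypothetical and not part of the original docs:

>>> # hypothetical scorer producing one relevance score per item
>>> scorer = torch.nn.Linear(8, 1)
>>> pos = torch.randn(16, 8)   # items that should rank higher
>>> neg = torch.randn(16, 8)   # items that should rank lower
>>> s_pos = scorer(pos).squeeze(1)
>>> s_neg = scorer(neg).squeeze(1)
>>> target = torch.ones(16)    # y = 1: first input should outrank the second
>>> loss = torch.nn.MarginRankingLoss(margin=1.0)(s_pos, s_neg, target)
>>> loss.backward()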