CosineEmbeddingLoss
class torch.nn.modules.loss.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')
Creates a criterion that measures the loss given input tensors x1, x2 and a Tensor label y with values 1 or -1. Use y = 1 to maximize the cosine similarity of two inputs, and y = -1 otherwise. This is typically used for learning nonlinear embeddings or semi-supervised learning.

The loss function for each sample is:

\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}
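A minimal sketch checking the per-sample formula above against the module itself, using reduction='none' so the loss is returned per pair (the tensor shapes and margin value here are illustrative assumptions):

import torch
import torch.nn as nn
import torch.nn.functional as F

x1 = torch.randn(4, 8)
x2 = torch.randn(4, 8)
y = torch.tensor([1., -1., 1., -1.])
margin = 0.2

# Per-sample loss computed directly from the formula above
cos = F.cosine_similarity(x1, x2, dim=1)
manual = torch.where(y == 1, 1 - cos, torch.clamp(cos - margin, min=0.0))

# The module should agree elementwise
loss_fn = nn.CosineEmbeddingLoss(margin=margin, reduction='none')
print(torch.allclose(manual, loss_fn(x1, x2, y)))  # True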
- Parameters
margin (float, optional) – Should be a number from -1 to 1; 0 to 0.5 is suggested. If margin is missing, the default value is 0.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'. (The three modes are shown in the sketch after this list.)
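To make the three reduction modes concrete, a short sketch (the tensor shapes and target values are illustrative assumptions):

import torch
import torch.nn as nn

input1 = torch.randn(3, 5)
input2 = torch.randn(3, 5)
target = torch.tensor([1., -1., 1.])

per_sample = nn.CosineEmbeddingLoss(reduction='none')(input1, input2, target)
averaged = nn.CosineEmbeddingLoss(reduction='mean')(input1, input2, target)
summed = nn.CosineEmbeddingLoss(reduction='sum')(input1, input2, target)

print(per_sample.shape)                            # torch.Size([3])
print(torch.isclose(averaged, per_sample.mean()))  # tensor(True)
print(torch.isclose(summed, per_sample.sum()))     # tensor(True)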
- Shape:
- Input1: (N, D) or (D), where N is the batch size and D is the embedding dimension.
- Input2: (N, D) or (D), same shape as Input1.
- Target: (N) or ().
- Output: If reduction is 'none', then (N), otherwise scalar.
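The shapes above allow both batched and unbatched calls; a quick check of both forms (the unbatched (D)/() form is listed above but may depend on the PyTorch version):

import torch
import torch.nn as nn

loss = nn.CosineEmbeddingLoss()

# Batched: Input1/Input2 are (N, D), Target is (N); output is a scalar under 'mean'
out = loss(torch.randn(3, 5), torch.randn(3, 5), torch.tensor([1., -1., 1.]))
print(out.shape)  # torch.Size([])

# Unbatched: Input1/Input2 are (D), Target is ()
out = loss(torch.randn(5), torch.randn(5), torch.tensor(1.))
print(out.shape)  # torch.Size([])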
Examples
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.ones(3)
>>> output = loss(input1, input2, target)
>>> output.backward()
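Beyond the doctest above, a hedged sketch of how the criterion is typically used to learn embeddings; the encoder, optimizer, and pair labels are illustrative assumptions, not part of this API:

import torch
import torch.nn as nn

encoder = nn.Linear(10, 5)  # hypothetical embedding model
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1)
loss_fn = nn.CosineEmbeddingLoss(margin=0.5)

a = torch.randn(4, 10)
b = torch.randn(4, 10)
target = torch.tensor([1., 1., -1., -1.])  # 1 = similar pair, -1 = dissimilar

optimizer.zero_grad()
loss = loss_fn(encoder(a), encoder(b), target)  # pulls similar pairs together,
loss.backward()                                 # pushes dissimilar pairs below the margin
optimizer.step()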