Adds a Sum-of-Squares loss to the training procedure.

```python
tf.losses.mean_squared_error(
    labels,
    predictions,
    weights=1.0,
    scope=None,
    loss_collection=tf.GraphKeys.LOSSES,
    reduction=Reduction.SUM_BY_NONZERO_WEIGHTS
)
```

`weights` acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If `weights` is a tensor of size `[batch_size]`, then the total loss for each sample of the batch is rescaled by the corresponding element in the `weights` vector. If the shape of `weights` matches the shape of `predictions`, then the loss of each measurable element of `predictions` is scaled by the corresponding value of `weights`.
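The three weighting modes above can be sketched with a small NumPy mirror of the default `SUM_BY_NONZERO_WEIGHTS` reduction (the weighted sum of squared errors divided by the number of nonzero weight elements after broadcasting). This is an illustrative reimplementation, not the TensorFlow source:

```python
import numpy as np

def weighted_mse(labels, predictions, weights=1.0):
    """Illustrative NumPy mirror of the default SUM_BY_NONZERO_WEIGHTS
    reduction; an assumption about the computation, not the TF code."""
    labels = np.asarray(labels, dtype=np.float64)
    predictions = np.asarray(predictions, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    # A [batch_size] weight vector scales whole samples, so it is
    # reshaped to [batch_size, 1] before broadcasting.
    if weights.ndim == 1:
        weights = weights.reshape(-1, 1)
    sq = np.square(labels - predictions)
    # Denominator counts elements whose weight is nonzero after broadcast.
    nonzero = np.count_nonzero(np.broadcast_to(weights, sq.shape))
    return (weights * sq).sum() / max(nonzero, 1)

labels = np.array([[1.0, 2.0], [3.0, 4.0]])
preds = np.array([[1.5, 2.5], [2.0, 5.0]])
print(weighted_mse(labels, preds))              # scalar weight: plain MSE, 0.625
print(weighted_mse(labels, preds, [1.0, 0.0]))  # second sample masked out, 0.25
```

Note that masking a sample with a zero weight also removes its elements from the denominator, which is why the second call returns the mean over only the first sample.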

| Args | |
|---|---|
| `labels` | The ground truth output tensor, same dimensions as `predictions`. |
| `predictions` | The predicted outputs. |
| `weights` | Optional `Tensor` whose rank is either 0, or the same rank as `labels`, and must be broadcastable to `labels` (i.e., all dimensions must be either `1`, or the same as the corresponding `losses` dimension). |
| `scope` | The scope for the operations performed in computing the loss. |
| `loss_collection` | Collection to which the loss will be added. |
| `reduction` | Type of reduction to apply to loss. |

| Returns | |
|---|---|
| Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same shape as `labels`; otherwise, it is scalar. | |

| Raises | |
|---|---|
| `ValueError` | If the shape of `predictions` doesn't match that of `labels`, or if the shape of `weights` is invalid. Also if `labels` or `predictions` is None. |
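A minimal end-to-end call might look like the following. This sketch assumes a TF 2.x installation and uses the `tf.compat.v1` shim to reach the 1.x API; under a native TF 1.15 install, `import tensorflow as tf` works directly and `disable_eager_execution()` is unnecessary:

```python
import tensorflow.compat.v1 as tf  # shim for TF 2.x; plain `import tensorflow as tf` on TF 1.15
tf.disable_eager_execution()       # restore graph-mode semantics under TF 2.x

labels = tf.constant([[1.0, 2.0], [3.0, 4.0]])
predictions = tf.constant([[1.5, 2.5], [2.0, 5.0]])

# Default weights=1.0 with SUM_BY_NONZERO_WEIGHTS divides the weighted
# sum of squared errors by the 4 nonzero-weighted elements: plain MSE.
loss = tf.losses.mean_squared_error(labels, predictions)

with tf.Session() as sess:
    print(sess.run(loss))  # 0.625
```

Because `loss_collection` defaults to `tf.GraphKeys.LOSSES`, this loss is also retrievable in graph mode via `tf.losses.get_losses()`.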

The `loss_collection` argument is ignored when executing eagerly. Consider holding on to the return value or collecting losses via a `tf.keras.Model`.

© 2020 The TensorFlow Authors. All rights reserved.

Licensed under the Creative Commons Attribution License 3.0.

Code samples licensed under the Apache 2.0 License.

https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/losses/mean_squared_error