Computes the precision of the predictions with respect to the labels.
Compat aliases for migration
See Migration guide for more details.
`tf.compat.v1.keras.metrics.Precision`, `tf.compat.v2.keras.metrics.Precision`, `tf.compat.v2.metrics.Precision`
```python
tf.keras.metrics.Precision(
    thresholds=None, top_k=None, class_id=None, name=None, dtype=None
)
```
For example, if `y_true` is `[0, 1, 1, 1]` and `y_pred` is `[1, 0, 1, 1]`, then the precision value is 2/(2+1), i.e. 0.66. If the weights were specified as `[0, 0, 1, 0]`, then the precision value would be 1.
The metric creates two local variables, `true_positives` and `false_positives`, that are used to compute the precision. This value is ultimately returned as `precision`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_positives`.

If `sample_weight` is `None`, weights default to 1. Use a `sample_weight` of 0 to mask values.
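The division described above can be sketched in plain Python (a minimal illustration of the math, not the TensorFlow implementation; the helper name and the 0.5 threshold are assumptions):

```python
def precision_from_counts(y_true, y_pred, sample_weight=None, threshold=0.5):
    """Sketch: precision = true_positives / (true_positives + false_positives)."""
    if sample_weight is None:
        sample_weight = [1.0] * len(y_true)  # weights default to 1
    true_positives = 0.0
    false_positives = 0.0
    for t, p, w in zip(y_true, y_pred, sample_weight):
        if p > threshold:           # prediction counted as positive
            if t == 1:
                true_positives += w
            else:
                false_positives += w
    denom = true_positives + false_positives
    return true_positives / denom if denom else 0.0

precision_from_counts([0, 1, 1, 1], [1, 0, 1, 1])                # -> 2/3
precision_from_counts([0, 1, 1, 1], [1, 0, 1, 1], [0, 0, 1, 0])  # -> 1.0
```

The second call shows masking: a `sample_weight` of 0 removes the false positive at index 0 from both counts, leaving precision 1.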
If `top_k` is set, we'll calculate precision as how often on average a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.
If `class_id` is specified, we calculate precision by considering only the entries in the batch for which `class_id` is above the threshold and/or in the top-k highest predictions, and computing the fraction of them for which `class_id` is indeed a correct label.
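For intuition, the top-k behavior for a single batch entry can be sketched in plain Python (a minimal illustration of the semantics described above, not the library's implementation; the function name is an assumption):

```python
def top_k_precision(y_true, y_pred, k):
    """Fraction of the k highest-scored classes that are actually labels.

    y_true: binary label indicators per class; y_pred: per-class scores.
    (Illustrative sketch, not the TensorFlow implementation.)
    """
    # Indices of the k classes with the highest predicted values.
    top_k = sorted(range(len(y_pred)), key=lambda i: y_pred[i], reverse=True)[:k]
    correct = sum(1 for i in top_k if y_true[i] == 1)
    return correct / k

# Classes 1 and 2 are true labels; scores rank the classes 1, 2, 3, 0.
top_k_precision([0, 1, 1, 0], [0.1, 0.9, 0.6, 0.4], k=2)  # -> 1.0
top_k_precision([0, 1, 1, 0], [0.1, 0.9, 0.6, 0.4], k=3)  # -> 2/3
```

With `k=2` both of the two highest-scored classes are correct labels; with `k=3` only two of the three are, so the precision drops to 2/3.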
Standalone usage:

```python
m = tf.keras.metrics.Precision()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print('Final result: ', m.result().numpy())  # Final result: 0.66
```
Usage with the tf.keras API:

```python
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss='mse', metrics=[tf.keras.metrics.Precision()])
```
| Args | Description |
|---|---|
| `thresholds` | (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `true`, below is `false`). One metric value is generated for each threshold value. |
| `top_k` | (Optional) Unset by default. An int value specifying the top-k predictions to consider when calculating precision. |
| `class_id` | (Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval `[0, num_classes)`, where `num_classes` is the last dimension of predictions. |
| `name` | (Optional) string name of the metric instance. |
| `dtype` | (Optional) data type of the metric result. |
```python
reset_states()
```

Resets all of the metric state variables.

This function is called between epochs/steps, when a metric is evaluated during training.
```python
result()
```

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.
```python
update_state(
    y_true, y_pred, sample_weight=None
)
```

Accumulates true positive and false positive statistics.
| Args | Description |
|---|---|
| `y_true` | The ground truth values, with the same dimensions as `y_pred`. |
| `y_pred` | The predicted values. Each element must be in the range `[0, 1]`. |
| `sample_weight` | Optional weighting of each example. Defaults to 1. Can be a `Tensor` whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |
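The accumulation across `update_state` calls can be sketched with a small stateful class (a plain-Python analogy to the metric's state variables; the class name is an assumption, and the method names merely mirror the API rather than reproduce the TensorFlow implementation):

```python
class PrecisionSketch:
    """Accumulates true/false positive counts across update_state calls."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.true_positives = 0.0
        self.false_positives = 0.0

    def update_state(self, y_true, y_pred, sample_weight=None):
        if sample_weight is None:
            sample_weight = [1.0] * len(y_true)  # weights default to 1
        for t, p, w in zip(y_true, y_pred, sample_weight):
            if p > self.threshold:      # prediction counted as positive
                if t == 1:
                    self.true_positives += w
                else:
                    self.false_positives += w

    def result(self):
        # Idempotent: computed purely from the accumulated state variables.
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

    def reset_states(self):
        self.true_positives = 0.0
        self.false_positives = 0.0

m = PrecisionSketch()
m.update_state([0, 1], [1, 0])  # adds 1 false positive
m.update_state([1, 1], [1, 1])  # adds 2 true positives
m.result()                      # -> 2/3, pooled over both batches
```

Because the counts persist between calls, `result()` reflects precision pooled over every batch seen since the last `reset_states()`.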
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.