Computes the recall of the predictions with respect to the labels.
Inherits From: Metric
```python
tf.keras.metrics.Recall(
    thresholds=None, top_k=None, class_id=None, name=None, dtype=None
)
```
This metric creates two local variables, `true_positives` and `false_negatives`, that are used to compute the recall. This value is ultimately returned as `recall`, an idempotent operation that simply divides `true_positives` by the sum of `true_positives` and `false_negatives`.
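To make that arithmetic concrete, here is a minimal sketch (the label and prediction values are illustrative, matching the standalone example further below) that checks the result by hand:

```python
import tensorflow as tf

y_true = [0, 1, 1, 1]
y_pred = [1, 0, 1, 1]

m = tf.keras.metrics.Recall()
m.update_state(y_true, y_pred)

# By hand: true_positives = 2 (the last two entries), false_negatives = 1
# (the second entry, a positive label predicted as 0), so the metric
# returns 2 / (2 + 1) = 0.6666667.
print(m.result().numpy())
```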
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.

If `top_k` is set, recall will be computed as how often on average a class among the labels of a batch entry is in the top-k predictions.

If `class_id` is specified, we calculate recall by considering only the entries in the batch for which `class_id` is in the label, and computing the fraction of them for which `class_id` is above the threshold and/or in the top-k predictions.
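A hedged sketch of both options; the one-hot labels and per-class scores below are made up purely for illustration:

```python
import tensorflow as tf

# One-hot labels and per-class scores for 3 examples over 4 classes.
y_true = [[0, 0, 1, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 1]]
y_pred = [[0.1, 0.2, 0.6, 0.1],
          [0.2, 0.6, 0.1, 0.1],
          [0.3, 0.4, 0.2, 0.1]]

# top_k=2: a label counts as recalled only if its class is among the two
# highest-scoring predictions for that example. The first two labels are
# in the top-2, the third is not, so recall is 2/3.
m_topk = tf.keras.metrics.Recall(top_k=2)
m_topk.update_state(y_true, y_pred)
print(m_topk.result().numpy())

# class_id=1: only the second example has class 1 in its label, and its
# class-1 score (0.6) is above the default 0.5 threshold, so recall is 1.0.
m_class = tf.keras.metrics.Recall(class_id=1)
m_class.update_state(y_true, y_pred)
print(m_class.result().numpy())
```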
Args | |
---|---|
`thresholds` | (Optional) A float value or a python list/tuple of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `true`, below is `false`). One metric value is generated for each threshold value. If neither `thresholds` nor `top_k` are set, the default is to calculate recall with `thresholds=0.5`. |
`top_k` | (Optional) Unset by default. An int value specifying the top-k predictions to consider when calculating recall. |
`class_id` | (Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval `[0, num_classes)`, where `num_classes` is the last dimension of predictions. |
`name` | (Optional) string name of the metric instance. |
`dtype` | (Optional) data type of the metric result. |
Standalone usage:

```python
m = tf.keras.metrics.Recall()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
m.result().numpy()
# 0.6666667

m.reset_states()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1], sample_weight=[0, 0, 1, 0])
m.result().numpy()
# 1.0
```
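Extending the standalone usage, a brief sketch of passing a list of thresholds (the prediction scores here are illustrative); one recall value is returned per threshold:

```python
import tensorflow as tf

y_true = [0, 1, 1, 1]
y_pred = [0.9, 0.3, 0.6, 0.8]

# At threshold 0.4, two of the three positive labels (scores 0.6 and 0.8)
# are predicted positive; at threshold 0.7 only one is, so the result is
# approximately [0.667, 0.333].
m = tf.keras.metrics.Recall(thresholds=[0.4, 0.7])
m.update_state(y_true, y_pred)
print(m.result().numpy())
```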
Usage with `compile()` API:

```python
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.Recall()])
```
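For context, a self-contained sketch of training with the metric attached; the model architecture, loss, and random data below are placeholders chosen only for illustration, not part of the API above:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.Recall(name='recall')])

# Random binary-classification data, purely for illustration.
x = np.random.random((32, 4)).astype('float32')
y = np.random.randint(0, 2, size=(32, 1))

history = model.fit(x, y, epochs=2, verbose=0)
print(history.history['recall'])  # one recall value per epoch
```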
reset_states

```python
reset_states()
```
Resets all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.
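A small sketch of the effect: after the reset, the accumulated `true_positives` and `false_negatives` variables are zeroed, so `result()` falls back to 0.0 until new batches are accumulated (values illustrative):

```python
import tensorflow as tf

m = tf.keras.metrics.Recall()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])
print(m.result().numpy())  # 0.6666667

m.reset_states()
print(m.result().numpy())  # 0.0, the state variables were zeroed
```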
result

```python
result()
```
Computes and returns the metric value tensor.
Result computation is an idempotent operation that simply calculates the metric value using the state variables.
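In other words, calling `result()` repeatedly without an intervening `update_state()` keeps returning the same value; a quick sketch:

```python
import tensorflow as tf

m = tf.keras.metrics.Recall()
m.update_state([0, 1, 1, 1], [1, 0, 1, 1])

# result() only reads the accumulated state; it does not modify it.
first = m.result().numpy()
second = m.result().numpy()
assert first == second
```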
update_state

```python
update_state(
    y_true, y_pred, sample_weight=None
)
```
Accumulates true positive and false negative statistics.
Args | |
---|---|
`y_true` | The ground truth values, with the same dimensions as `y_pred`. Will be cast to `bool`. |
`y_pred` | The predicted values. Each element must be in the range `[0, 1]`. |
`sample_weight` | Optional weighting of each example. Defaults to 1. Can be a `Tensor` whose rank is either 0, or the same rank as `y_true`, and must be broadcastable to `y_true`. |
Returns | |
---|---|
Update op. |
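Because the true positive and false negative counts accumulate across calls, `update_state()` can be fed one batch at a time; a minimal sketch with illustrative batches:

```python
import tensorflow as tf

m = tf.keras.metrics.Recall()

# Batch 1 contributes 1 true positive and 1 false negative,
# batch 2 contributes 1 more true positive.
m.update_state([1, 1], [1, 0])
m.update_state([0, 1], [0, 1])

# Running recall over both batches: 2 / (2 + 1) = 0.6666667.
print(m.result().numpy())
```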
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/keras/metrics/Recall