tf.raw_ops.Requantize

Converts the quantized input tensor into a lower-precision output.

Converts the quantized input tensor into a lower-precision output, using the output range specified with requested_output_min and requested_output_max.

[input_min, input_max] are scalar floats that specify the range for the float interpretation of the input data. For example, if input_min is -1.0f and input_max is 1.0f, and we are dealing with quint16 quantized data, then a 0 value in the 16-bit data should be interpreted as -1.0f, and a 65535 as 1.0f.
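As a rough illustration of that interpretation (a minimal sketch of the linear mapping implied by the endpoints above, not the op's exact rounding behaviour; dequantize_quint16 is a hypothetical helper, not part of the API):

```python
# Sketch: linear interpretation of quint16 data over [input_min, input_max].
# 0 maps to input_min, 65535 maps to input_max (hypothetical helper for illustration).
def dequantize_quint16(q, input_min=-1.0, input_max=1.0):
    return input_min + (q / 65535.0) * (input_max - input_min)

dequantize_quint16(0)      # -1.0
dequantize_quint16(65535)  #  1.0
dequantize_quint16(32768)  # ~0.0 (just above zero)
```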

Args
input: A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16.
input_min: A Tensor of type float32. The float value that the minimum quantized input value represents.
input_max: A Tensor of type float32. The float value that the maximum quantized input value represents.
requested_output_min: A Tensor of type float32. The float value that the minimum quantized output value represents.
requested_output_max: A Tensor of type float32. The float value that the maximum quantized output value represents.
out_type: A tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. The type of the output. Should be a lower bit depth than Tinput.
name: A name for the operation (optional).
Returns
A tuple of Tensor objects (output, output_min, output_max).
output: A Tensor of type out_type.
output_min: A Tensor of type float32.
output_max: A Tensor of type float32.
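A minimal usage sketch follows. The values and ranges are illustrative only; quantizing to qint32 with tf.quantization.quantize and then requantizing down to quint8 is assumed here as a stand-in for the typical pattern of re-scaling a 32-bit accumulator to 8 bits.

```python
import tensorflow as tf

# Quantize a float tensor into 32 bits over the range [-1.0, 1.0],
# then requantize it down to 8 bits over the same requested range.
x = tf.constant([-0.8, 0.0, 0.5, 0.8], dtype=tf.float32)
q_out, q_min, q_max = tf.quantization.quantize(
    x, min_range=-1.0, max_range=1.0, T=tf.qint32)

output, output_min, output_max = tf.raw_ops.Requantize(
    input=q_out,
    input_min=q_min,
    input_max=q_max,
    requested_output_min=-1.0,
    requested_output_max=1.0,
    out_type=tf.quint8,
)
# output is a quint8 tensor; output_min/output_max give the float range
# that its minimum and maximum quantized values represent.
```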

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/raw_ops/Requantize