
tf.quantization.fake_quant_with_min_max_vars_per_channel

Fake-quantize the 'inputs' tensor of type float, which has one of the shapes [d], [b, d], or [b, h, w, d], via per-channel floats min and max of shape [d], producing an 'outputs' tensor of the same shape as inputs.
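For illustration, a minimal TF 1.15 usage sketch with hypothetical values: a [b, d] input with d = 3 channels and per-channel min/max of shape [d].

```python
import tensorflow as tf  # TensorFlow 1.15

# Hypothetical example values: a [b, d] batch with d = 3 channels.
inputs = tf.constant([[-1.2, 0.3, 4.7],
                      [0.8, 2.5, 6.1]], dtype=tf.float32)
min_vals = tf.constant([-1.0, 0.0, 0.0], dtype=tf.float32)  # per-channel min, shape [d]
max_vals = tf.constant([1.0, 3.0, 6.0], dtype=tf.float32)   # per-channel max, shape [d]

outputs = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs, min_vals, max_vals, num_bits=8, narrow_range=False)

with tf.Session() as sess:
    # Same shape as inputs; each column is snapped to its channel's 8-bit grid.
    print(sess.run(outputs))
```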

[min; max] define the clamping range for the inputs data. inputs values are quantized into the quantization range ([0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true), then de-quantized and output as floats in the [min; max] interval. num_bits is the bit width of the quantization; it must be between 2 and 16, inclusive.
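For intuition, here is a minimal NumPy sketch of that clamp/quantize/de-quantize round trip for a single channel. It is not the op itself: it assumes min and max already satisfy the adjustment rules described next, and fake_quant_reference is a hypothetical helper name.

```python
import numpy as np

def fake_quant_reference(x, min_val, max_val, num_bits=8, narrow_range=False):
    """Reference round trip for one channel: clamp, quantize, de-quantize."""
    quant_min = 1 if narrow_range else 0
    quant_max = 2 ** num_bits - 1
    scale = (max_val - min_val) / (quant_max - quant_min)
    x = np.clip(x, min_val, max_val)                 # clamp to [min; max]
    q = np.round((x - min_val) / scale) + quant_min  # integer code in [quant_min; quant_max]
    return (q - quant_min) * scale + min_val         # back to a float in [min; max]

print(fake_quant_reference(np.array([-0.7, 0.1, 0.9]), -1.0, 1.0))
```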

Before quantization, min and max values are adjusted with the following logic. It is suggested to have min <= 0 <= max. If 0 is not in the range of values, the behavior can be unexpected:

If 0 < min < max: min_adj = 0 and max_adj = max - min.
If min < max < 0: min_adj = min - max and max_adj = 0.
If min <= 0 <= max: scale = (max - min) / (2^num_bits - 1), min_adj = scale * round(min / scale) and max_adj = max + min_adj - min.
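A sketch of that adjustment in plain Python, per channel (adjust_min_max is a hypothetical name, and TF's internal rounding mode may differ slightly from Python's round):

```python
def adjust_min_max(min_val, max_val, num_bits=8):
    """Nudge [min; max] so that 0.0 is exactly representable, per the rules above."""
    if 0 < min_val < max_val:      # range entirely positive: shift it to start at 0
        return 0.0, max_val - min_val
    if min_val < max_val < 0:      # range entirely negative: shift it to end at 0
        return min_val - max_val, 0.0
    # 0 already inside the range: snap min onto the quantization grid
    scale = (max_val - min_val) / (2 ** num_bits - 1)
    min_adj = scale * round(min_val / scale)
    return min_adj, max_val + min_adj - min_val

print(adjust_min_max(0.3, 1.3))   # -> (0.0, 1.0)
print(adjust_min_max(-1.0, 1.0))  # 0 inside range: min snapped to the grid
```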

This operation has a gradient and thus allows for training min and max values.
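For example, min and max can be held in tf.Variables and gradients taken through the op. A rough TF 1.15 sketch with assumed variable names and a toy objective:

```python
import tensorflow as tf  # TensorFlow 1.15

# Hypothetical setup: trainable per-channel min/max for d = 2 channels.
inputs = tf.random.uniform([4, 2], -3.0, 3.0)
min_var = tf.Variable([-1.0, -1.0])
max_var = tf.Variable([1.0, 1.0])

outputs = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs, min_var, max_var, num_bits=8)

loss = tf.reduce_mean(tf.square(outputs - inputs))  # toy quantization-error objective
grads = tf.gradients(loss, [min_var, max_var])      # gradients flow into min and max

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))
```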

Args:
inputs: A Tensor of type float32; values to quantize, of shape [d], [b, d], or [b, h, w, d].
min: A Tensor of type float32; per-channel minimums of shape [d].
max: A Tensor of type float32; per-channel maximums of shape [d].
num_bits: An optional int. Defaults to 8.
narrow_range: An optional bool. Defaults to False.
name: A name for the operation (optional).

Returns:
A Tensor of type float32, of the same shape as inputs.

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel