Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
`tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient(gradients, inputs, min, max, num_bits=8, narrow_range=False, name=None)`
Args | |
---|---|
`gradients` | A `Tensor` of type `float32`. Backpropagated gradients above the `FakeQuantWithMinMaxVarsPerChannel` operation; shape one of `[d]`, `[b, d]`, or `[b, h, w, d]`. |
`inputs` | A `Tensor` of type `float32`. Values passed as `inputs` to the `FakeQuantWithMinMaxVarsPerChannel` operation; same shape as `gradients`. |
`min` | A `Tensor` of type `float32`. Lower bound of the per-channel quantization interval; shape `[d]`. |
`max` | A `Tensor` of type `float32`. Upper bound of the per-channel quantization interval; shape `[d]`. |
`num_bits` | An optional `int`. Defaults to `8`. The bitwidth of the quantization; between 2 and 16, inclusive. |
`narrow_range` | An optional `bool`. Defaults to `False`. Whether to quantize into `2^num_bits - 1` distinct values instead of `2^num_bits`. |
`name` | A name for the operation (optional). |
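For a concrete picture of the shapes described above, here is a minimal sketch using the corresponding forward op, `tf.quantization.fake_quant_with_min_max_vars_per_channel`. The tensor values and the names `inputs`, `min_vars`, and `max_vars` are illustrative assumptions, not part of the API.

```python
import tensorflow as tf

# Illustrative shapes: a batch of 2 rows with d = 3 channels in the last dimension.
inputs = tf.constant([[0.1, 0.6, 1.2],
                      [0.4, -0.3, 0.9]], dtype=tf.float32)   # shape [b, d] = [2, 3]
min_vars = tf.constant([0.0, -0.5, 0.0], dtype=tf.float32)   # shape [d] = [3]
max_vars = tf.constant([1.0, 1.0, 1.5], dtype=tf.float32)    # shape [d] = [3]

# Forward op: each channel (last dimension) is fake-quantized with its own [min, max] interval.
quantized = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    inputs, min_vars, max_vars, num_bits=8, narrow_range=False)
print(quantized.shape)  # (2, 3) -- same shape as inputs
```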
Returns | |
---|---|
A tuple of `Tensor` objects `(backprops_wrt_input, backprop_wrt_min, backprop_wrt_max)`. | |
`backprops_wrt_input` | A `Tensor` of type `float32`. Backpropagated gradients with respect to `inputs`; same shape as `inputs`. |
`backprop_wrt_min` | A `Tensor` of type `float32`. Backpropagated gradients with respect to the `min` parameter; shape `[d]`. |
`backprop_wrt_max` | A `Tensor` of type `float32`. Backpropagated gradients with respect to the `max` parameter; shape `[d]`. |
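The following is a minimal usage sketch of the gradient op itself, under the assumption that the upstream gradients are all ones; `upstream_grads` and the other names are stand-ins chosen for illustration, and in practice the upstream gradients come from whatever follows the fake-quant op in the backward pass.

```python
import tensorflow as tf

inputs = tf.constant([[0.1, 0.6, 1.2],
                      [0.4, -0.3, 0.9]], dtype=tf.float32)   # shape [b, d] = [2, 3]
min_vars = tf.constant([0.0, -0.5, 0.0], dtype=tf.float32)   # shape [d] = [3]
max_vars = tf.constant([1.0, 1.0, 1.5], dtype=tf.float32)    # shape [d] = [3]

# Stand-in for the gradients flowing back from later layers; same shape as `inputs`.
upstream_grads = tf.ones_like(inputs)

grad_inputs, grad_min, grad_max = (
    tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient(
        gradients=upstream_grads,
        inputs=inputs,
        min=min_vars,
        max=max_vars,
        num_bits=8,
        narrow_range=False))

print(grad_inputs.shape)  # (2, 3): backprops_wrt_input, same shape as inputs
print(grad_min.shape)     # (3,):   backprop_wrt_min, one value per channel
print(grad_max.shape)     # (3,):   backprop_wrt_max, one value per channel
```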
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/quantization/fake_quant_with_min_max_vars_per_channel_gradient