This module implements quantized versions of fused operations such as conv + relu.
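For context, these fused quantized modules are usually not built by hand; they are produced by the eager-mode quantization workflow, where a float Conv/ReLU pair is first fused and then converted. The sketch below is illustrative only — the model layout, layer indices, shapes, and calibration input are arbitrary assumptions, not part of this module's API:

import torch
import torch.nn as nn

# Illustrative float model: QuantStub -> Conv2d -> ReLU -> DeQuantStub.
model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),
).eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# Fusing Conv2d + ReLU yields nn.intrinsic.ConvReLU2d; convert() then swaps it
# for the quantized nn.intrinsic.quantized.ConvReLU2d documented below.
fused = torch.quantization.fuse_modules(model, [['1', '2']])
prepared = torch.quantization.prepare(fused)
prepared(torch.randn(1, 3, 32, 32))        # calibration pass with dummy data
quantized = torch.quantization.convert(prepared)
print(type(quantized[1]))                  # the fused quantized ConvReLU2d module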
class torch.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU2d module is a fused module of Conv2d and ReLU.
We adopt the same interface as torch.nn.quantized.Conv2d.
Attributes: Same as torch.nn.quantized.Conv2d
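The fused module can also be exercised directly through the Conv2d-style interface noted above. The following is a minimal sketch, assuming direct construction; the scale/zero_point values and tensor shapes are arbitrary illustrations:

import torch
import torch.nn.intrinsic.quantized as nniq

m = nniq.ConvReLU2d(3, 16, kernel_size=3, stride=1, padding=1)
x = torch.randn(1, 3, 32, 32)
# The quantized module expects a quantized input tensor.
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = m(qx)            # ReLU is applied inside the fused quantized conv kernel
print(out.size())      # torch.Size([1, 16, 32, 32])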
class torch.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros') [source]
A ConvReLU3d module is a fused module of Conv3d and ReLU.
We adopt the same interface as torch.nn.quantized.Conv3d.
Attributes: Same as torch.nn.quantized.Conv3d
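A corresponding sketch for the 3D variant, assuming the active quantized engine supports 3D convolution (e.g. fbgemm); the shapes and quantization parameters below are arbitrary illustrations:

import torch
import torch.nn.intrinsic.quantized as nniq

m = nniq.ConvReLU3d(2, 4, kernel_size=3)
x = torch.randn(1, 2, 8, 8, 8)                    # (N, C, D, H, W)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
out = m(qx)
print(out.size())                                 # torch.Size([1, 4, 6, 6, 6])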
class torch.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8) [source]
A LinearReLU module fused from Linear and ReLU modules.
We adopt the same interface as torch.nn.quantized.Linear.
Attributes: Same as torch.nn.quantized.Linear
Examples:
>>> m = nn.intrinsic.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
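Note that the example above constructs the float fused module nn.intrinsic.LinearReLU; the quantized variant documented here expects a quantized input. A minimal sketch, assuming direct construction with arbitrary quantization parameters:

import torch
import torch.nn.intrinsic.quantized as nniq

m = nniq.LinearReLU(20, 30)
x = torch.randn(128, 20)
# Quantize the float input before feeding it to the quantized fused module.
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=64, dtype=torch.quint8)
out = m(qx)
print(out.size())      # torch.Size([128, 30])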